Tree-of-Thought

In this step, you’ll learn how to add a quick compare-and-choose phase to your prompts. Instead of the model running with the first idea, you’ll have it propose a couple of distinct options, score them with a tiny rubric, make a visible decision, and then draft using that choice.

You’ll leave knowing exactly when to use this technique, what to ask for, and how to keep it lightweight so you get better structure without extra fluff.

Problem

Your current prompt jumps straight to a solution, but are you sure it’s the best one? More likely than not, there are multiple ways to structure the output, each with its own trade-offs.

Your task is to modify your prompt so the AI explores 2–3 distinct output options, scores them against a simple rubric, and clearly chooses the best one before proceeding. This helps the AI reason more deliberately and produce stronger, more intentional results.

What you need to know

Tree-of-Thought (ToT) adds a quick compare-and-choose moment so the model doesn’t lock into the first idea it generates. The value comes from contrast: by asking for genuinely different options (anchored in distinct organizing principles) and scoring them with a tiny rubric, the model is forced to reason concretely about trade-offs rather than vibe its way forward.

Keep this lightweight: two or three options, a strict word cap, and a one-line decision (or a short hybrid if it’s clearly superior). The decision should be visible, and the final draft should follow the chosen outline exactly; otherwise, the evaluation step was wasted.

ToT works best when you also give enough context—audience, tone, and format—so the options are comparable and the selection feels justified.

Example

You are a project lead.

First, generate two distinct outline options for the kickoff email:

  • Option A: outcome-first (goals → timeline → roles → next steps)

  • Option B: context-first (problem → approach → risks → timeline → asks)

Evaluate each on a 1–5 scale for clarity, completeness, stakeholder relevance, and urgency.
Explain each score in one short sentence. Keep options + evaluation ≤120 words total.

Decision:
Choose the higher-scoring option; if tied, pick the clearer one for non-technical stakeholders.
State the decision in one line (e.g., “Choose B for clearer stakeholder asks.”). If a simple hybrid is clearly better, say so and use it.

How to do it (pattern)

  1. Ask for 2–3 distinct options that differ in organizing principle (not rephrases).

  2. Provide a tiny rubric (e.g., clarity, completeness, relevance, effort; 1–5 scores).

  3. Require a Decision line (and allow a short hybrid if obviously better).

  4. Draft the final output using the selected option exactly.

  5. Budget the exploration (e.g., ≤120 words for options + scoring) to keep it crisp.
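The steps above can be sketched in code. This is a minimal, illustrative Python sketch of the decision logic in steps 2–3: the rubric scores are hard-coded stand-ins for the model’s self-evaluation, and the option names and criteria are assumptions, not a real API.

```python
# Sketch of the ToT decision rule: total each option's rubric scores,
# pick the highest total, and break ties with a stated preference.
# The scores below are illustrative stand-ins for model-produced ratings.

RUBRIC = ["clarity", "completeness", "relevance", "effort"]

options = {
    "A (outcome-first)": {"clarity": 4, "completeness": 4, "relevance": 3, "effort": 5},
    "B (context-first)": {"clarity": 5, "completeness": 4, "relevance": 4, "effort": 3},
}

def decide(options, tie_breaker="clarity"):
    totals = {name: sum(scores[c] for c in RUBRIC) for name, scores in options.items()}
    best = max(totals.values())
    tied = [name for name, total in totals.items() if total == best]
    if len(tied) > 1:
        # Tie-break on the criterion the prompt names (e.g., clarity
        # for non-technical stakeholders).
        tied.sort(key=lambda name: options[name][tie_breaker], reverse=True)
    return tied[0], totals

choice, totals = decide(options)
print(f"Decision: choose {choice} (totals: {totals})")
```

Note that both options total 16 here, so the tie-breaker decides: option B wins on clarity. That is exactly the behavior step 3’s decision rule asks the model to make explicit.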

When to use it

  • When there are several reasonable ways to organize or solve the task

    • e.g., an email outline, documentation structure, test strategy, or migration plan

  • When you want a quick justification for the chosen path without a long essay.

ToT is useful beyond structuring outputs: it can also help the model decide what to do next. For example, if you ask the AI to solve a difficult math problem, it can try a few solution approaches in parallel and pick the most promising one. For larger problems this can be repeated, applying ToT at multiple stages of the AI’s output.
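One hypothetical way to picture this multi-stage use of ToT is a small beam search: at each stage, expand a few candidate “thoughts,” score them, and keep only the best for further expansion. In the sketch below, `expand` and `score` are deterministic stand-ins; in a real system each would be a model call.

```python
# Minimal beam-search skeleton for multi-stage ToT. In practice `expand`
# proposes continuations via the model and `score` asks the model (or a
# rubric) to rate them; here both are deterministic stubs so the control
# flow is visible.

def expand(thought):
    # Propose two continuations of a partial solution (stub).
    return [thought + ".a", thought + ".b"]

def score(thought):
    # Rate a partial solution; this stub favors paths containing "b".
    return thought.count("b") + len(thought) * 0.01

def tree_of_thought(root, depth=3, beam_width=2):
    frontier = [root]
    for _ in range(depth):
        candidates = [t for thought in frontier for t in expand(thought)]
        # Keep only the most promising candidates for the next stage.
        frontier = sorted(candidates, key=score, reverse=True)[:beam_width]
    return frontier[0]

best = tree_of_thought("start")
print(best)
```

The key idea is that weak branches are pruned at every stage rather than only once at the end, which is what distinguishes multi-stage ToT from a single compare-and-choose pass.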

Practical add-ons you can embed

  1. Make the alternatives diverse (not variations of the same idea)

Generate at least two distinct outline options that differ in organizing principle
(e.g., architecture-first vs workflow-first). Avoid trivial rewordings.

  2. Define evaluation criteria up front

Score each option on a 1–5 scale for: clarity, completeness, onboarding speed, maintainability.
Explain scores in one sentence each.

  3. Use a decision rule

Choose the option with the highest total score; if tied, prefer the clearer option for new contributors.
State the rule you used before proceeding.

  4. Budget the exploration

Limit option generation + evaluation to 120 words total. Focus on signal, not prose.

  5. Keep the decision visible

Show a short “Decision” line (Option B selected, reasons). Then draft the document using the chosen outline.

  6. Encourage hybridization when warranted

If one option excels in clarity and another in completeness, propose a short hybrid that combines their strengths.
If the hybrid is clearly better, choose it and proceed.

  7. Constrain by audience/goal

Weight clarity and onboarding speed higher for beginner audiences; weight completeness and maintainability higher for internal experts.
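A hedged sketch of how such audience weighting might work as a weighted rubric total; the weights, criteria, and scores below are illustrative assumptions, chosen to show that the same scores can yield different decisions for different audiences.

```python
# Weighted rubric: identical option scores, different audience weights,
# different decisions (all values illustrative).

scores = {
    "architecture-first": {"clarity": 3, "completeness": 5, "onboarding": 2, "maintainability": 5},
    "workflow-first":     {"clarity": 5, "completeness": 3, "onboarding": 5, "maintainability": 3},
}

# Beginners: weight clarity and onboarding speed higher.
BEGINNER = {"clarity": 2.0, "completeness": 1.0, "onboarding": 2.0, "maintainability": 1.0}
# Internal experts: weight completeness and maintainability higher.
EXPERT = {"clarity": 1.0, "completeness": 2.0, "onboarding": 1.0, "maintainability": 2.0}

def pick(weights):
    def total(option):
        return sum(weights[c] * scores[option][c] for c in weights)
    return max(scores, key=total)

print(pick(BEGINNER))  # favors the clearer, faster-onboarding outline
print(pick(EXPERT))    # favors the more complete, maintainable outline
```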

Technical Requirements

✏️ Complete the following steps to add Tree-of-Thought to your prompt.

  1. Find the decision point
    Identify where multiple reasonable approaches exist (e.g., outline structure, test strategy, migration plan). That’s where ToT belongs.

  2. Ask for distinct options (not rephrases)
    Add one line that requests 2–3 clearly different options anchored in different organizing principles.

    • Example: “Generate two outline options: A) architecture-first, B) developer-workflow-first.”

  3. Add a tiny rubric
    Tell the model how to compare options with 3–4 criteria (1–5 scale). Keep it concrete.

    • Example: “Score each option on clarity, completeness, audience fit, effort (1–5); one short sentence per score.”

  4. Require a visible decision
    Add a “Decision:” line so the chosen path is explicit (allow a short hybrid if obviously better).

    • Example: “Decision: Choose the highest total; if tied, pick the clearer option for new contributors. State the decision in one line.”

  5. (Optional) Impose a hard budget
    Prevent rambling so ToT stays lightweight.

    • Example: “Keep options + scoring ≤120 words total.”
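The five steps above can be folded into one reusable scaffold. Here is a minimal sketch; the template wording and placeholder names (`principles`, `tie_breaker`, etc.) are assumptions you would adapt to your own task.

```python
# Hypothetical ToT scaffold assembling the five steps into one prompt.
TOT_SCAFFOLD = """\
Generate {n} distinct outline options that differ in organizing principle: {principles}.
Score each option on {criteria} (1-5), one short sentence per score.
Decision: choose the highest total; if tied, {tie_breaker}. State the decision in one line.
Keep options + scoring under {budget} words, then draft using the chosen outline exactly.
"""

prompt = TOT_SCAFFOLD.format(
    n=2,
    principles="A) architecture-first, B) developer-workflow-first",
    criteria="clarity, completeness, audience fit, effort",
    tie_breaker="pick the clearer option for new contributors",
    budget=120,
)
print(prompt)
```

Parameterizing the scaffold this way also keeps it reusable across tasks by swapping in different options, criteria, and budgets.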

Solution

Building on the previous version of our prompt for generating an instructions.md file:

You are a senior full-stack engineer documenting a codebase for a new AI teammate.
Generate an instructions.md file that explains the purpose, architecture, dependencies, and coding conventions of the project.

Use the example outlines below to help shape your output. The final document should be detail-rich and include a variety of sections:

Good Example Outline:

  • Overview: Brief but informative project description

  • Architecture: Explanation of major components (frontend/backend, APIs, database)

  • Dependencies: Key frameworks, libraries, versions

  • Conventions: Folder structure, naming patterns, style guides

  • Setup Instructions (if relevant): Install, run, test steps

  • Common Pitfalls: Gotchas or non-obvious behaviors

Bad Example Outline:

  • Overview: "It’s a web app"

  • No architecture section

  • Lists dependencies with no context

  • Doesn’t explain folder structure or coding practices

  • No instructions or assumptions listed

---

Think step-by-step before writing the file:

  1. Identify key folders and their purpose.

  2. Determine main dependencies and frameworks.

  3. Describe how the components fit together.

  4. Write the final instructions.md file in Markdown format.

First, propose two possible outline structures for the instructions.md file — one organized by architecture, another by developer workflows.
Evaluate which outline will be clearer and more useful, then write the file using that structure.

Output Description:
The AI now generates two outline ideas, explains why one is better (“The workflow-based outline is easier for onboarding”), and then writes the final documentation.
The resulting file feels more deliberate, as if it were reviewed before being written.

What Changed

We now have:

  • Persona

  • Examples

  • Step-by-step reasoning

  • Structured self-evaluation

Our prompt is becoming a framework for thinking and choosing, not just generating.

Next Steps

Your prompt now asks the AI to try out multiple solutions before choosing what to do!

Continue to: Parameters

In the next step, you’ll make your prompt more modular and reusable by adding in parameters.

Sources

  • Tree of Thoughts: Deliberate Problem Solving with Large Language Models — Yao et al., 2023. The original ToT paper: generalizes Chain-of-Thought to explore and evaluate multiple reasoning paths (“thoughts”) before committing. arXiv.

  • Large Language Model Guided Tree-of-Thought — Long, 2023. Parallel introduction to the ToT framework and search over intermediate “thoughts.” arXiv.

  • Prompt Engineering Guide: Tree of Thoughts (ToT) — Practitioner-oriented summary with examples and references to Yao et al. and Long.

  • Tree of Thought (ToT) Prompting — Overview — GeeksforGeeks explainer for developers; concise description and use cases (helpful as an accessible secondary reference).

  • “Can GitHub issues be solved with Tree of Thoughts?” — 2024 study applying ToT to software tasks; compares ToT against IO prompting, CoT, and Self-Consistency on reasoning problems. arXiv.