Chain-of-Thought

This step is about adding a thinking/planning phase to your prompts. You’ll learn how to get the model to slow down, outline its approach, and surface assumptions before it writes the final answer. By the end, you’ll know exactly how to ask for a brief plan, when to require it, and when to skip or shrink it.

The goal: fewer skipped steps, fewer shaky claims, and outputs you can trust because you saw the model thinking.

Problem

Right now, your prompt tells the AI what to do but not how to think through it. The model jumps straight to the answer, potentially skipping steps or missing key details.

Your task is to add a short reasoning phase to your prompt — a quick plan or thought process — before the AI produces its final output. This will help it slow down, reason through the task, and make its logic visible so you can spot mistakes early.

What you need to know

Chain-of-thought (CoT) is like asking the model to use scratch paper before it answers. Most models try to “jump to the end” because they’re trained to autocomplete. CoT tells them to lay out a short plan first—identify what matters, decide on an approach, and only then produce the final result. Think of it as adding a lightweight planning/thinking phase in front of the answer.

Why this helps:

  • Completeness: a quick plan reduces skipped steps and missing sections.

  • Accuracy: the model must justify key inferences, which discourages hand-waving.

  • Transparency: you can see the plan, spot bad assumptions, and correct course early.

  • Teachability: the plan becomes a small rubric you can refine (“add edge cases,” “cite sources”).

The simplest form of CoT is to ask the model to “think through what you’re going to do before you start working.” This forces it to slow down and be more explicit about its process.
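
If you’re calling the model from code rather than a chat UI, this nudge is just one extra sentence in the prompt. Below is a minimal Python sketch using the OpenAI SDK; the model name and task text are illustrative placeholders rather than part of this lesson’s example.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative task; substitute your own.
task = "Summarize the trade-offs of moving our session store from Redis to Postgres."

# The only change from a plain prompt: one sentence asking the model to plan first.
prompt = (
    "Before you start working, think about what you're going to do "
    "and write a short plan. Then complete the task.\n\n"
    f"Task: {task}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)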

Example

In the prompt below, we ask the AI to perform a code review and use CoT to define the steps it should work through first.

You are a senior reviewer writing a review for a new feature that’s been added to the codebase.

First, write a brief plan of action (3–5 bullets, ≤60 words; don’t include this in the final output), highlighting:

  • What changed

  • Biggest risks

  • Tests/docs impacted

  • How you’ll prioritize must-fix vs should-fix

Then write the review.
Be specific; cite files/lines when helpful.

In this example, CoT is used before the review to build up meaningful context about the codebase and outline a plan of action. With that context, the AI can review the code in a more structured and accurate way.

We’re also asking the AI to cite files/lines, which adds another layer of “thinking” to the process.

There is an art to writing good CoT instructions, but in general they should be focused, concise, and designed to help the AI reason about the problem at hand.

Practical add-ons you can embed

All of the techniques mentioned below can increase the quality and accuracy of your output. They’re especially useful for avoiding hallucinations, where the model outputs something that seems right, but isn’t actually correct.

  1. Plan → Draft separation
    Ask for a named “Plan” section first, then the “Draft” artifact. Keep the plan tight.

Before drafting, produce a brief “Plan” (3–6 bullets) summarizing the repo’s structure,
dependencies, workflows, and the chosen documentation outline. After the plan, produce the final instructions.md.
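
If you’re scripting this, one way to make the separation stick (and keep the plan out of the final artifact) is to split it into two calls: request the Plan first, then feed that plan back when requesting the Draft. A minimal Python sketch, assuming the OpenAI SDK, an illustrative model name, and a placeholder for your repo context:

from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # illustrative; substitute your model
repo_context = "<paste a repo tree or key file excerpts here>"  # placeholder

# Call 1: ask only for the Plan.
plan = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": (
        "Before drafting, produce a brief 'Plan' (3-6 bullets) summarizing the repo's "
        "structure, dependencies, workflows, and the chosen documentation outline.\n\n"
        f"Repo context:\n{repo_context}"
    )}],
).choices[0].message.content

# Call 2: feed the plan back and ask only for the final artifact.
draft = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": (
        "Using the plan below, write the final instructions.md in Markdown. "
        "Do not repeat the plan in the output.\n\n"
        f"Plan:\n{plan}\n\nRepo context:\n{repo_context}"
    )}],
).choices[0].message.content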

  2. Assumptions & unknowns
    Force the model to surface what it’s guessing vs what it knows.

List explicit assumptions or unknowns (if any). If a detail is unclear, state how you’re inferring it.

  3. Sanity checks / verification
    Add a quick self-check against the task requirements.

After drafting, verify that all required sections are present and consistent (overview, architecture,
dependencies, workflows, conventions, testing, setup, notes). If any is missing or thin, revise once.
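
Scripted, this can be a separate, bounded second pass: send the draft back with the checklist and allow at most one revision. A sketch under the same assumptions (OpenAI SDK, illustrative model name, placeholder draft):

from openai import OpenAI

client = OpenAI()

REQUIRED = ["overview", "architecture", "dependencies", "workflows",
            "conventions", "testing", "setup", "notes"]
draft = "<the instructions.md text produced by an earlier call>"  # placeholder

# One verification pass: check the required sections, revise at most once.
checked = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": (
        "Verify that the document below contains all of these sections and that they are "
        f"consistent: {', '.join(REQUIRED)}. If any is missing or thin, return a revised "
        "document; otherwise return it unchanged.\n\n" + draft
    )}],
).choices[0].message.content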

  4. Evidence-directed reasoning
    Anchor reasoning to concrete artifacts (paths, filenames, config keys).

When possible, cite the file/folder that supports each inference (e.g., /api/, next.config.js, package.json).

  5. Bounded reasoning
    Avoid runaway verbosity by capping reasoning length.

Keep your “Plan” under 120 words. Prioritize correctness and coverage over prose.
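
A word cap in the prompt is only a soft limit. If the plan runs as its own call (as in add-on 1), you can also back it with a hard token cap. A small Python sketch; the 200-token budget is an assumed rough ceiling for a ~120-word plan:

from openai import OpenAI

client = OpenAI()

plan_prompt = (
    "Produce a brief 'Plan' (3-6 bullets, under 120 words) for documenting this repo. "
    "Prioritize correctness and coverage over prose."
)

plan = client.chat.completions.create(
    model="gpt-4o",   # illustrative model name
    max_tokens=200,   # hard backstop for the prompt-level word cap (assumed budget)
    messages=[{"role": "user", "content": plan_prompt}],
).choices[0].message.content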

When to dial CoT up or down

  • Dial up when tasks require inference, multi-step structure, or risk trade-offs (e.g., repo analysis, migration planning).

  • Dial down when the task is straightforward and you want lean output (e.g., formatting a known schema). Replace full CoT with a mini-plan or just a checklist.

Technical Requirements

✏️ Complete the following steps to add Chain-of-Thought to your prompt:

  1. Ask the model to “think” about what it’s doing

    1. This is the simplest form of CoT: you just ask the model to “think” without going into detail.

      1. “… before you start working, think about what you’re going to do…”

    2. Observe what it thinks about without explicit direction and note what you like/dislike.

  2. Develop a step-by-step plan

    1. Come up with a series of steps the AI should take to think and reason about the output of your prompt.

    2. Imagine what you would do in the process of solving the problem and apply that to things the AI can do (think about x, analyze y, draft a simple plan about z, etc.).

  3. (Optional) Apply an advanced technique

    1. Apply one of the techniques mentioned in the “Practical add-ons” section above.

Solution

Continuing with our instructions.md example, we keep the persona and few-shot examples intact, then add reasoning steps:

You are a senior full-stack engineer documenting a codebase for a new AI teammate.
Generate an instructions.md file that explains the purpose, architecture, dependencies, and coding conventions of the project.

Use the example outlines below to help shape your output. The final document should be detail-rich and include a variety of sections:

Good Example Outline:

  • Overview: Brief but informative project description

  • Architecture: Explanation of major components (frontend/backend, APIs, database)

  • Dependencies: Key frameworks, libraries, versions

  • Conventions: Folder structure, naming patterns, style guides

  • Setup Instructions (if relevant): Install, run, test steps

  • Common Pitfalls: Gotchas or non-obvious behaviors

Bad Example Outline:

  • Overview: "It’s a web app"

  • No architecture section

  • Lists dependencies with no context

  • Doesn’t explain folder structure or coding practices

  • No instructions or assumptions listed

---

Think step-by-step before writing the file:

  1. Identify key folders and their purpose.

  2. Determine main dependencies and frameworks.

  3. Describe how the components fit together.

  4. Write the final instructions.md file in Markdown format.

Output Description:
Before writing, the AI now lists out what it sees — “Found folders: api, components, lib…” — then proceeds to write a complete, structured documentation file.
It’s visibly reasoning through the task.
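
If you want to run this prompt against a real repository from a script, a minimal sketch might look like the following. The file paths, model name, and the choice of git ls-files for repo context are illustrative assumptions, not part of the lesson:

import subprocess

from openai import OpenAI

client = OpenAI()

# The full prompt above, saved to a file so it can be versioned with the repo (assumed path).
prompt = open("prompts/instructions_prompt.txt").read()

# Give the model something concrete to reason over; a file listing is one simple option.
repo_tree = subprocess.run(["git", "ls-files"], capture_output=True, text=True).stdout

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": f"{prompt}\n\nRepository files:\n{repo_tree}"}],
)

with open("instructions.md", "w") as f:
    f.write(response.choices[0].message.content)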

What Changed

The AI’s output becomes both more accurate and more transparent.
If it misunderstands something, you can see where its reasoning went wrong.

We now have:

  • Persona

  • Few-shot examples

  • Step-by-step reasoning

Next Steps

Your prompt now asks the AI to think through what it’s doing and generate relevant context before it outputs a result!

Continue to: Tree-of-Thought

In the next step, you’ll modify your prompt to ask the AI to try out multiple solutions before deciding what to do.

Sources

  • Wei et al. (2022) — “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.”
    The seminal CoT paper: shows that providing step-by-step exemplars greatly boosts reasoning on arithmetic, commonsense, and symbolic tasks; includes headline GSM8K results with PaLM 540B.

  • Kojima et al. (2022) — “Large Language Models are Zero-Shot Reasoners.”
    Introduces zero-shot CoT (“Let’s think step by step”), demonstrating strong gains without few-shot exemplars; a NeurIPS version is available.

  • Zhou et al. (2022) — “Least-to-Most Prompting Enables Complex Reasoning in Large Language Models.”
    A related decomposition strategy (solve easier subproblems first), often taught alongside CoT to explain planning and stepwise reasoning.

  • Lecture/overview slide deck on CoT (University of Toronto).
    Short academic overview summarizing CoT’s decomposition, interpretability, and applicability across domains; useful as a concise explainer.

  • Accessible industry write-up summarizing CoT (DZone).
    Practitioner-oriented explainer emphasizing the transparency and debugging benefits of CoT in real workflows.