Checklists
In this step, you’ll learn how to keep the AI on track with a checklist. The model will track its work as it goes, deriving a short to-do list from the task, keeping it visible at the top, and checking items off while it executes. You’ll learn when to let the AI generate the checklist vs. supplying a fixed one, and how to keep it brief, clear, and tied to the final artifact.
Problem
Your current prompt asks for multiple things, but the model may not be reliably tracking its own progress. Some parts might get skipped, others repeated, and if the output is interrupted or incomplete, you’re stuck piecing it together manually.
Your task is to revise your prompt to include a live checklist: either by asking the model to generate one from its plan or by supplying a fixed list of steps. The checklist should stay visible, update as items are completed, and make progress easy to follow.
What you need to know
Checklists let the AI agent track its own work while it executes your request. Instead of only reasoning about the problem, the AI also derives a short task list, keeps it visible, and marks items complete as it proceeds. You can supply a fixed (static) list, or instruct the AI to generate the checklist itself from the prompt’s context (recommended, because it adapts as the prompt evolves).
A checklist turns the model’s plan into a small, visible execution tracker. It:
Keeps the AI on track. A checklist is the model’s running to-do list. It shows what’s happening now and what’s next—great for long, multi-stage prompts or when the model is performing actions (writing files, calling tools, using MCP servers).
Prevents forgetting. With the list in front of it, the model doesn’t lose the plot. It can see at a glance what’s done and what’s left, so important sections and edge cases don’t quietly vanish.
Improves transparency. You (and teammates) can see the process, not just the final artifact. The checklist makes progress and decisions visible, which makes reviews faster and more objective.
Localizes failures. If something’s off, the checklist points to the exact step that went sideways (e.g., “Testing section incomplete”). This makes it easy to fix errors in the prompt or the data the AI is operating on.
Supports resuming. If the session times out or context resets, the checklist acts like a bookmark. The model can pick up right where it left off.
Pairs with other techniques. Use Chain-of-Thought to plan, Tree-of-Thought to choose an approach, then convert that plan into a live checklist; the model updates as it executes.
Prefer an auto-generated checklist: In many cases, modern AIs will automatically create a checklist on their own and update the status as they proceed. Because the list is derived from the prompt, it naturally adapts as your prompt evolves.
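The "bookmark" and "localized failure" benefits above can be made concrete in code. Here is a minimal sketch (a hypothetical helper, assuming the model emits the `[ ]` / `[✓]` markers used throughout this step) that reads a checklist out of a transcript and reports what is done and where to resume:

```python
import re

# Matches checklist lines such as "[ ] Scan for risks" or "[✓] Summarize the changes".
ITEM_RE = re.compile(r"^\[( |✓)\]\s*(.+)$", re.MULTILINE)

def checklist_state(transcript: str) -> dict:
    """Split a checklist into done/pending items and pick the resume point."""
    done, pending = [], []
    for mark, task in ITEM_RE.findall(transcript):
        (done if mark == "✓" else pending).append(task.strip())
    return {
        "done": done,
        "pending": pending,
        # Bookmark: the first unchecked item is where the model should pick up.
        "resume_at": pending[0] if pending else None,
    }

transcript = """\
[✓] Summarize the changes
[✓] Scan for risks
[ ] List must-fix issues
[ ] Suggest improvements
"""
state = checklist_state(transcript)
```

If the session is interrupted, `resume_at` names the exact step to hand back to the model, and `pending` tells you which sections of the artifact to double-check.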
Example
In many cases your AI agent will generate a checklist automatically.
If the AI isn’t doing this on its own, you can include a note in the prompt asking it to:
… Please use a checklist to track what you’re doing and what’s left, and update it as you work.
You can also provide a static checklist when you need strict steps or compliance. In that case, you’re trading adaptability for consistency; you’ll need to update the steps as the prompt evolves.
Checklist: Track progress as you complete each step. Update [ ] → [✓] as you go.
[ ] Summarize the changes → Add summary to "Scope" section
[ ] Scan for risks → Add findings to "Risks" section
[ ] List must-fix issues → Add to "Must-Fix" section
[ ] Suggest improvements → Add to "Suggestions" section
[ ] Check test coverage → Note any gaps in "Tests" section
[ ] Review doc impacts → Add notes to "Docs" section
Whichever approach you choose, it helps to be explicit in the prompt: instruct the model to produce the checklist first, map each item to a section of the final artifact, mark items complete as it goes ([ ] → [✓]), and add new items if it discovers gaps. Keep the checklist text brief to avoid bloat.
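When you supply a static checklist, generating the block programmatically helps keep each step mapped to a section of the final artifact. A minimal sketch (a hypothetical helper; the task/section pairs below are illustrative):

```python
def render_checklist(steps: list[tuple[str, str]]) -> str:
    """Render (task, target section) pairs as a prompt-ready checklist block."""
    lines = ["Checklist: Track progress as you complete each step. Update [ ] → [✓] as you go."]
    for task, section in steps:
        # Each item names both the action and where its result lands in the artifact.
        lines.append(f'[ ] {task} → Add to "{section}" section')
    return "\n".join(lines)

block = render_checklist([
    ("Summarize the changes", "Scope"),
    ("Scan for risks", "Risks"),
    ("List must-fix issues", "Must-Fix"),
])
```

Because the steps live in one data structure, updating the process means editing the list, not hand-editing prompt text in several places.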
How does this differ from Chain-of-Thought?
Chain of Thought is about reasoning before doing: a brief plan that surfaces what matters, the approach, and any assumptions. It’s typically a transient scaffold—once the plan shapes the answer, it can disappear. A checklist, by contrast, is execution tracking: a living list the model maintains while it works. It persists alongside the artifact, shows progress ([ ] → [✓]), and makes it obvious where something stalled or went missing.
Think of CoT as the why and in what order, and the checklist as the what’s done and what’s left.
Technical Requirements
✏️ Complete the following steps to get a checklist working with your prompt
Check to see if the AI does this automatically
In many cases, AI agents will create a checklist on their own to keep track of their progress. If this is the case, observe how the process unfolds and note any areas for improvement.
(Optional) Ask the AI to generate a checklist explicitly
If your AI agent isn’t doing this automatically, or it’s not up to your standards, ask it to create a simple checklist to keep track of its work.
“…please use a checklist to track what you’re doing and what’s left, and update it as you work.”
(Optional) Define your own checklist for the AI to follow
If you have a particular process you want the AI to follow, explicitly define it in the prompt and ask the AI to keep track of its progress as it goes.
Solution
We take everything we’ve built so far with our example instructions.md prompt (persona, few-shot examples, chain-of-thought, tree-of-thought, parameters/guardrails) and add explicit process tracking with an auto-generated checklist:
Parameters:
{{CODEBASE_PATH}}: path to the repository
{{OUTPUT_FORMAT}}: markdown or JSON
{{TONE}}: professional or concise
{{WORD_LIMIT}}: maximum 500 words
You are a senior full-stack engineer documenting a codebase for a new AI teammate.
Generate an instructions.md file that explains the purpose, architecture, dependencies, and coding conventions of the project. Use the example outlines below to help shape your output. The final document should be detail-rich and include a variety of sections:
Good Example Outline:
Overview: Brief but informative project description
Architecture: Explanation of major components (frontend/backend, APIs, database)
Dependencies: Key frameworks, libraries, versions
Conventions: Folder structure, naming patterns, style guides
Setup Instructions (if relevant): Install, run, test steps
Common Pitfalls: Gotchas or non-obvious behaviors
Bad Example Outline:
Overview: "It’s a web app"
No architecture section
Lists dependencies with no context
Doesn’t explain folder structure or coding practices
No instructions or assumptions listed
---
Think step-by-step before writing the file:
Identify key folders and their purpose.
Determine main dependencies and frameworks.
Describe how the components fit together.
Write the final instructions.md file in Markdown format.
First, propose two possible outline structures for the instructions.md file — one organized by architecture, another by developer workflows. Evaluate which outline will be clearer and more useful, then write the file using that structure.
Process tracking (auto-generated checklist):
Derive a concise checklist (3–7 items) from your reasoning, chosen outline, and step-by-step thinking process defined above.
Keep this checklist visible at the top of your output and mark items complete as you finish them.
If you discover a missing task, add it and continue.
Output the result as {{OUTPUT_FORMAT}}, in a {{TONE}} tone, and within the {{WORD_LIMIT}} limit.
Begin by analyzing the codebase at {{CODEBASE_PATH}}.
Output Description:
The AI now begins with a short checklist (e.g., “Identify folders ✓, List dependencies ✓, Describe workflows ✓, Summarize conventions ✓, Draft final file ✓”), visibly updates it as steps complete, and then returns the final instructions.md.
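The visible checklist also gives you a cheap acceptance check: before trusting the generated instructions.md, confirm that every item was checked off. A minimal sketch (a hypothetical gate, assuming the `[ ]` / `[✓]` markers used in this step):

```python
def checklist_complete(output: str) -> bool:
    """True when the output has at least one checked item and no unchecked ones."""
    return "[✓]" in output and "[ ]" not in output

# Reject output that still has an unchecked step; accept a fully checked list.
partial = "[✓] Identify folders\n[ ] Draft final file"
finished = "[✓] Identify folders\n[✓] Draft final file"
```

A gate like this can run in CI or in a wrapper script, turning the checklist from a readability aid into an automated quality check.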
What Changed
The model now converts its plan into a maintained checklist and shows progress.
Completeness and transparency improve: missed sections are rare, and if the model discovers a new need mid-flight, it adds a task and checks it off.
The prompt remains modular and enforceable (parameters/guardrails), but is now process-aware, yielding more reliable, auditable outputs.
Next Steps
Your prompt is now more consistent and procedural thanks to checklists!
Continue to: Multi-Stage Prompts (Composition)
In the next step, you’ll break your prompt into multiple sub-prompts that can be chained together for even more flexibility.