Spec Kit - Analysis

Spec Kit provides both a methodology and a toolset (a CLI, conventions, and templates) for implementing spec-driven development (SDD). The typical workflow goes something like this:

  1. /constitution — Define the project’s foundational principles or non-negotiables (coding standards, architectural constraints, guiding philosophies). This sets the guardrails before anything else.

  2. /specify — Write a high-level spec: user stories, functionality, success criteria, constraints (performance, privacy, UX), what’s in scope vs out-of-scope, etc. Focus on what and why.

  3. /plan — Translate the spec into a technical plan: architecture, tech stack, dependencies, integration points, data models, overall system design. This is where “how” gets defined (but still before actual code).

  4. /tasks — Break down the plan into actionable, discrete tasks. These tasks correspond to small units of work that can be implemented (and tested) independently. Spec Kit automatically generates this list based on spec + plan.

  5. /implement — At this point, you (or an AI coding agent) implement the tasks — the code is generated against the spec & plan. Because tasks are linked to spec & plan, code stays aligned to initial intent.

  6. Testing + Validation (built in) — As part of task generation, Spec Kit includes test-related tasks (unit tests, contract tests, performance/security checks) to ensure implementation fulfills spec requirements. This is more like TDD + QA baked into the workflow.

  7. Maintain / Evolve — If requirements change, you update the spec, then re-run /plan, /tasks, etc — letting the spec drive updates, not letting code drift arbitrarily without traceability.

So effectively: spec → plan → tasks → implementation → tests → maintain.
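
In practice each of these steps maps to a slash prompt; the Copilot integration namespaces them as /speckit.* (shown in the sections below). A minimal run, with illustrative arguments, looks roughly like this:

/speckit.constitution define the project's coding standards and non-negotiables
/speckit.specify we need a page for each user
/speckit.plan use the technology already present in the project
/speckit.tasks
/speckit.implement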

It can interface with most modern AI agent platforms, including Copilot and Claude Code.

Setup

  • Installation

    • uv tool install specify-cli --from git+https://github.com/github/spec-kit.git

  • Initialize in existing project

    • specify init .

    • You can choose which AI agent you want to use and select the script type (POSIX shell or PowerShell); a non-interactive example is shown below
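
For reference, a non-interactive setup might look something like the following. The --ai and --script flags were available at the time of writing, but the CLI is evolving, so check specify init --help:

uv tool install specify-cli --from git+https://github.com/github/spec-kit.git
specify init . --ai copilot --script sh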

Once it’s initialized with Copilot, a set of prompt and agent files is added to the .github directory.

There is also a dedicated .specify folder containing templates for the different artifacts the AI will create, bash scripts that help orchestrate the SDD process, and a constitution.md file that acts as a source of truth and alignment, similar to copilot-instructions.md.
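
The resulting layout looks roughly like this (illustrative; exact contents depend on the Spec Kit version and the agent selected):

.github/                 prompt and agent files, one set per workflow step (speckit.specify, speckit.plan, ...)
.specify/
  memory/constitution.md
  templates/             spec-template.md, plan-template.md, tasks-template.md, ...
  scripts/               bash (or PowerShell) helpers that orchestrate the workflow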

1. Constitution

The first step in setting things up after initialization is to fill out the .specify/memory/constitution.md file. This provides a kind of “source of truth” reference and alignment on how code should be written in the project. This is similar in theory to copilot-instructions.md, but it is more specifically tailored to the spec-driven development process.

There is a dedicated prompt for creating this with Copilot.

/speckit.constitution look in .github/copilot-instructions.md for information about how code should be written in the project. Use that to fill out the constitution

For each Spec Kit prompt there is a corresponding agent with hard-coded directions and context for how best to perform the task. You can chat with the agent on an ongoing basis to refine each step of the SDD process. The speckit.constitution agent, for example, is defined as follows:


description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
handoffs:
  - label: Build Specification
    agent: speckit.specify
    prompt: Implement the feature specification based on the updated constitution. I want to build...

User Input

$ARGUMENTS

You MUST consider the user input before proceeding (if not empty).

Outline

You are updating the project constitution at .specify/memory/constitution.md. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. [PROJECT_NAME], [PRINCIPLE_1_NAME]). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.

Follow this execution flow:

  1. Load the existing constitution template at .specify/memory/constitution.md.

    • Identify every placeholder token of the form [ALL_CAPS_IDENTIFIER].
      IMPORTANT: The user might require less or more principles than the ones used in the template. If a number is specified, respect that - follow the general template. You will update the doc accordingly.

  2. Collect/derive values for placeholders:

    • If user input (conversation) supplies a value, use it.

    • Otherwise infer from existing repo context (README, docs, prior constitution versions if embedded).

    • For governance dates: RATIFICATION_DATE is the original adoption date (if unknown ask or mark TODO), LAST_AMENDED_DATE is today if changes are made, otherwise keep previous.

    • CONSTITUTION_VERSION must increment according to semantic versioning rules:

      • MAJOR: Backward incompatible governance/principle removals or redefinitions.

      • MINOR: New principle/section added or materially expanded guidance.

      • PATCH: Clarifications, wording, typo fixes, non-semantic refinements.

    • If version bump type ambiguous, propose reasoning before finalizing.

  3. Draft the updated constitution content:

    • Replace every placeholder with concrete text (no bracketed tokens left except intentionally retained template slots that the project has chosen not to define yet—explicitly justify any left).

    • Preserve heading hierarchy and comments can be removed once replaced unless they still add clarifying guidance.

    • Ensure each Principle section: succinct name line, paragraph (or bullet list) capturing non‑negotiable rules, explicit rationale if not obvious.

    • Ensure Governance section lists amendment procedure, versioning policy, and compliance review expectations.

  4. Consistency propagation checklist (convert prior checklist into active validations):

    • Read .specify/templates/plan-template.md and ensure any "Constitution Check" or rules align with updated principles.

    • Read .specify/templates/spec-template.md for scope/requirements alignment—update if constitution adds/removes mandatory sections or constraints.

    • Read .specify/templates/tasks-template.md and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).

    • Read each command file in .specify/templates/commands/*.md (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required.

    • Read any runtime guidance docs (e.g., README.md, docs/quickstart.md, or agent-specific guidance files if present). Update references to principles changed.

  5. Produce a Sync Impact Report (prepend as an HTML comment at top of the constitution file after update):

    • Version change: old → new

    • List of modified principles (old title → new title if renamed)

    • Added sections

    • Removed sections

    • Templates requiring updates (✅ updated / ⚠ pending) with file paths

    • Follow-up TODOs if any placeholders intentionally deferred.

  6. Validation before final output:

    • No remaining unexplained bracket tokens.

    • Version line matches report.

    • Dates ISO format YYYY-MM-DD.

    • Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD rationale where appropriate).

  7. Write the completed constitution back to .specify/memory/constitution.md (overwrite).

  8. Output a final summary to the user with:

    • New version and bump rationale.

    • Any files flagged for manual follow-up.

    • Suggested commit message (e.g., docs: amend constitution to vX.Y.Z (principle additions + governance update)).

Formatting & Style Requirements:

  • Use Markdown headings exactly as in the template (do not demote/promote levels).

  • Wrap long rationale lines to keep readability (<100 chars ideally) but do not hard enforce with awkward breaks.

  • Keep a single blank line between sections.

  • Avoid trailing whitespace.

If the user supplies partial updates (e.g., only one principle revision), still perform validation and version decision steps.

If critical info missing (e.g., ratification date truly unknown), insert TODO(<FIELD_NAME>): explanation and include in the Sync Impact Report under deferred items.

Do not create a new template; always operate on the existing .specify/memory/constitution.md file.
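
Once the placeholders are filled in, a principle section in constitution.md ends up reading something like this (hypothetical content, following the structure the agent is asked to produce):

Principle: Test-First Development (NON-NEGOTIABLE)
All new behavior MUST be covered by automated tests written alongside the implementation.
Rationale: the implementation step relies on tests to verify that generated code matches the spec.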

The constitution file is designed to be updated as the project evolves. The AI can also update the spec file templates in .specify/templates/* when they conflict with information in the constitution. To update the constitution file, simply ask the agent:

/speckit.constitution update the file...

2. Spec Creation

The next step is to create a spec from a plain-English description. Here you can use the /speckit.specify prompt/agent. The more information and detail you provide, the more accurate the resulting spec will be. It’s best to focus on the what and the why, not the technical details.

/speckit.specify we need a page for each user
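
A terse prompt like the one above works, but the spec improves with more detail about the what and the why. A fuller version of the same request might read (hypothetical wording):

/speckit.specify we need a public profile page for each user showing their display name, avatar, and recent activity; visiting a profile that doesn't exist should show a friendly not-found page; editing the profile is out of scope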

This will create a new folder for the spec in the specs folder at the root of the project. Each spec contains a requirements checklist and a spec.md file.

Once the spec is written, Copilot chat will automatically prompt you with next steps, including the option to clarify spec requirements.

If you choose to clarify requirements with /speckit.clarify, Copilot will ask you additional questions about ambiguous or ill-defined parts of the spec. This is a good way to surface areas of the spec that might require more thought or attention on the part of the user.

For each clarification, the agent lists out multiple choices for how to proceed.

3. Build a Technical Plan

Once the spec is outlined and we know what the new feature looks like at a high level, it’s time to come up with a technical plan for implementing it in the codebase. Use the /speckit.plan prompt and provide it with specific direction around technical details (frameworks, libraries, patterns, etc.).

If you’re working in an existing project, you can have the agent look at the existing code or consult copilot-instructions.md.

/speckit.plan Plan the implementation of the ticket based on the technology currently present in the project.

The agent will do research and develop a data model, along with other documents such as API contracts or test specs.
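
These artifacts land alongside the spec; a typical run produces something like the following (folder and file names are illustrative):

specs/001-user-page/
  plan.md          architecture, stack, and constraints
  research.md      findings about the existing codebase
  data-model.md    entities and relationships
  contracts/       API contracts and test specs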

These files provide the technical backbone and constraints for the eventual implementation.

4. Create Tasks

Now it’s time to break down the spec into discrete tasks that the agent can work on.

/speckit.tasks

This breaks the implementation down into discrete parts, often as simple as Create xyz file or Handle 404 state.
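
Each task traces back to the spec and plan; entries in tasks.md look roughly like this (IDs and wording are illustrative):

- [ ] T001 Create the user profile page route and template
- [ ] T002 Load and render the user's data on the page
- [ ] T003 Handle the 404 state for unknown users
- [ ] T004 Add unit tests for the profile page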

Analyzing for Consistency

This is often a good place to analyze the tasks for consistency. The tasks are the artifacts closest to the actual implementation, so making sure they are right is very important. The agent can analyze each task and map it to the spec, all while keeping the constitution in mind.
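
At the time of writing there is a dedicated prompt for this cross-check, which compares the spec, plan, and tasks against each other and against the constitution:

/speckit.analyze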

If there are critical issues, they can be fixed on the spot. It’s best to fix things BEFORE the code is written, and simple bugs will often surface here.

Implementation

Run the implementation prompt to convert the specs/tasks to code.

/speckit.implement

The Agent will run through the tasks and implement each one. If the project supports tests it will also write the tests and try to run them. If tests fail it tries to fix them and keeps iterating until they pass.

Conclusion

Overall, Spec Kit is well organized, opinionated, and flexible. I was able to get it to implement a simple feature in the app. For what it’s trying to do, it does it very well. I also like that everything is “agent-ized”, meaning you can have conversations with the agent about each step in the process, and at any point you can have the agent check that everything is in a good state.

Pros

  • Clear, Standardized SDD Framework
    Provides a structured, step-by-step approach to specification-driven development.

  • Fully Agent-Driven Workflow
    Each stage is conversational, letting you collaborate with an “expert” agent throughout the process.

  • Useful Templates & Constitution
    Encourages consistency and clear project principles via built-in templates and the constitution document.

  • Strong Clarification Loop
    AI proactively asks clarifying questions, improving accuracy and reducing ambiguity.

  • Easy to Adopt Within Existing Projects
    Can be integrated incrementally without needing a full project restructure.

Cons

  • Tightly Coupled to GitHub
    Heavily dependent on GitHub’s ecosystem and evolving AI tooling, reducing portability.

  • Overkill for Small Features
    The multi-step workflow (constitution → spec → plan → tasks) adds significant ceremony that slows down small or quick changes.

  • Redundant Documentation
    Different stages often repeat the same information, especially with AI-generated content.

  • High Maintenance & Version Control Complexity
    Managing multiple interlinked documents can be time-consuming, and diffs/merges can get messy.

  • Early-Stage Tooling
    APIs and workflows are likely to shift, meaning instability and potential rework.

  • No built-in Figma integration for writing specs
    Unlike the Bitovi version, there doesn’t seem to be a way to use Figma designs to inform the specs. It should be possible to use the Figma MCP in the implementation phase, though.

 


Notes

If there are Figma files connected, go to the Figma MCP and get the designs.

If it’s best practice to change a prompt step, you can modify the prompt, e.g. put in a line that says: if a Jira link is provided, use the Atlassian MCP to write up the requirements.

Install Spec Kit and add small modifications.

Users should just be able to paste in the link to Jira and skip the requirements step.
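
In practice these tweaks mean editing the generated prompt files. For example (the file path is an assumption and the wording is hypothetical), a line could be added to the specify prompt in .github/prompts/speckit.specify.prompt.md along the lines of: “If the user provides a Jira link, use the Atlassian MCP server to pull in the ticket and write up the requirements from it instead of asking the user to restate them.”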