Context Engineering Workshop

Learn the ins and outs of context files, how to build them, and how to apply context engineering principles to real-world development workflows.

By the end, participants will:

  • Understand what makes a great context file.

  • Learn how to design, structure, and maintain context effectively.

  • Learn how to manage tokens.

  • Build and refine a copilot-instructions.md file that accurately describes their codebase.

  • Be equipped to continually evolve their context as their projects change.

Duration: ~2 hours
Audience: Engineers and technical professionals

Prerequisites: To participate in the training, participants will need to choose a codebase they want to generate a context file for. This codebase should have past features, bugs, or issues they can use to test their context file against (the AI will try to solve them using the context file).


👉 Bitovi can help you integrate this into your own SDLC workflow: AI for Software Teams

Introduction

Context is the information that gives an AI its sense of place, purpose, and direction — it tells the model what environment it’s operating in, what goals it’s pursuing, and what boundaries it must respect.

Without context, even the most capable AI behaves like a skilled but uninformed intern, producing generic results that don’t fit your systems or style. With good context, it becomes a focused collaborator that understands your domain and makes decisions aligned with your intent.

Below are five common examples that illustrate what “context” can look like in different AI scenarios:

  1. A system prompt — The foundational message that defines an AI’s role and behavior, such as “You are a financial advisor who provides concise, compliance-safe guidance.”

  2. A knowledge base or documentation set — Reference material that grounds the AI in domain facts, like API docs, product manuals, or company policies.

  3. Conversation history — The preceding messages in a chat or workflow that give the AI awareness of what’s already been discussed or decided.

  4. Structured data or schemas — Information like database models, config files, or labeled examples that help the AI understand relationships and constraints.

  5. A copilot-instructions.md file — A project-specific guide that tells an AI coding assistant how to operate within a codebase, describing architecture, naming conventions, frameworks, and style rules. A minimal example follows this list.
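
To make this concrete, here is a minimal sketch of what a copilot-instructions.md file might contain. The project, paths, and rules below are hypothetical, assuming a TypeScript/React web app; your own file should describe your actual stack and conventions.

  # Copilot Instructions

  ## Architecture
  - React + TypeScript single-page app; the REST API lives in /server.
  - Shared types are defined in /packages/types and imported by both client and server.

  ## Conventions
  - Components are function components in PascalCase files (e.g., UserCard.tsx).
  - Data fetching goes through the hooks in /src/api; never call fetch directly from a component.
  - Use the existing validation helpers in /src/forms for all form logic.

  ## Testing
  - Unit tests use Vitest and live next to the code as *.test.ts files.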

Context engineering is the discipline of designing and maintaining information so AI tools like GitHub Copilot or ChatGPT can operate effectively within your environment. It’s about shaping what the AI knows before you ever ask it to perform a task.

In this workshop, you’ll learn the fundamentals of context engineering and apply them to a real, practical challenge: building a strong copilot-instructions.md file for your own codebase. You’ll work in teams to generate an initial set of instructions using Bitovi’s custom prompt chain, then iterate and refine that file as you test Copilot on real bugs or features from your project.

Team Formation and Setup (10 min)

To kick things off, you’ll team up with the other participants working on the same codebase as you. You’ll prepare both your shared workspace and your development environment.

This setup ensures that everyone can collaborate efficiently and that Copilot is properly connected to your codebase before the main exercise begins.

✏️ Steps

  1. Create Groups Based on Codebase

    1. Participants should have selected a codebase they want to generate a context file for before the training began.

    2. Group up with others who are working on the same codebase.

      1. If there’s a large number of people working on the same codebase (10+), form subgroups.

  2. Choose a Partner

    • Find a pair programming partner who’s working on the same codebase as you.

  3. Create a Shared Collaboration Document

    • Open a shared document in Google Docs, Confluence, or another collaborative tool your team prefers.

    • Create one document for each codebase (or each codebase subgroup) using the template below:

Codebase: <link-to-codebase>

Initial Copilot Instructions

Initial version of copilot-instructions.md that one member of the group will generate for the codebase (leave this blank for now)

Pairs

A section for each programming pair. Include a summary of the feature(s) being worked on and keep track of changes made to the instructions file.

Person 1 / Person 2

Feature(s): <summarize the features you’re working on>

Change Log:

Final Copilot Instructions

Final version of copilot-instructions.md that combines insights from each pair.

 

  4. Prepare Your IDE Environment

    • Open VS Code.

    • Add your chosen codebase to your workspace.

    • Ensure GitHub Copilot is installed and enabled.

  5. Open Copilot in Agent Mode

    • Launch Copilot Agent Mode.

    • Verify that Copilot can see and interact with your workspace files.

  6. Run a Quick Test Prompt

    • In Copilot’s chat, run a simple check such as:

      “Summarize what this project does.”

    • Confirm that Copilot references your actual project files rather than returning generic information.

    • If Copilot doesn’t seem aware of your codebase, double-check your workspace settings before proceeding.

Once all pairs have confirmed access and collaboration setups, you’ll be ready to move on to the next section.

Copilot Instructions and Bitovi’s Prompt Chain (10 min)

In this section, we’ll talk about a very powerful context file called copilot-instructions.md. You’ll learn what it is and how it works, and your team will generate one that we’ll use for the rest of the workshop.

Continue to: https://wiki.at.bitovi.com/wiki/spaces/AIEnabledDevelopment/pages/1644920875

Then return here and follow the steps below.

✏️ Steps

One member of your codebase group should follow the steps below to generate a copilot-instructions.md file that your group can use going forward in the workshop.

Bitovi's Instructions Generator

  1. Open your AI agent

  2. Run the kickoff prompt

    1. {output_folder} = .results
       {final_output_file} = /.github/copilot-instructions.md

       You are assisting with generating a {final_output_file} file using a multi-step prompt chain.

       1. Open this repository on GitHub: https://github.com/bitovi/ai-enablement-prompts.
       2. Navigate to the `/understanding-code/instruction-generation` folder within the repo.
       3. Review all the prompt files in this folder WITHOUT executing them.
          - This will help you understand the full scope of the prompt chain.
       4. Confirm you have a full understanding of the prompt chain sequence.
       5. Once you're familiar with the flow, begin executing the prompts in numerical order:
          - 1-determine-techstack.md
          - 2-categorize-files.md
          - 3-identify-architecture.md
          - 4-domain-deep-dive.md
          - 5-styleguide-generation.md
          - 6-build-instructions.md
       6. For each step, output results into a corresponding `{output_folder}/` folder.
          - Mirror the step’s filename, e.g., `1-determine-techstack.md` > `{output_folder}/1-determine-techstack.md`.

       Stop ONLY when:
       - All `instruction-generation` steps are complete
       - A full `{final_output_file}` can be generated.

  3. Wait for Copilot to finish

    1. It should take ~10 minutes for Copilot to analyze the whole codebase and output the final file. While it runs, we’ll continue with the workshop.

  4. Add Instructions to Group Document

    1. The member of the group who generated the instructions file should add it to the collaboration document once it’s done generating.

    2. This will be the starting point for everyone working on the codebase.

Core Context Engineering Concepts (30 min)

In this section, we’ll discuss the key concepts behind context engineering. We’ll explore what makes context effective and maintainable, supported by real examples.

This should supply the conceptual foundation you’ll need when refining your Copilot instructions later.

  1. https://wiki.at.bitovi.com/wiki/spaces/AIEnabledDevelopment/pages/1643970647

  2. https://wiki.at.bitovi.com/wiki/spaces/AIEnabledDevelopment/pages/1649606657

  3. https://wiki.at.bitovi.com/wiki/spaces/AIEnabledDevelopment/pages/1650819142

  4. Token Management
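
A practical part of token management is knowing how much of the model’s context window your files actually consume. Below is a minimal sketch for estimating the token footprint of a context file; it assumes Python and the tiktoken package are available, and because Copilot’s exact tokenizer isn’t public, treat the result as a rough estimate rather than an authoritative count.

  # Estimate the token footprint of a context file.
  # Assumptions: Python 3 with `pip install tiktoken`; cl100k_base is used only
  # as a stand-in encoding, since Copilot's tokenizer isn't published.
  from pathlib import Path

  import tiktoken

  def estimate_tokens(path: str, encoding_name: str = "cl100k_base") -> int:
      """Return an approximate token count for the file at `path`."""
      text = Path(path).read_text(encoding="utf-8")
      encoding = tiktoken.get_encoding(encoding_name)
      return len(encoding.encode(text))

  if __name__ == "__main__":
      print(f"~{estimate_tokens('.github/copilot-instructions.md')} tokens")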

Copilot Instructions Iteration (50 min)

In this phase, each team member will independently test and refine their team’s copilot-instructions.md file using real examples from their codebase. The goal is to observe how Copilot performs with the current instructions, identify gaps or misunderstandings, and iteratively improve the file so Copilot becomes more effective and consistent.

✏️ Steps

  1. Select a Feature, Bug, or Issue

    • Choose one or more past items from your project’s commit history.

    • Ideally, pick something you or your team have already solved — this gives you a clear benchmark for what a successful Copilot result looks like.

    • Each team member should pick a different issue if possible (but overlap is okay if multiple people want to explore the same example).

  2. Summarize the Feature/Bug/Issue

    1. The end goal is to have Copilot attempt this change on its own. You’ll need a short summary of what is expected of Copilot, or a description of the bug that needs fixing.

    2. This can be the original ticket for the issue, or something you write up quickly.

  3. Review the Current Instructions File

    • Open your team’s generated copilot-instructions.md file.

    • Skim through it to familiarize yourself with how it describes the project’s architecture, components, and conventions.

    • Note any missing information that might confuse Copilot for your chosen task.

  4. Attempt the Task with Copilot

    • Use Copilot in Agent Mode and ask it to implement or fix the feature/bug, being sure to include the summary you compiled from step 2 — for example:

      “Using the project’s conventions, implement the fix for issue #123 that resolves the data validation bug. {SUMMARY}”

    • Observe how Copilot responds. Does it understand the architecture? Does it reference the right components? Does it make logical design decisions?

  5. Record Observations and Results

    • In your team’s shared document, under your individual section, write down:

      • What worked well (e.g., “Copilot understood the data model and used existing helpers”)

      • What didn’t (e.g., “Didn’t recognize our API structure”)

      • Any unclear or missing information in the instructions file.

    • Be specific — your notes will help the whole team improve the shared file later.

  6. Modify and Refine the Instructions File

    • Make targeted edits to copilot-instructions.md to fill in gaps or clarify misunderstood details.

    • Add or adjust sections (like architecture notes, naming conventions, or module overviews) as needed.

    • Keep the file concise — focus on improving clarity and coverage, not just adding more text.

  7. Re-Test with the Updated Instructions

    • Re-run your Copilot prompt using the modified file.

    • See if Copilot’s behavior improves — is it more accurate, aligned, or context-aware?

    • Repeat this cycle (test → observe → refine) as time allows.

  8. Maintain a Changelog of Revisions

    • In your individual notes, log each change you made to the instructions file (a sample entry follows these steps):

      • What you changed

      • Why you changed it

      • What effect it had

    • This changelog will later help your team merge everyone’s findings into a single, final version.
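
For reference, here’s a hypothetical example of a single changelog entry (the section name, path, and behavior below are made up; yours will come from your own codebase and observations):

  Change: Added an "API Layer" section stating that all HTTP calls go through src/api/client.ts.
  Why: Copilot called fetch directly from a component instead of using the shared client.
  Effect: On re-test, Copilot routed the new request through the existing client and reused its error handling.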

Final Instructions File Synthesis (10 min)

Now that each team member has tested and refined their own version of the copilot-instructions.md file, it’s time to come back together as a group. The goal of this phase is to consolidate what everyone learned, identify the most effective improvements, and merge them into a single, unified version of your instructions file that your team can continue to use and evolve.

✏️ Steps

  1. Regroup and Share Findings

    • Rejoin your team in your shared collaboration document.

    • Each pair briefly summarizes their iteration results:

      • What issues or features they worked on

      • What Copilot did well

      • What problems or misunderstandings they encountered

      • What specific changes they made to improve the instructions

  2. Compare and Discuss Changes

    • As a team, review everyone’s changelog entries and notes.

    • Identify patterns — for example:

      • Were there recurring issues across multiple features?

      • Did certain edits (like clarifying module names or adding architecture notes) consistently help?

      • Did anyone’s changes unintentionally make Copilot worse?

    • Use this conversation to surface the edits that made the biggest positive difference.

  3. Merge the Best Improvements

    • Open a clean copy of your shared copilot-instructions.md file.

    • Collaboratively integrate the strongest insights and modifications from everyone’s iterations.

    • Remove redundancies or conflicting sections as needed.

    • Aim for clarity and completeness, not just length — your goal is a balanced, maintainable file that captures the most useful context.

  4. Document the Final Version

    • Paste or link the final version of your file into the Final Copilot Instructions section of your shared document.

    • Add a short summary of the key takeaways — what changed, why, and how it improved Copilot’s behavior.

    • (Optional) Include a brief changelog summary highlighting the most impactful edits for future reference.

Discussion and Wrap-Up (10 min)

A closing discussion where teams share their experiences, insights, and final results.
We’ll talk about what worked well, what surprised people about Copilot’s behavior, and what patterns emerged from refining the context.
If time allows, teams may showcase their final instruction files or share lessons learned with the group.