Basics of GitHub Copilot — Workshop

A practical introduction to GitHub Copilot for anyone looking to start using it for coding.

Prerequisites:

  • GitHub Copilot enabled on your account — check GitHub settings

  • Latest version of VS Code — required for many newer Copilot features

  • The GitHub Copilot extension installed in VS Code

  • A codebase you want to use to practice talking to Copilot

  • Copilot Chat working — open a project, open the chat window, and try: "What is the name of the codebase I have open?"

Duration: ~2 hours

Slides: https://docs.google.com/presentation/d/1K-QU4tRBbnq5UH7Z5jlppUS8ALrbZBTXY-RyCYBnqfs/edit?usp=sharing

Welcome to GitHub Copilot Basics! This training takes about two hours to go through, and it's designed so you can either follow along in a live session or work through it on your own at your own pace. There are two hands-on exercises built in so you can practice as you go.

Here's what we're going to cover:

  • First half: What LLMs and AI agents are, and how GitHub Copilot works — models, modes, tools, and context

  • Second half: More advanced workflows like running multiple agents and connecting Copilot to external services with MCP

The goal here isn't just to learn Copilot — it's to learn how to think in terms of AI agents. Once you've got that down, you'll be able to pick up any of the other tools out there (Cursor, Claude Code, Windsurf, etc.) pretty easily, because they all work on the same ideas.

Alright, let's get into it.


Workshop Sections

Work through each section in order and complete the exercises as you go!


LLMs vs AI Agents — How LLMs work as next-token prediction machines and how AI agents use them as a brain to take real actions.

Copilot Basics: Models & Modes — How to pick a model, understand premium request costs, and use the Ask, Plan, and Agent modes to design, plan, and build a game from scratch.

✏️ Exercise 1 - Refactor a File — Practice the full Ask → Plan → Agent workflow on your own codebase by having Copilot refactor a real file.

Multi-Agent Workflows — How to run multiple Copilot chat windows simultaneously and use subagents to parallelize work automatically.

Model Context Protocol (MCP) — How to extend Copilot's toolbox by installing MCP servers that connect it to external services, plus security best practices.

✏️ Exercise 2 - Connect an MCP Server — Install and configure an MCP server of your choice and use one of its tools in a chat.


Wrapping Up

That's a wrap on GitHub Copilot Basics! Here's a quick recap of everything we covered:

  • What LLMs are and how they work — tokens, vectors, next-token prediction

  • What AI agents are and how they use LLMs as a brain

  • The three Copilot modes — Ask, Plan, and Agent — and when to use each

  • How to choose models and manage your premium request budget

  • How tools work and how to control which ones Copilot can access

  • Adding context through ghost text, code highlighting, drag-and-drop, and checkpoints

  • Running multiple agents simultaneously and using subagents for parallel work

  • Extending Copilot's capabilities with MCP servers

Everything here applies broadly — the mental model transfers to Cursor, Claude Code, Windsurf, and any other agent you pick up down the road.

Thanks for following along!

Content Index

Core Concepts

All concepts in this group are covered in the LLMs vs AI Agents section.

  • LLM (Large Language Model) — The core AI technology: a next-token prediction machine trained on massive amounts of text.

  • Token — The unit an LLM actually predicts: a chunk of text, smaller than a word, that maps to a number.

  • Vector / Embedding — How text is converted into numbers and placed in a high-dimensional semantic space so the model understands relationships between concepts.

  • Next-token prediction — The fundamental mechanism of LLMs: given some input text, predict the most likely next token.

  • AI Agent — A software layer built on top of an LLM that gives it "hands": the ability to read/write files, run commands, call APIs, and take real actions.
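To make the loop behind these terms concrete, here is a toy sketch. It is not how a real LLM works internally (real models use a trained neural network over tens of thousands of tokens); the vocabulary and score table below are made up purely to illustrate the mechanism: text becomes token ids, the model scores candidate next tokens, and the best one is appended, over and over.

```python
# Toy illustration of next-token prediction -- not a real LLM.
# A hypothetical vocabulary maps text chunks (tokens) to numbers,
# and a hard-coded score table stands in for the trained network.

VOCAB = {"the": 0, " cat": 1, " sat": 2, " on": 3, " mat": 4, ".": 5}
ID_TO_TOKEN = {i: t for t, i in VOCAB.items()}

# SCORES[last_token_id][candidate_id] = how likely candidate follows
SCORES = {
    0: {1: 0.9, 2: 0.05, 3: 0.05},   # after "the", " cat" is most likely
    1: {2: 0.8, 3: 0.1, 5: 0.1},     # after " cat", " sat" is most likely
    2: {3: 0.9, 5: 0.1},             # after " sat", " on" is most likely
    3: {0: 0.85, 4: 0.1, 5: 0.05},
    4: {5: 1.0},
}

def predict_next(token_ids):
    """Greedy decoding: pick the highest-scoring next token."""
    candidates = SCORES.get(token_ids[-1], {5: 1.0})
    return max(candidates, key=candidates.get)

def generate(prompt_ids, n):
    """Append n predicted tokens to the prompt, one at a time."""
    ids = list(prompt_ids)
    for _ in range(n):
        ids.append(predict_next(ids))
    return "".join(ID_TO_TOKEN[i] for i in ids)

print(generate([VOCAB["the"]], 3))  # "the cat sat on"
```

An AI agent is a loop wrapped around exactly this kind of predictor: it feeds the model's output back in, and when the output names a tool, the agent runs the tool and appends the result.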


Copilot Models & Configuration

All concepts in this group are covered in the Copilot Basics: Models & Modes section.

  • Model Picker — The dropdown at the bottom of the chat for choosing which AI model Copilot uses. Defaults to "Auto".

  • Auto mode (model) — Copilot selects the model for you, with a small cost discount. A good default for most tasks.

  • Premium Requests — The usage budget on most Copilot plans. Each model has a multiplier that determines how many premium requests each message consumes.

  • Model Multiplier — A per-model cost signal (free, fractional, 1x, or up to 10x) that also roughly indicates capability.

  • Thinking Effort — A low/medium/high toggle on some models that controls how much reasoning the model does before responding.
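The multiplier arithmetic is simple enough to sketch. The model names and multiplier values below are hypothetical examples, not real pricing; check your plan's documentation for current numbers. The idea is just that each message consumes multiplier-many premium requests:

```python
# Hypothetical multipliers -- illustrative only, not actual pricing.
MULTIPLIERS = {"auto": 0.9, "base-model": 1.0, "frontier-model": 10.0}

def premium_requests_used(messages_per_model):
    """Sum multiplier * message count across the models you used."""
    return sum(MULTIPLIERS[m] * n for m, n in messages_per_model.items())

# 20 messages on a 1x model plus 3 messages on a 10x model:
print(premium_requests_used({"base-model": 20, "frontier-model": 3}))  # 50.0
```

This is why a handful of messages on a high-multiplier model can cost as much as dozens on a cheaper one: save the expensive models for planning and hard problems.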


Copilot Modes

Covered in the Copilot Basics: Models & Modes section unless noted otherwise.

  • Ask Mode — Conversational-only mode: no file edits or commands. Best for brainstorming, exploring ideas, and building shared understanding before writing code.

  • Plan Mode — Optimized for creating implementation plans, with its own tool set tuned for research and step-by-step planning. Best used with a powerful model.

  • Agent Mode — Full-power mode: Copilot can edit files, run terminal commands, execute tests, spin up servers, and orchestrate multi-step work.

  • Ask → Plan → Agent workflow — The recommended sequence: design in Ask, plan in Plan, implement in Agent. Produces better results than jumping straight to implementation. (Exercise 1 - Refactor a File)


Tools & Context

All concepts in this group are covered in the Copilot Basics: Models & Modes section.

  • Tools — Built-in capabilities Copilot can invoke (fetch, file edit, terminal, etc.). Available tools vary by mode; view them via the tools icon in the chat.

  • Fetch Tool — Lets Copilot browse the web. Available in Ask mode. Reference it explicitly with #fetch for reliability.

  • # tool reference — Explicitly invoke a specific tool in the chat by typing #toolname; more reliable than letting Copilot guess.

  • Approvals Dropdown — Controls when Copilot asks for confirmation before taking actions: Default (Copilot decides), Bypass (auto-approve), or Autopilot (fully autonomous).

  • Ghost Text — Inline code completions that appear as you type in the editor. Press Tab to accept.

  • Code Highlighting — Select lines in the editor to automatically pin them as context in the chat window.

  • Drag and Drop Files — Drag files or images from the file explorer directly into the chat to add them as context.

  • Context Window Indicator — A UI element (bottom-right of the chat) showing how full Copilot's memory is. Start a new chat when it gets too full.

  • Checkpoints — Every edit Copilot makes is tracked as a checkpoint. You can restore any earlier checkpoint to roll back all subsequent file changes.

  • Conversation History — Every message includes the full chat history, so Copilot remembers everything discussed; the more you flesh out in Ask mode, the better it implements later.


Multi-Agent Workflows

All concepts in this group are covered in the Multi-Agent Workflows section.

  • Multiple Chat Windows — Open several independent Copilot chat windows side by side, each with its own mode, model, and context, all scoped to the same project.

  • Subagents — A built-in tool (#run-subagent) in Agent/Plan mode that lets Copilot spin up and orchestrate its own internal agents to run tasks in parallel.

  • Isolated Context Window — Each subagent runs in its own context: it only knows what the main agent passes it, and its intermediate thinking is discarded after it returns its result.

  • Manual multi-agent — You orchestrate multiple chat windows yourself. Good for a small number of independent tasks running in parallel.

  • Automated multi-agent — Ask Copilot to use subagents and it handles all orchestration internally. Good for coordinating everything toward a single goal.
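The subagent pattern can be sketched in plain Python. Everything here is illustrative, not Copilot's actual implementation: the point is that each worker receives only the task string the orchestrator hands it (an isolated context), the workers run in parallel, and only each worker's final result survives while its intermediate scratch work is thrown away.

```python
# Illustrative sketch of automated multi-agent orchestration.
# run_subagent stands in for a real agent call: the task string it
# receives is its entire context, and only its return value survives.
from concurrent.futures import ThreadPoolExecutor

def run_subagent(task: str) -> str:
    scratchpad = f"working through: {task}"  # intermediate thinking
    result = f"done: {task}"
    del scratchpad  # discarded -- only the final result leaves the subagent
    return result

def orchestrate(tasks):
    """The main agent fans tasks out in parallel and collects results."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_subagent, tasks))

print(orchestrate(["refactor utils", "write tests", "update docs"]))
```

This is also why the main agent's prompt to each subagent matters so much: anything it doesn't pass along, the subagent simply doesn't know.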


Model Context Protocol (MCP)

Covered in the Model Context Protocol (MCP) section unless noted otherwise.

  • MCP (Model Context Protocol) — A standard interface for extending Copilot's toolbox by connecting it to external services, APIs, databases, or platforms.

  • MCP Server — A service that exposes tools to Copilot via the MCP standard. Can be installed globally or per-workspace (mcp.json).

  • MCP Marketplace — A built-in browser in Copilot's chat settings for discovering and installing pre-built MCP servers.

  • mcp.json — The workspace-level config file where MCP server configurations are stored when installed per-project.

  • Global vs Workspace Install — MCP servers can be installed globally (available in all projects) or scoped to a single workspace via mcp.json.

  • Deny by Default (MCP security) — Best practice: uncheck all tools when installing a new MCP server, then enable only the specific ones you need. This keeps the agent's access surface small.

  • Input Variables — A VS Code feature for referencing secrets (API keys, tokens) in mcp.json without hardcoding them; especially important if the file is in source control.

  • MCP Authentication — MCP servers may require auth tokens or OAuth flows. VS Code provides help for servers in the marketplace. (Exercise 2 - Connect an MCP Server)
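As a concrete illustration, a workspace-level mcp.json typically follows the shape sketched below. The server name and package are hypothetical; the pattern to copy is the "servers" map plus the "inputs" list, where the secret is referenced as ${input:...} so it never appears in the file itself. Check the VS Code MCP documentation for the exact schema on your version.

```json
{
  "inputs": [
    {
      "type": "promptString",
      "id": "example-api-key",
      "description": "API key for the example MCP server",
      "password": true
    }
  ],
  "servers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "example-mcp-server"],
      "env": {
        "EXAMPLE_API_KEY": "${input:example-api-key}"
      }
    }
  }
}
```

VS Code prompts for the input value when the server first starts and keeps it out of the file, so an mcp.json like this stays safe to commit to source control.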