Debugging Browser Logs
When you’re debugging a web app, the most valuable clues often live in the browser console: runtime errors, warnings, network-related logs, and app-specific console.log() breadcrumbs. The problem is that most AI coding agents (Copilot Chat, etc.) only know what you explicitly paste into the chat, so the “context loop” is slow:
You reproduce the issue
You open DevTools
You copy logs
You paste into chat
You repeat
The better model is: make console logs retrievable via commands so the AI agent can “pull” the context whenever you ask, without you manually pasting anything.
To do that, you:
Launch Chrome with remote debugging enabled (Chrome DevTools Protocol / CDP).
Use a CLI tool to connect to that debugging port and stream or fetch console output.
Run both from VS Code Tasks, so they’re repeatable and easy to invoke.
Then, at any moment while you interact with your app, you can ask the agent to run the log task and interpret what it sees.
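To see why this is mechanically simple, here is a rough sketch of what "pulling console output over CDP" looks like in TypeScript. It uses the chrome-remote-interface npm package as a stand-in client; this illustrates the protocol flow, not how cdp-cli is actually implemented.

```typescript
// Rough sketch of a CDP console reader, using the chrome-remote-interface
// npm package as a stand-in for whatever cdp-cli does internally.
import CDP from 'chrome-remote-interface';

async function streamConsole(): Promise<void> {
  // Connect to the Chrome instance started with --remote-debugging-port=9222.
  const client = await CDP({ port: 9222 });
  const { Runtime } = client;

  // Fired for every console.log/warn/error call in the attached page.
  Runtime.consoleAPICalled(({ type, args, timestamp }) => {
    const text = args.map((a) => a.value ?? a.description ?? '').join(' ');
    console.log(`[${new Date(timestamp).toISOString()}] ${type}: ${text}`);
  });

  // Uncaught exceptions show up here rather than as console.* calls.
  Runtime.exceptionThrown(({ exceptionDetails }) => {
    console.error('Uncaught exception:', exceptionDetails.text);
  });

  await Runtime.enable();
}

streamConsole().catch(console.error);
```

A task like the one described below wraps this kind of connection behind a single command, which is exactly what makes it easy for an agent to invoke on demand.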
This creates a tighter feedback loop:
You focus on reproducing / interacting with the app.
The agent fetches live diagnostic context (console output) on demand.
Why this works well with AI agents
AI agents are strong at:
spotting patterns in error messages and stack traces,
correlating log sequences with app behavior,
proposing code changes with a hypothesis (“this is likely undefined because X runs before Y”).
But they need high-fidelity runtime signals. Console output is one of the best “signals” you can give them, and pulling it directly avoids:
missing lines,
copying the wrong tab,
truncation,
losing timing/context.
✏️ Hands-On Exploration
In .vscode/tasks.json, check out the two tasks related to debugging the browser:

```json
{
  "label": "Debug Chrome Browser",
  "type": "shell",
  "command": "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
  "args": [
    "--remote-debugging-port=9222",
    "--user-data-dir=remote-debug-profile"
  ],
  "isBackground": false,
  "problemMatcher": []
},
{
  "label": "Check Browser Output",
  "type": "shell",
  "command": "npm",
  "args": [
    "exec",
    "cdp-cli",
    "--",
    "--cdp-url",
    "http://localhost:9222",
    "console",
    "TaskFlow"
  ],
  "isBackground": false,
  "problemMatcher": []
},
```

Debug Chrome Browser: launches Chrome with a CDP port open. --remote-debugging-port=9222 exposes a local debugging endpoint, and --user-data-dir=remote-debug-profile isolates a clean-ish profile so you don't collide with your normal Chrome session.
Check Browser Output: connects to CDP and reads console logs (filtered to a page title like TaskFlow). cdp-cli … console TaskFlow attaches to the browser and prints console messages (often with optional filtering / targeting depending on the tool).
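If you're wondering how a page-title filter like TaskFlow can work, Chrome's debug port also serves a plain-HTTP list of open targets. The snippet below is a hypothetical illustration of picking the right tab by title; it is not taken from cdp-cli's source.

```typescript
// Hypothetical sketch: listing CDP targets and picking the tab whose
// title contains "TaskFlow". Requires Node 18+ for the built-in fetch.
interface CdpTarget {
  id: string;
  type: string;          // "page", "service_worker", ...
  title: string;
  url: string;
  webSocketDebuggerUrl?: string;
}

async function findTaskFlowTab(): Promise<CdpTarget | undefined> {
  // Chrome serves the list of debuggable targets at /json on the CDP port.
  const res = await fetch('http://localhost:9222/json');
  const targets = (await res.json()) as CdpTarget[];
  return targets.find((t) => t.type === 'page' && t.title.includes('TaskFlow'));
}

findTaskFlowTab().then((tab) => {
  console.log(tab ? `Attach via ${tab.webSocketDebuggerUrl}` : 'TaskFlow tab not found');
});
```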
Start Chrome in debug mode
In VS Code:
Open the Command Palette (Cmd/Ctrl + Shift + P) → Tasks: Run Task
Run: Debug Chrome Browser
Optional: Ask Copilot to run this task for you
This launches a separate Chrome instance that’s “agent-observable” through the CDP port.
Tip: Use this Chrome window for your debugging session (open your app there), so all relevant console output is in the CDP-connected instance.
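Before involving the agent, you can also sanity-check that the debug port is actually listening. The /json/version endpoint is part of Chrome's remote debugging interface; the small helper below is just one illustrative way to call it.

```typescript
// Minimal check that the CDP endpoint opened by the "Debug Chrome Browser"
// task is reachable. Requires Node 18+ for the built-in fetch.
async function checkCdpEndpoint(): Promise<void> {
  try {
    const res = await fetch('http://localhost:9222/json/version');
    const info = await res.json();
    // Typical fields: Browser, Protocol-Version, webSocketDebuggerUrl.
    console.log(`Connected to ${info.Browser} (CDP ${info['Protocol-Version']})`);
  } catch {
    console.error('No CDP endpoint on :9222. Is the debug Chrome instance running?');
  }
}

checkCdpEndpoint();
```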
Use your app normally
Now reproduce the issue like you normally would:
click around,
submit forms,
navigate,
trigger the error state.
When you want context, have Copilot read the logs
At any point, you can ask Copilot (or your agent) something like:
“Run the Check Browser Output task and tell me what the console shows around the latest error.”
or
“Read the browser console logs and summarize the most likely cause.”
Iterate: reproduce → pull logs → patch → verify
A good cadence looks like:
Reproduce the issue
Ask agent to pull logs
Agent identifies probable cause and proposes fix
Apply fix
Re-run scenario
Pull logs again to confirm the error is gone (or identify the next issue)
Practical tips / best practices
Name your app/tab target clearly (you’re using TaskFlow). The clearer the target, the less noise the agent has to sift through.
Log intentionally: add short, structured logs at boundaries:
route transitions
API calls + responses (careful with secrets)
key state changes
Make logs greppable: prefixes like [auth], [api], [ui], or correlation IDs help both humans and agents (see the sketch at the end of this section).
Keep the agent’s request actionable: ask for a summary + next step:
“Summarize the error + tell me which file/function to inspect first.”
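To make the "greppable prefixes" tip concrete, here is a small sketch of a tagged logger you might drop into the app. The tag names and correlation-ID scheme are examples, not existing TaskFlow code.

```typescript
// Illustrative tagged logger for the browser side. The tags and the
// correlation-ID scheme are examples, not part of TaskFlow.
type LogTag = 'auth' | 'api' | 'ui';

function makeLogger(tag: LogTag, correlationId?: string) {
  const prefix = correlationId ? `[${tag}][${correlationId}]` : `[${tag}]`;
  return {
    info: (msg: string, data?: unknown) => console.log(`${prefix} ${msg}`, data ?? ''),
    error: (msg: string, data?: unknown) => console.error(`${prefix} ${msg}`, data ?? ''),
  };
}

// Usage: every log line carries a prefix, so both grep and the agent can
// filter the CDP console stream by subsystem.
const apiLog = makeLogger('api', 'req-42');
apiLog.info('POST /tasks', { status: 201 });
apiLog.error('GET /tasks failed', { status: 500 });
```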