A feature walked end-to-end
A realistic task walked end to end: pulling a work item, parallel background research, schema verification, baseline capture, implementation, and committing.
The previous five guides cover the tools in isolation. This one walks through a realistic task from start to finish using all of them together, so you can see how they interact rather than treating each one as a separate concept.
The scenario: you have been assigned a work item to add a bypass for UVW Lite vehicles in the Pending Estimates queue. You need to understand the existing queue logic, implement the change, and validate it.
Session Start
You open Claude Code in your implementation repo. Before you type anything:
- Your global `CLAUDE.md` has loaded: WSL conventions, `git.exe` requirements, writing style, commit format, which databases default to test
- Your project `CLAUDE.md` has loaded: the team's skill catalog, agent coordination rules, PR conventions, `.context/` location
- Your memory files have loaded: feedback from prior sessions, active project context, what you were working on last
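To make that concrete, here is a minimal sketch of what a global `CLAUDE.md` like this could contain. The section headings and exact wording are invented for illustration; only the conventions themselves come from this walkthrough.

```markdown
## Environment
- This is WSL: use `git.exe` for all git operations, never bare `git`.
- Database work defaults to the test environment unless stated otherwise.

## Testing
- One assertion per test; use `BeEquivalentTo()` for multi-property comparisons.
- Prefer `await` over `Task.FromResult`.

## Commits and PRs
- Prefix commit messages with the work item: `<id> - <summary>`.
- No AI attribution in commit messages or PR bodies.
```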
You type your first message: “Pull down work item 181646.”
Claude runs:
```
az boards work-item show --id 181646 --org https://dev.azure.com/adesacentral --output json
```

It parses the JSON, converts the HTML description to markdown, and writes the result to `.context/181646-auto-complete-service-cert-completion.md`. You did not copy anything. You did not open a browser. The work item is now available in a form Claude can reference for the rest of the session.
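Under the hood this is a small pipeline: fetch JSON, pull out the title and HTML description, convert, and write the context file. A sketch of that pipeline, assuming `jq` is available; the sample payload, title, and paths are invented to stand in for the live `az` call, and a real version would use a proper HTML-to-markdown converter such as pandoc rather than the crude tag strip shown here.

```shell
# Sample payload standing in for `az boards work-item show ... --output json`.
# The Azure DevOps work-item JSON keeps the title and HTML description under "fields".
cat > /tmp/wi.json <<'EOF'
{"id": 181646, "fields": {"System.Title": "Auto-complete service on cert completion",
 "System.Description": "<p>Add a bypass for UVW Lite vehicles.</p>"}}
EOF

ID=$(jq -r '.id' /tmp/wi.json)
TITLE=$(jq -r '.fields["System.Title"]' /tmp/wi.json)
# Slugify the title for the context filename
SLUG=$(printf '%s' "$TITLE" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-*//;s/-*$//')

mkdir -p /tmp/.context
# Crude tag strip; a real version would convert HTML to markdown properly
DESC=$(jq -r '.fields["System.Description"]' /tmp/wi.json | sed 's/<[^>]*>//g')
printf '# %s - %s\n\n%s\n' "$ID" "$TITLE" "$DESC" > "/tmp/.context/$ID-$SLUG.md"
cat "/tmp/.context/$ID-$SLUG.md"
```

The same shape works for any work item: one fetch, one transform, one file the session can reference from then on.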
Research Without Polluting Context
The work item references several existing queue handlers as the pattern to follow. Before writing any code, you want to understand how those handlers work, what the database query looks like, and how the retry/DLQ infrastructure is wired up.
This is exactly what background agents are for. You spawn three at once:
```
Spawn background agent 1: "Read AutomaticallyExitNeedsDiagInspectionEventFunction.cs in
inspection-workflow and explain the 4-trigger pattern (topic subscription, retry queue,
timer-triggered DLQ handlers). Show the key class names and where each lives. Under 200 words."
```

```
Spawn background agent 2: "Find GetNeedsDiagExitEligibilityDatabaseQuery.cs in inspection-workflow.
Explain the three-class-per-file pattern (Query, Result, QueryHandler) and the Dapper
interface conventions. Under 150 words."
```

```
Spawn background agent 3: "Find CsoConsumerApiClient.cs in inspection-workflow. Explain the
HTTP client pattern: constructor signature, method shape, DI registration. Under 150 words."
```

All three run in their own contexts. You keep working: reading the work item description, noting what the acceptance criteria require, checking which feature toggle is involved.
Five minutes later, three reports arrive. Your session context contains only the three summaries. Not the file reads, not the grep searches, not the inheritance chains each agent traced to get there. You have the information you need with none of the noise.
Verifying Assumptions Against the Database
The work item mentions a `CARLI_ENABLED` column. Before writing any SQL, verify it exists.
```
mcp__db-sql-server__execute_sql:
  query: |
    SELECT c.name, ty.name AS type, c.is_nullable, c.column_id
    FROM sys.columns c
    JOIN sys.types ty ON c.user_type_id = ty.user_type_id
    WHERE c.object_id = OBJECT_ID('ASSIGN.ASSIGNMENT')
      AND c.name LIKE '%CARLI%'
  server: nexus
  environment: test
```

The column exists: `bit`, `NOT NULL`, default `((0))`. The query you will write can depend on it. That verification took five seconds and will never appear in a post-mortem about a wrong column name.
Baseline
Before touching any code, you run the existing test suite and record the result:
```
dotnet.exe test Workflow.Function.Tests/Workflow.Function.Tests.csproj \
  --no-restore -c Release \
  --filter "FullyQualifiedName!~Deployment" \
  --verbosity minimal
```

778 tests passing, 0 failures. This is your anchor. If anything fails after your changes, the attribution is arithmetic: it passed before, so the change caused it.
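The baseline-then-delta bookkeeping is simple enough to script. A sketch with literal counts standing in for parsed `dotnet test` output; the file path, function names, and message wording are all illustrative.

```shell
# Sketch: record a baseline test count before changes, attribute any delta after.
# Real counts would come from parsing `dotnet test` output; literals stand in here.
BASELINE_FILE=/tmp/test-baseline.txt

record_baseline() { echo "$1" > "$BASELINE_FILE"; }

check_delta() {
  current=$1
  baseline=$(cat "$BASELINE_FILE")
  if [ "$current" -lt "$baseline" ]; then
    echo "REGRESSION: $((baseline - current)) tests lost vs baseline"
  else
    echo "OK: +$((current - baseline)) tests vs baseline of $baseline"
  fi
}

record_baseline 778   # before touching any code
check_delta 804       # after implementation: prints "OK: +26 tests vs baseline of 778"
```

The point is not the script; it is that the baseline turns "did I break something?" into subtraction.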
Implementation
You start implementing. The session is clean: three 150-word agent summaries, one work item file, one schema verification result. Claude has full attention on what you are doing.
The global CLAUDE.md has already established the conventions this session will follow:
- Use `git.exe` for all git operations
- Single assertions per test, `BeEquivalentTo()` for multi-property comparisons
- `await` over `Task.FromResult`
- No comments unless the why is non-obvious
You do not state any of this. It was loaded before you typed your first message, and it applies to every action in this session automatically.
Worth noting: the three background agents that ran during research operated under the same rules. When they identified patterns to follow and reported back, those reports already reflected the team’s conventions. The pattern they described for the HTTP client constructor, the DI registration shape they surfaced, the test structure they referenced, all of it was filtered through the same global and project context that governs your main session. The conventions do not apply only when you are typing. They apply to every agent that touches this work, from the first research pass through the final implementation.
As you work, three things happen silently in the background:
- The `session-state-counter` hook increments after every tool call
- Every 20 tool calls, the `session-state-injector` hook prompts Claude to write a progress snapshot to its session state file
- The `block-root-repo-search` hook stands ready to deny any broad search against `/mnt/c/repos/`
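A hook pair like the counter and injector can reduce to a few lines of shell. This is a sketch of the idea only: the counter path, the 20-call interval, and the plain-text prompt are illustrative, and Claude Code's actual hook I/O contract is not modeled here.

```shell
# Sketch: bump a per-session counter on every tool call; every 20th call,
# emit a prompt asking for a progress snapshot. Paths and interval illustrative.
COUNTER_FILE=/tmp/session-tool-calls
INTERVAL=20

on_tool_call() {
  count=$(( $(cat "$COUNTER_FILE" 2>/dev/null || echo 0) + 1 ))
  echo "$count" > "$COUNTER_FILE"
  if [ $(( count % INTERVAL )) -eq 0 ]; then
    echo "Write a progress snapshot to the session state file."
  fi
}

rm -f "$COUNTER_FILE"
for i in $(seq 1 40); do on_tool_call; done   # fires the prompt at calls 20 and 40
```

Silent on most calls, a single line when the interval hits: exactly the "stays quiet unless something needs attention" behavior described above.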
You do not see any of this. The hooks run and either stay silent (everything is fine) or surface a message (something needs attention).
Mid-Session Research
Halfway through, you realize you need to understand how the inspection-services API endpoint is shaped, specifically the route and the expected request body for completing a service.
Another background agent:
```
Spawn background agent: "In ADIS-ASL-Service, find the endpoint for completing a service
on an inspection. I need the route, HTTP method, and request body shape. Check
ServicesApiClient.cs or similar. Reply in under 100 words."
```

You keep implementing while it runs. When it comes back, it has found `POST api/inspections/v1/{inspectionId}/services/{serviceId}/complete` with the base URL. That is all you needed. One more targeted answer, zero additional noise in your main session.
Validation
You run the suite against your changes: 804 tests passing (778 baseline + 26 new), 0 failures. The delta is attributable. The baseline made the arithmetic trivial.
Committing
The work is done. Per your global CLAUDE.md:
- Work item prefix on the commit message: `181646 - auto-complete service on cert completion event`
- No AI attribution in the message or the PR body
- `--label SithHappens` on the PR
- `AB#181646` in the PR body to link the work item
Claude composes the commit and PR using these conventions without being reminded. You review the diff, approve, and push.
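String-wise, those conventions boil down to a couple of templates. A sketch; the `git.exe` and `gh` invocations are commented out since they need a live repo, though the `gh pr create` flags `--title`, `--body`, and `--label` are the real ones.

```shell
# Sketch: assemble the commit message and PR metadata from the conventions above.
WORK_ITEM=181646
SUMMARY="auto-complete service on cert completion event"

COMMIT_MSG="$WORK_ITEM - $SUMMARY"   # work-item prefix convention
PR_BODY="AB#$WORK_ITEM"              # AB# link ties the PR to the work item

echo "$COMMIT_MSG"
echo "$PR_BODY"

# git.exe commit -m "$COMMIT_MSG"
# gh pr create --title "$COMMIT_MSG" --body "$PR_BODY" --label SithHappens
```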
What Did Not Happen
You did not:
- Open Azure DevOps in a browser to copy the work item description
- Open GitHub in a browser to check PR conventions
- Remind Claude that you use `git.exe` on WSL
- Re-explain the test assertion conventions
- Wait for three sequential research passes that blocked your main session
- Guess at the `CARLI_ENABLED` column name and find out it was wrong after writing the query
- Lose track of how many tests were passing before you started
Each of those is a small friction point. Together they add up across every session, every day. The tools in guides 1 through 5 eliminate them individually. Used together, they make the texture of working with Claude qualitatively different: less tool management, more actual work.
The next guide in this series covers agent teams: coordinating multiple teammate agents across repositories in parallel, when that model applies, and how to manage it.