Platform workflow
A PBI-driven workflow built on project-local skills in Wholesale-Platform-Team — create, refine, implement, iterate on pipelines, open PRs, and review, all without leaving the terminal.
The Platform workflow is a set of project-local skills in the Wholesale-Platform-Team repo that cover the full lifecycle of a PBI — from creation through implementation to PR review. You open Claude Code in the team repo, and the skills handle the ADO plumbing, research, branching, CI iteration, and review mechanics so you stay focused on the actual work.
When to reach for it
- You’re starting a new piece of work and need a PBI created with the right fields, parented under a Feature, and linked to a PR.
- You have a sparse PBI that needs research and refinement before anyone can implement it.
- You’re ready to implement — you want the PBI fetched, the context researched, a feature branch created, and the work tracked through to a PR.
- You need to review a teammate’s PR with blast radius analysis, platform convention checks, and API/UI verification.
Getting set up
Clone the team repo and open Claude Code from it:
```
git clone https://github.com/CVNA-Wholesale/Wholesale-Platform-Team.git
cd Wholesale-Platform-Team
claude
```

The repo ships with everything wired up:
- `CLAUDE.md` — team conventions, ADO defaults (project: `Adesa`, area path: `ADESA\Platform Team`, iteration: `ADESA\2026`), PR workflow rules, and pipeline naming standards.
- `.mcp.json` — connects the Platform Graph MCP (for dependency queries and impact analysis) and the Slack MCP (for channel context and PR monitoring).
- `.claude/skills/` — the six project-local skills described below.
- `.claude/agents/` — three specialized agents for cross-repo planning, dependency auditing, and repo reconnaissance.
- `platform-kb/` — knowledge base files including `KNOWLEDGE-BASE.md` (overview of all 70 Platform Team repos), `DEPENDENCY-GRAPH.md` (Mermaid diagrams of repo relationships), and `yaml-arm-cross-reference.md` (file-level mapping of yaml-templates to azure-arm-templates).
You also need the platform plugin installed (`/plugin install platform`) for pipeline monitoring, diagnostics, and the other infrastructure skills that complement this workflow.
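Before the first run, it is worth confirming that the two CLIs the skills depend on are actually available. A minimal pre-flight sketch (authentication itself is done manually with `az login` and `gh auth login`):

```shell
# Pre-flight: check that the az and gh CLIs are on PATH.
missing=0
for cli in az gh; do
  if command -v "$cli" >/dev/null 2>&1; then
    echo "$cli: found"
  else
    echo "$cli: missing"
    missing=$((missing + 1))
  fi
done
echo "$missing CLI(s) missing"
```

If either CLI is missing, install and authenticate it before invoking any of the skills below.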
The PBI lifecycle
Step 1: Create — /create-pbi
```
/create-pbi Add container job support to sandbox manifest
```

The skill collects everything ADO needs for a well-formed PBI:
- Title and description — if you give a brief description, it collaborates with you to flesh it out into a proper write-up covering what, why, how, and scope.
- Acceptance criteria — specific, testable conditions written as verifiable statements.
- Effort — Fibonacci sizing (1, 2, 3, 5, 8, 13).
- Parent Feature — every PBI must be parented under a Feature (an ADO PR check enforces this). The skill queries active Features and presents options.
- Flow Item Type — `Feature`, `Defect`, `Debt`, `Risk`, or `Automation PR`.
- Board column — defaults to `New`, but can place the PBI on any board column including `Design`, `On Deck`, or `Delayed (AKA BLOCKED)` by setting the right combination of ADO state and WEF Kanban field.
After creation, it parents the PBI under the selected Feature and optionally links it to a GitHub PR via `AB#<PBI_ID>` in the PR body.
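Under the hood, creation maps to a single `az boards` call. A hedged sketch of what that invocation looks like (the concrete values are examples the skill would collect in conversation; Effort uses the standard Scrum field reference name):

```shell
# Illustrative az boards call behind PBI creation. Values are examples;
# Microsoft.VSTS.Scheduling.Effort is the standard Scrum effort field.
cmd='az boards work-item create
  --project Adesa
  --type "Product Backlog Item"
  --title "Add container job support to sandbox manifest"
  --area "ADESA\Platform Team"
  --iteration "ADESA\2026"
  --fields "Microsoft.VSTS.Scheduling.Effort=3"'
# eval "$cmd"   # uncomment with an authenticated az session
printf '%s\n' "$cmd"
```

The skill runs the equivalent for you, so this is only useful when debugging or scripting outside the workflow.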
Step 2: Refine — /refine-pbi
```
/refine-pbi 184523
```

Takes a sparse PBI and does deep research before asking you anything:
- Fetches the PBI and its parent Feature from ADO.
- Identifies repos and code — parses the description for repo names, queries the platform graph, greps across local and remote repos.
- Maps dependencies and blast radius — runs `impact_analysis` and `file_impact_analysis` against the platform graph to understand what could break.
- Researches external docs — if the PBI involves an external service or API, it searches for official documentation and collects URLs.
- Finds existing patterns — looks for similar implementations in the codebase, checks for open PRs or in-flight work.
- Presents findings — structured summary of repos involved, key files, dependencies, existing patterns, and external docs found. Only then does it ask clarifying questions — things tools couldn’t answer.
After alignment, it updates the PBI with a structured HTML description covering Goal, Background, Approach, Acceptance Criteria, Key References (with hyperlinks to every external doc it found), and Notes. The PBI becomes the canonical source of truth — detailed enough that anyone on the team could pick it up.
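The description it writes is plain HTML, since that is what the ADO rich-text field stores. A sketch of the skeleton (section names from the workflow above, placeholder body content, and the update call it would feed into shown commented):

```shell
# Skeleton of the structured description written back to the PBI.
# Body content is placeholder text; only the section layout is shown.
desc='<h3>Goal</h3><p>...</p>
<h3>Background</h3><p>...</p>
<h3>Approach</h3><p>...</p>
<h3>Acceptance Criteria</h3><ul><li>...</li></ul>
<h3>Key References</h3><ul><li><a href="https://example.com">doc</a></li></ul>
<h3>Notes</h3><p>...</p>'
# az boards work-item update --id 184523 --description "$desc"
sections=$(printf '%s' "$desc" | grep -c '<h3>')
echo "$sections sections"
```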
Step 3: Implement — /work-pbi
```
/work-pbi 184523
```

This is the workhorse. It follows the same research-first approach as refine, but then goes all the way through implementation and PR submission:
Research phase — fetches the PBI and parent Feature, queries the platform graph for dependencies, reads the relevant code, checks for open PRs and existing branches. Presents findings and asks only the questions it couldn’t answer through tools.
Plan and align — once you confirm the approach, it creates an implementation plan using tasks, updates the PBI with any new findings, and moves it to Work In Progress on the board.
Implement — pulls latest from master, creates a feature branch, implements the changes, and runs tests (ensuring >80% coverage).
PR submission — commits, pushes, and creates a PR with AB#<PBI_ID> in the body and a test plan checklist.
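The `AB#<PBI_ID>` token is the piece ADO actually parses to link the PR back to the work item. A sketch of a generated body (the section wording is assumed; the `gh` call is shown commented):

```shell
# Hypothetical PR body; the trailing AB# token creates the ADO link.
pbi_id=184523
pr_body="## Summary
Add container job support to the sandbox manifest.

## Test plan
- [ ] Unit tests pass with >80% coverage
- [ ] Pipeline green in the test environment

AB#${pbi_id}"
# gh pr create --title \"Add container job support\" --body \"$pr_body\"
link=$(printf '%s\n' "$pr_body" | grep -Eo 'AB#[0-9]+')
echo "work item link: $link"
```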
CI and bot review iteration — this is where the platform plugin skills kick in. After the PR is up:
- Polls CI checks until they pass. If a pipeline fails, uses the platform plugin’s pipeline monitoring to pull build logs, identify the failure, fix the issue, and re-trigger — all in the same session.
- Waits for Copilot review, then fetches its inline comments.
- Waits for `github-code-quality[bot]` review if active on the repo.
- For each bot comment: reads the code being referenced, determines whether the concern is real, and either fixes it or replies with a specific technical explanation of why it doesn’t apply.
- Resolves bot review threads after replying (never resolves without replying first, never resolves human reviewer threads).
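The poll-and-fix loop above is just repeated status checks. Sketched below with a stub standing in for the real status query (in practice something like `gh pr checks`), so the control flow is visible without a live PR:

```shell
# check_ci is a stub for the real status query (e.g. `gh pr checks`);
# here it pretends CI turns green on the third poll.
attempt=0
check_ci() { [ "$attempt" -ge 3 ]; }
until check_ci; do
  attempt=$((attempt + 1))
  # real loop: pull build logs on failure, fix, re-trigger, sleep, re-poll
done
echo "CI green after $attempt polls"
```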
Step 4: Review — /review-pr
```
/review-pr CVNA-Wholesale/platform-portal-api#42
```

A thorough, skeptical code review that uses GitHub’s formal review mechanism:
- Reads every changed file in full — not just the diff, but the whole file for context. Traces call sites, checks implementations.
- Platform convention checks — pipeline names end in `-rc`, `YamlPipelineValidator@1` is the first task, the production SPN is correct, `catalog-info.yaml` is present, no hardcoded secrets.
- Blast radius analysis — queries the platform graph for every PR to assess cross-repo impact. For shared repos (yaml-templates, azure-arm-templates), runs file-level impact analysis on each changed file.
- API change verification — if endpoints changed, tests them against the test environment using Entra tokens. Checks for breaking changes to request/response contracts.
- UI change verification — if frontend code changed, uses Playwright to test the Cloudflare Pages deploy preview. Takes screenshots, checks console errors, tests interactions.
- Deep code review — correctness, security, performance, maintainability, and test coverage. Each finding is tagged as Blocking, Warning, or Info.
- Submits a formal review — APPROVE, REQUEST_CHANGES, or COMMENT with line-level comments and a structured summary including blast radius, convention compliance, and a clear verdict.
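Some of the convention checks are mechanical enough to express as one-liners. A sketch against a fabricated pipeline (both the name and the YAML content here are made-up examples):

```shell
# Illustrative convention checks on a fabricated pipeline.
pipeline_name="platform-portal-api-rc"
yaml='steps:
  - task: YamlPipelineValidator@1
  - task: DotNetCoreCLI@2'
case "$pipeline_name" in
  *-rc) name_ok=yes ;;
  *)    name_ok=no ;;
esac
first_task=$(printf '%s\n' "$yaml" | grep -m1 'task:' | awk '{print $3}')
echo "name ok: $name_ok; first task: $first_task"
```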
Supporting skills
- `/slack-pr-watcher` — monitors the team’s Slack channel for PR review requests and surfaces them. Useful when you want to stay on top of incoming reviews without checking Slack.
- `/catalog-yaml-templates` — inventories yaml-templates usage across consumer repos. Useful when planning changes to shared pipeline templates.
Project-local agents
Three agents in `.claude/agents/` complement the skills for cross-repo work:
- cross-repo-planner — when a PBI spans multiple repos, this agent queries the platform graph for dependencies and generates a sequenced execution plan. Spawned as a subagent during `/work-pbi` when cross-repo coordination is needed.
- dependency-auditor — audits a repo’s dependency tree for staleness, circular dependencies, and orphaned references. Useful as a pre-flight check before major changes.
- repo-scout — lightweight reconnaissance agent for quickly profiling an unfamiliar repo. Spawned when `/work-pbi` lands in a repo the user hasn’t worked in before.
Where the platform plugin fits in
The project-local skills handle the PBI lifecycle — create, refine, implement, review. The platform plugin (`/plugin install platform`) provides the infrastructure toolkit that complements this workflow:
- Pipeline iteration — `/run-ado-pipeline` to trigger builds, monitor status, pull logs, and iterate on failures. This is the skill that gets the most use during the “CI iteration” phase of `/work-pbi`.
- Pipeline comparison — `/compare-ado-pipelines` to diff environments (test vs UAT vs prod) when a pipeline works in one environment but fails in another.
- Database diagnostics — `/diagnose-sql`, `/diagnose-pg`, `/sql-query-tuner`, `/pg-query-tuner`, etc. for investigating performance issues without leaving the terminal.
- ACA operations — `/diagnose-aca` for container startup failures, plus the full ACA migration and scaffolding orchestrators for bigger infrastructure work.
- Observability — `/query-app-insights` for running KQL queries against Application Insights during incident investigation.
- Auth0 — `/auth0-setup` for registering new applications or APIs.
Notes
- All skills use the `az boards` CLI for ADO operations and the `gh` CLI for GitHub operations — make sure both are authenticated before starting.
- The platform graph MCP is connected via `.mcp.json` in the repo. If the graph tools aren’t responding, check that the MCP is connected via `/mcp`.
- The `/review-pr` skill defaults to REQUEST_CHANGES when uncertain — it’s designed to be skeptical. It will never resolve human reviewer threads, only bot threads.
- Every PBI must be parented under a Feature. The `ADO Work Item Validation` PR check will reject PRs linked to unparented PBIs.
- The skills handle board column mapping automatically — the Platform Team board has columns (Design, Delayed, Resolved) that require a combination of `System.State` and a WEF Kanban field update, not just a state change.
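For the curious, that dual update looks roughly like this. `<board-guid>` is a placeholder, since the real WEF field name embeds a GUID specific to the Platform Team board, and the state value shown is illustrative:

```shell
# Sketch of a board-column move to Design. <board-guid> is a placeholder
# for the board-specific GUID; System.State value is illustrative only.
wef_field='WEF_<board-guid>_Kanban.Column'
cmd="az boards work-item update --id 184523 --fields \"System.State=Committed\" \"${wef_field}=Design\""
# eval "$cmd"   # needs an authenticated az session and the real GUID
printf '%s\n' "$cmd"
```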