Adesa workflow
Plans features, decomposes them into stories, syncs to Azure DevOps, picks up work across every affected repo, implements with verification, and opens PRs.
The Adesa workflow removes the swivel-chair between Azure DevOps, multiple repo checkouts, and PR creation. You start with a feature idea and end with linked PRs across every repo that needed to change — without manually tracking which repo owns what or which story is in flight where.
When to reach for it
- A feature spans more than one repo (API + frontend, contracts + consumers, etc.).
- You want acceptance criteria captured in ADO before code lands.
- You’d rather Claude coordinate the cross-repo plumbing than do it by hand.
Getting set up
The workflow lives in a per-team planning repo (Wholesale-{TEAM}) that holds the registry of code repos, the generated context for each, and the planning artifacts — features, stories, specs, handoffs. You run every command from that repo’s working directory; the skills resolve absolute paths to your code-repo clones from there.
First-time bootstrap is three steps:
1. `/plugin install adesa-workflow --repo` — installs the plugin into the planning repo's `.claude/` so anyone who clones it picks up the same toolset.
2. `/aw-repos` — pulls your team's repo list from the central Ownership repo, scans your machine for clones, and writes `repos.json` (per-user, gitignored — paths are absolute).
3. `/aw-context --all` — analyzes every code repo and generates `.repo-context/{repo}/{overview,tech-stack,conventions,api-contracts,build-and-test,schema}.md`. Commit these once and the rest of the team is set up by step 2 alone.
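The registry's exact schema isn't documented here; as a rough illustration, a `repos.json` might map each registered repo to the absolute path of your local clone (the shape below is assumed, not confirmed):

```json
{
  "adesa-inventory-manager": "/Users/you/src/adesa-inventory-manager",
  "adesa-vtram-integration": "/Users/you/src/adesa-vtram-integration"
}
```

Because the paths are machine-specific, the file stays gitignored and each teammate regenerates their own with `/aw-repos`.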
After that, /aw is the single entry point — it checks setup, offers to plan a feature, implement a story, or address PR feedback, and if a previous session left a /aw-handoff file it offers to resume.
Key commands
- `/aw` — main entry point. Routes you into planning or implementation depending on what you say next.
- `/aw-breakdown` — decompose a feature into phased stories with implementation specs.
- `/aw-sync` — push the planned stories and tasks into ADO as work items.
- `/aw-pickup` — pull a story from ADO, resolve which repos it touches, and create feature branches.
- `/aw-preflight` — blocking validation of contracts, wiring, and configuration before code is written. A HOLD verdict prevents `/aw-execute` from running until the impl spec is fixed.
- `/aw-execute` — implement the per-repo specs across repos with verification and atomic commits.
- `/aw-review` — multi-perspective code review across all affected repos before opening PRs.
- `/aw-verify` — validate the implementation against the ADO acceptance criteria with evidence.
- `/aw-pr` — open PRs in each affected repo, linked back to the work item.
- `/aw-feedback` — pull PR review comments from all affected repos and address them.
- `/aw-reflect` — capture lessons learned that feed back into the playbooks and future breakdowns.
Supporting commands
- `/aw-discover` — interrogate a feature: research code patterns, query schemas, gather requirements.
- `/aw-coordinate` — surface cross-story dependencies, ordering constraints, and integration risks.
- `/aw-feature-review` — cross-story consistency, coverage, and architecture review for multi-story features.
- `/aw-status` — feature progress dashboard from ADO.
- `/aw-handoff` — save session state for resume in a new conversation. `/aw {work-item-id}` in the next session detects the handoff file and offers to pick up where you left off.
- `/aw-context` — bootstrap or refresh repo context. Use `--all` for every repo in the registry; otherwise it incrementally updates the ones you point it at.
- `/aw-repos` — set up or refresh the team repo registry.
Playbooks: how team patterns stay consistent
The workflow consults a set of named team patterns during planning, implementation, and review so that conventions don’t depend on whoever happens to be reviewing the PR. They live in the plugin’s guides/playbooks/ and ship with each version. Two tiers:
Rules — enforced. /aw-preflight blocks if the implementation spec doesn’t account for them. /aw-execute self-checks after writing. /aw-review flags violations on PRs. Reserved for invariants that are mechanically checkable (grep, JSON parse, attribute parse).
Recipes — advisory. /aw-discover and /aw-breakdown surface them as “here’s how the team usually does X.” No blocking. Used for recommended approaches where exceptions are reasonable.
Each playbook declares its trigger keywords in INDEX.md, so discover and breakdown can grep-match relevant playbooks against a feature description without loading every file.
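Mechanically, that lookup can be as simple as substring matching against the feature description. A minimal sketch in Python (the playbook names and trigger keywords below are invented for illustration, not taken from the plugin's INDEX.md):

```python
# Hypothetical trigger-keyword registry in the shape INDEX.md implies;
# the keywords per playbook here are assumptions, not the real ones.
PLAYBOOK_TRIGGERS = {
    "servicebus-trigger-setup": ["service bus", "servicebustrigger", "topic"],
    "timer-trigger-setup": ["timer", "schedule", "cron"],
    "config-value-plumbing": ["appsettings", "connection string"],
}

def match_playbooks(feature_description: str) -> list[str]:
    """Return the playbooks whose trigger keywords appear in the description."""
    text = feature_description.lower()
    return sorted(
        name for name, keywords in PLAYBOOK_TRIGGERS.items()
        if any(kw in text for kw in keywords)
    )

print(match_playbooks("Publish a message to a Service Bus topic on failure"))
# ['servicebus-trigger-setup']
```

The point of the keyword index is exactly this cheapness: discover and breakdown can scope the candidate playbooks without loading every guide into context.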
Current rules
- config-value-plumbing — `appsettings.json`, `local.settings.json`, deploy params, trigger `%Token%` references, `Bind`/`GetSection`/`GetConnectionString` calls. Two variants — App Service Functions (`:` separator) and ACA (`__` separator) — because the host resolves config differently.
- infrastructure-dependencies — any new Cosmos / Service Bus / Storage / Key Vault binding. Branches behavior on YAML vs Classic release pipelines: YAML repos auto-fix in `cicd/`, Classic repos BLOCK with a manual release-update checklist (the release definition isn't in the repo, so a human has to touch it).
- release-dependency-check — typed clients, dep checks, and post-deploy readiness probes. Triggered by Cosmos/Service Bus/Storage/HTTP clients, `AddHttpClient`, `AddHealthChecks`, slot-swap readiness gates, Newman/Postman smoke tests.
- servicebus-trigger-setup — `[ServiceBusTrigger]` on isolated-worker Functions. Covers typed-const `Connection`, sessions opt-in, host.json tuning.
- cosmos-changefeed-trigger-setup — `[CosmosDBTrigger]` on isolated-worker Functions. Covers lease container in Bicep, idempotency declaration, container-name token vs constant.
- timer-trigger-setup — `[TimerTrigger]`. Covers schedule via token, `RunOnStartup` guard, monitor lease.
- http-trigger-setup — `[HttpTrigger]`. Covers the team's Anonymous + JWT-middleware + `[Authorize]` auth model, route conventions, exception → status mapping.
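"Mechanically checkable" means a preflight check can be a few lines of parsing. As a sketch of the config-value-plumbing idea, the check below collects `%Token%` references from source files and verifies each has a matching key in `local.settings.json` (a Python analogue for illustration; the file layout and check logic are assumed, not the plugin's actual implementation):

```python
import json
import re

def find_unresolved_tokens(source_files: dict[str, str], local_settings: str) -> set[str]:
    """Return %Token% references with no matching key under local.settings.json's Values."""
    values = json.loads(local_settings).get("Values", {})
    tokens: set[str] = set()
    for text in source_files.values():
        tokens.update(re.findall(r"%([A-Za-z_][A-Za-z0-9_:]*)%", text))
    return {t for t in tokens if t not in values}

# Hypothetical repo content: one trigger references a token the settings don't define.
sources = {
    "Functions/Listener.cs":
        '[ServiceBusTrigger("%InvestigationsTopic%", Connection = "SbConnection")]'
}
settings = '{"Values": {"SbConnection": "..."}}'
print(find_unresolved_tokens(sources, settings))  # {'InvestigationsTopic'}
```

A non-empty result is exactly the kind of finding that would justify a preflight HOLD: the trigger would fail at host startup, and that is cheap to prove before any code is written.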
Current recipes
- project-structure — scaffolding a new .NET repo, adding a new layer (Domain / Infrastructure / Application / Host), or evaluating a structural refactor. Covers the layered DAG, per-layer folder shape, test-project parity, naming conventions, and Aspire conventions for ACA repos.
A playbook gets added when /aw-reflect surfaces the same friction twice, when a PR comment has been left on three or more PRs, or when a platform constraint has bitten the team in production. One-off code style preferences belong in .editorconfig, not here.
What lands in ADO: one PBI, one task per repo
The breakdown step produces a single PBI with a child task for each affected repo. The split is intentional — the PBI is the human view, the tasks are the machine view.
The PBI is written so a PM, designer, or stakeholder can read it cold and understand what’s shipping and why. Plain language, business framing, acceptance criteria phrased as user-visible behavior — not “add a channelId column to the InventoryItem table.” It’s the artifact you’d paste into a release note or hand to QA.
Each task carries the prescriptive implementation spec for exactly one repo: which files to touch, which contracts to honor, which tests to add, which migration to write, what existing patterns to mirror. The task is detailed enough that /aw-pickup followed by /aw-execute can implement against it without re-asking what “the change” means in that repo. When acceptance criteria fall on a single repo, the spec includes the exact verification steps /aw-verify will run.
This gives you two readers without compromising either: stakeholders read the PBI and never see GraphQL field names or queue topics; engineering (and Claude) read the tasks and get the precision needed to implement without guessing. When the work changes shape mid-flight, the affected task gets updated — the PBI stays stable as the contract with the rest of the org.
See it in ADO
A real, in-flight feature: the IMS Admin Support Triage Workstation (Feature #179921). It replaces a workflow where support engineers monitor Slack alerts, investigate across 10+ systems, and open PRs for USN fixes — with a web app inside IMS that surfaces the alerts and runs the fixes inline. The architecture won’t fit in a single repo:
```
┌──────────────────────────────────────────────┐
│ New: adesa-inventory-admin (React + BFF)     │
└──────────┬───────────────────────────────────┘
           │ M2M
   ┌───────▼──────────┬──────────────────────┐
   ▼                  ▼                      ▼
adesa-inventory-   adesa-vtram-      adesa-digital-
manager (IMS)      integration      inventory-mgr (DIMS)
```

`/aw-breakdown` decomposed it into 11 PBIs — each scoped to one coherent story, each with one or more child tasks attached for the repos it touches. The per-repo specs live alongside as `*.impl.md` files in the planning branch, and `/aw-sync` is what materialized them as the PBI + tasks you see in ADO.
Drill into one PBI — #182376: P2-07 — vTram Failures Publish Investigations
> Make `adesa-vtram-integration` publish `InvestigationMessage` to the `inventory-investigations-topic` Service Bus topic whenever any V1, V2, or DIMS top-level orchestration fails, so vTram-side stuck vehicles show up in the support triage workstation alongside IMS and DIMS-from-IMS failures.
The acceptance criteria stay at user-visible behavior — “when any of the 12 V2 + 10 V1 + 1 DIMS top-level orchestrations invokes its existing failure-logging activity, exactly one InvestigationMessage is sent to inventory-investigations-topic,” “the consumer in adesa-inventory-admin receives the message and persists an InvestigationDocument with OperationType matching the vTram value (NOT Unknown),” “no publisher invocation occurs for sub-orchestration failures.”
This PBI touches two repos. The breakdown produced two child tasks:
Task #182483 — adesa-inventory-admin: Extend InvestigationOperationType with vTram values
Add 24 new enum values (13 V2 + 10 V1 + 1 DIMS) to the consumer enum without changing existing ordinals. Schema-compatible extension only — no new config keys, no infra changes. Unit tests assert ordinals are preserved and `Enum.TryParse` round-trips each new value.
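The ordinal-preservation requirement is easy to picture in any language. A Python analogue (member names and values below are illustrative; the real enum is C# and its members aren't listed here):

```python
from enum import IntEnum

class InvestigationOperationType(IntEnum):
    # Existing members: ordinals must never change once persisted documents exist.
    UNKNOWN = 0
    IMS_INGEST = 1
    # New vTram members are appended after the existing range, never inserted,
    # so documents written with the old enum still deserialize correctly.
    VTRAM_V2_CHECK_IN = 2
    VTRAM_V1_TITLE_UPDATE = 3

def round_trips(name: str) -> bool:
    """Parse-by-name returns the same member (mirrors the Enum.TryParse test)."""
    return InvestigationOperationType[name].name == name

assert InvestigationOperationType.UNKNOWN == 0   # existing ordinal preserved
assert round_trips("VTRAM_V2_CHECK_IN")
```

The "append, never insert" discipline is what makes the task shippable independently: old messages and old documents keep their meaning while the new values wait for a publisher.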
Task #182484 — adesa-vtram-integration: Migrate OrchestrationFailures to Cosmos and publish via change feed
The heavy lift on the publisher side: new Cosmos container `OrchestrationFailures` (partition key `vehicle/{vin}`, 6-month TTL), new `OrchestrationFailureChangeFeedListener` Function, new `InvestigationServiceBusPublisher`, refactored `IOrchestrationFailureService` with a LaunchDarkly-gated dual-read path for cutover, plus the Bicep container resource and the five plain ARM variables that resolve the foreign Service Bus namespace's connection. The 21+ orchestration files are explicitly NOT touched — they already invoke the failure-logging activity, and that's the only hook needed.
Both tasks are scoped, but they aren’t independent — the PBI’s “Cross-Repository Integration Requirements” section spells out the deployment order: the admin enum must ship first, otherwise vTram failures land as Unknown on the consumer side and lose triage granularity. /aw-coordinate surfaced that constraint during planning so the PBI carries it as a requirement rather than as tribal knowledge that lives in someone’s head.
Full per-repo specs live in the planning repo under features/179921/stories/ — each story has a <id>-<slug>.md (the PBI body) and a <id>-<slug>.impl.md (the per-repo task specs that get sync’d as task descriptions).
For a smaller single-repo example, see #182464: P01-03 Expire Site-Level Catch-All Pricing Rules and its child task #182465.
The two artifacts that make it work
The workflow leans on two generated docs in the Wholesale-Avengers planning repo. Both are synthesized from per-repo .repo-context/{repo}/{schema,build-and-test,tech-stack,dependencies,conventions}.md files that the workflow generates and refreshes incrementally — so the picture stays current as repos change.
dependency-map.md — who calls whom
A flat table of every service-to-service edge across all 43 repos: from-repo, to-repo, protocol (HTTPS/REST, GraphQL, Service Bus, Storage Queue, Kafka, OAuth, etc.), and the specific path or topic. Plus a “Shared Data Stores” section grouping which repos touch which database, blob container, or table — including the exact SDK version each repo is on.
When /aw-discover runs against a feature, it walks this map to answer: if I change repo A, who breaks? Which auth flows are involved? Which shared databases do downstream consumers read from? /aw-coordinate uses the same map to derive ordering constraints (publisher must ship before consumer, schema migration must precede code that reads the new column, etc.) so the breakdown produces stories in an order that can actually merge.
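Deriving a shippable order from those edges is essentially a topological sort. A minimal sketch under assumed edge data (`graphlib` is stdlib Python 3.9+; the repo names are placeholders, not entries from the real map):

```python
from graphlib import TopologicalSorter

# Hypothetical constraints read off dependency-map.md: mapping each repo to
# the set of repos that must ship before it.
deps = {
    "consumer-repo": {"publisher-repo"},  # consumer reads what the publisher emits
    "publisher-repo": set(),
}

# static_order() yields prerequisites first, i.e. a valid deployment order.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['publisher-repo', 'consumer-repo']
```

The same sort would surface a cycle (`CycleError`) when two stories each depend on the other, which is precisely the kind of integration risk `/aw-coordinate` is meant to flag during planning rather than at merge time.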
pattern-index.md — how each repo does things
A repo-by-repo cross-reference of architectural choices: data store SDK and version, pipeline pattern (trunk vs git-flow vs classic), build agent, hosting target (App Service vs ACA vs Functions), Functions worker model (in-process vs isolated), messaging producers/consumers per topic, framework versions, and convention notes per repo.
When /aw-pickup lands you in a repo and /aw-execute starts implementing, the spec already knows: this repo uses Cosmos SDK 3.46.1 with a custom Newtonsoft serializer, builds on windows-2025, follows git-flow with develop/release/hotfix, deploys via run-from-package, and produces to the inventory-investigations-topic Service Bus topic. New code matches existing patterns instead of inventing a fourth way to connect to the database in a codebase that already had three.
Why this combo matters
Without the map, you implement a change in one repo and find out at PR time that two consumers downstream are now broken. Without the pattern index, you implement a change in a repo and find out at code review that the team uses a different ORM, different test framework, and different deploy strategy than you assumed. Together they let the workflow plan and execute against real cross-repo state, not against guesses.