Wholesale AI Champions Claude Code playbook
Planning + end-to-end PBI delivery

Eagles workflow

Decomposes features into interrogated stories with per-repo task specs, syncs them to Azure DevOps, then delivers each PBI end-to-end — agent team, PR, pipeline verification, testing checklists, retrospective, and process audit.

The Eagles workflow covers the full lifecycle of a feature: from a raw idea or brief through interrogated story specs and into automated delivery. The upstream half lives in the Eagles-story-planning repo and produces Azure DevOps PBIs with per-repo task specs detailed enough that a coding agent can implement them without asking questions. The downstream half — /deliver-pbi — picks up a PBI and orchestrates an agent team through a gated pipeline that ends with a merged PR, testing checklists, and a process retrospective. Planning is the hard part; delivery is mechanical by design.

When to reach for it

  • You have a feature idea (a brief, PRD, meeting transcript, or ADO feature) and want to turn it into implementation-ready stories before code starts.
  • A feature spans more than one repo and you’d rather plan the cross-repo coordination than discover it at PR time.
  • You want acceptance criteria, technical specs, and per-repo task breakdowns captured before anyone writes a line of code.
  • You have a PBI ready to implement and want to go from spec to PR without manually coordinating research, implementation, testing, review, and documentation.

How the planning repo is organized

The workflow lives in Eagles-story-planning. The relevant pieces:

  • planning/CLAUDE.md — the canonical guide every agent reads. Defines the interrogation methodology, story/task templates, quality standards, and the 14-category review checklist.
  • planning/.agent-rules/ — 25 modular rule files organized by domain: core process (interrogation protocol, two-tier structure, quality standards), technical patterns (database standards, C# conventions, feature toggles, pattern discovery, timer functions), workflows (story creation, ADO integration, agent coordination), and reference materials (CSO message registry).
  • planning/repos/ — the multi-repo registry. repos.json catalogs all 91 Eagles repositories across 13 categories (PowerBuilder, .NET API, database schema, CSO messaging, transmission, Azure Functions, and more). local-config.json (gitignored) maps repo names to local clone paths.
  • planning/features/{id}-{name}/ — one folder per feature with the feature overview, story files, and working notes.
  • planning/.templates/ — templates for features, stories, and tasks.
  • .claude/agents/ — 13 specialized agents for decomposition, discovery, review, implementation, and ADO sync.
  • .claude/skills/ — skills for each phase: discover-feature, discover-story, implement-team, ado-sync, process-feedback, code-review, repo-discovery, yaml-pipeline, and more.
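
To make the registry split concrete, a repos.json entry might look like this (the repo name, fields, and path below are hypothetical, not the actual schema):

```json
{
  "name": "Eagles.Orders.Api",
  "category": "dotnet-api",
  "description": "Order intake API",
  "default_branch": "main"
}
```

with the gitignored local-config.json mapping that name to a clone path on your machine, e.g. `"Eagles.Orders.Api": "C:/src/eagles-orders-api"`. Keeping paths out of repos.json is what lets the catalog be shared while clone locations stay per-developer.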

The planning lifecycle

1. Feature decomposition

/discover-feature takes a feature brief, PRD, or transcript and produces a phased story breakdown. It builds a knowledge base of related prior work, separates confirmed scope from conditional scope, identifies what’s unknown, and produces a shape-of-work with a rough story count range and risk assessment. Output is a feature overview file plus a stories folder with one file per story.

2. The interrogatory loop

Each story goes through a four-stage cycle that repeats until no knowledge gaps remain:

Discover. Search the codebase before asking anyone anything. Scan planning/features/ for similar stories. Grep repos for technical patterns, following the pattern-discovery rules (find 3-5 examples, extract the common convention, validate consistency). Query databases (AMS, TranAMS, CorpRes, AuctionAccess) via the SQL MCP for schema details. Build a list of code references with exact file paths and line numbers.

Identify knowledge gaps. Structured analysis across seven categories: implementation choices (multiple valid options exist), missing requirements (no toggle spec, no config, no error messages), technical details (need exact property names, types, values), pattern discovery (don’t know which convention to follow), database schema (need exact column definitions), preservation risks (existing behavior that could break), and cross-cutting concerns (logging, auth, backwards compatibility).

Interrogate. Five-stage question progression — open-ended discovery (what and why), product context integration (how this connects to existing systems), scope and constraints (toggles, config, boundaries, choices), technical requirements (exact specs), and validation (verify understanding). The rule: if “OR” conditions remain anywhere, you’re not done interrogating.

Integrate. Document findings, check for contradictions, identify new gaps that emerged from the answers. If new gaps exist, loop back. If all gaps are closed, proceed to story writing.
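
The four stages compose into a loop that only exits when no gaps remain. A minimal sketch (the injected functions are stand-ins for the real discovery and interrogation work, not actual APIs):

```python
def interrogate_story(story, discover, identify_gaps, interrogate, integrate):
    """Repeat discover, identify, interrogate, integrate until no gaps remain."""
    while True:
        findings = discover(story)             # search code and prior stories first
        gaps = identify_gaps(story, findings)  # the seven gap categories
        if not gaps:
            return story                       # every "OR" resolved: write the story
        answers = interrogate(gaps)            # five-stage question progression
        story = integrate(story, answers)      # document answers; new gaps loop back
```

The loop shape is the point: answers can open new gaps, so the cycle repeats rather than running once.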

3. Story and task authoring

Stories and tasks are a two-tier structure written for different audiences:

Stories are behavioral and human-focused: what is changing, why, acceptance criteria phrased as user-visible behavior, feature toggle requirements, configuration overview, cross-repo integration needs. A PM or stakeholder can read a story cold and understand what’s shipping.

Tasks are technical and AI-focused: one per repository, self-contained, with the story objective and applicable ACs duplicated (coding agents only see their task, not the parent story). Each task carries exact file paths and line numbers for modification targets, pattern references to existing code to follow, feature toggle implementation details (key, provider, usage pattern, context, default), configuration details (exact keys, types, defaults, locations), test specifications (Given/When/Then with exact test data), and a validation plan with goalposts and failure diagnostics.

Information duplication between story and task is intentional — it’s the price of self-contained task specs that don’t require the agent to cross-reference the parent.

4. Validation planning

Every task carries a validation section built on a four-tier model: Tier 1 (unit/integration tests via dotnet test), Tier 2 (system tests against real infrastructure), Tier 3 (structural checks — file existence, config keys, DI registration), and Tier 4 (deployed system verification — DB queries, App Insights, API calls). Each tier specifies goalposts, a verification approach, and failure diagnostics. Even config-only changes get at least a Tier 3 structural check.
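
A task's validation section might be shaped like this (a hypothetical sketch; the real templates live in planning/.templates/, and the toggle and key names below are invented):

```markdown
## Validation

### Tier 1: unit/integration
- Goalpost: `dotnet test` passes, including new toggle-on/toggle-off cases
- On failure: check DI registration for the toggle provider first

### Tier 3: structural
- Goalpost: config key `Features:NewOrderFlow` exists in appsettings.json
- On failure: compare against the pattern reference cited in the task
```

Pairing each goalpost with a failure diagnostic is what keeps validation actionable for an agent rather than a pass/fail afterthought.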

5. Final review and ADO sync

Before anything lands in Azure DevOps, the story passes a 14-category checklist: format and references, ambiguity elimination, completeness (toggles, config, error scenarios, all test types), precision (exact names, values, types), internal consistency, code references, database schema validation, and more. If any items fail, the story goes back for fixes.

/ado-sync then pushes the stories as Product Backlog Items (not User Stories — the ADESA project requires Custom.FlowItemType = "Feature") with child Task work items, one per repo. Stories and tasks are committed atomically — story-task unity means any change to a story triggers a task impact assessment.
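
Azure DevOps creates work items from a JSON Patch document, so the core of the sync step reduces to building a patch body per story. A minimal sketch (the real skill also creates child Tasks, links, and handles atomic commits):

```python
def pbi_patch_document(title: str, description: str) -> list[dict]:
    """Build the JSON Patch body for creating a Product Backlog Item.

    System.Title and System.Description are standard ADO fields;
    Custom.FlowItemType is the ADESA-specific field noted above.
    """
    return [
        {"op": "add", "path": "/fields/System.Title", "value": title},
        {"op": "add", "path": "/fields/System.Description", "value": description},
        {"op": "add", "path": "/fields/Custom.FlowItemType", "value": "Feature"},
    ]

# POSTed to the work-item create endpoint with the type in the URL
# ($Product Backlog Item) and Content-Type: application/json-patch+json
```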

Delivering: /deliver-pbi

Once a PBI is in ADO with interrogated specs, /deliver-pbi [ID] picks it up and runs it through an eight-phase gated pipeline. Each phase has a hard gate the orchestrator verifies before advancing.

Phase 0 — Setup. Fetch the PBI, verify its parent Feature, clone repos, decompose ACs if needed, create a task DAG.

Phase 1 — Agent team. Launch three agents in parallel: Engineer (research → plan → implement → /simplify → Sonar sweep → telemetry scan), Quality Engineer (tests matching repo patterns), Product Manager (validate each AC with evidence). The PM routes rejections back for fix cycles. A telemetry assessment gate checks logging, error observability, and SP error capture.

Phase 2 — PR. Push the feature branch, open a PR linked to the ADO work item, wait for Copilot review.

Phase 3 — Monitor. Loop: fix PR check failures and Copilot comments, resolve review threads via the GraphQL API, re-check. Circuit breaker after five cycles. Then trigger a build and dev deploy on the feature branch (auto-skipped for SQL/DB repos).
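
The fix-and-recheck loop with its circuit breaker reduces to a few lines. A sketch (the check and fix callables stand in for the real CI-status and GraphQL calls):

```python
MAX_FIX_CYCLES = 5  # circuit breaker threshold from the workflow

def monitor(check, fix):
    """Cycle until checks and review comments are clean, or trip the breaker."""
    for _ in range(MAX_FIX_CYCLES):
        failures = check()      # failing PR checks plus open Copilot comments
        if not failures:
            return "clean"
        fix(failures)           # push fixes, resolve threads
    return "circuit-breaker"    # escalate instead of looping forever
```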

Phase 4 — Document. Generate QA and UAT testing checklists (pushed to Eagles-story-planning/Testing-Artifacts/{ID}/ and attached to the ADO work item). Write a process retrospective (pushed to Eagles-story-planning/RetroDocs/). Update the ADO work item. Shut down agents.

Phase 5 — Release pipeline. Offer /convert-release if no cicd/ directory exists.

Phase 6 — Audit. Independent Process Auditor runs an 11-point checklist (parent linked, checks pass, comments resolved, retro exists, checklists attached, etc.). Re-runs until ALL PASS.

Phase 7 — Report. Final summary: PR URL, check status, AC results, retrospective path, auditor result.
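
The gate discipline across the phases above can be sketched as a hypothetical orchestrator shape (the run and gate callables stand in for the real phase logic):

```python
def run_pipeline(pbi_id, phases):
    """Run phases in order; each gate must pass before the next phase starts."""
    for name, run, gate in phases:
        run(pbi_id)
        if not gate(pbi_id):
            raise RuntimeError(f"gate failed after phase {name!r}; not advancing")
    return "delivered"
```

Hard gates between phases are what make the pipeline resumable and auditable: a failure stops the run at a named boundary instead of leaving a half-finished delivery.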

The agent team

  • Engineer — research, plan, implement on feature/{id}-{slug} (lowercase), /simplify, Sonar pre-emption (SQL and C#), telemetry gap scan.
  • Quality Engineer — tests matching repo patterns. For SQL: AMS harness conventions (spams_m_ErrorLog, dbo.TestIds, fn_AuctionGetDate). Reads every SP definition before calling it — never guesses parameter names.
  • Product Manager — validates each AC against implementation and tests. Names specific test cases or code locations as evidence.
  • Process Auditor — independent 11-point verification. No stake in the outcome.

Closing the loop

Every delivery writes a retrospective. /process-feedback analyzes retrospectives across deliveries, identifies recurring patterns, and proposes changes to rules, agents, and templates. Proposed changes get applied after manual acceptance. The planning system improves itself instead of accumulating tribal knowledge — friction that surfaces three times becomes a rule change, not a recurring conversation.
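
The three-occurrences rule is, at its core, a small aggregation. A sketch (the real /process-feedback skill does far richer analysis than exact string matching):

```python
from collections import Counter

RECURRENCE_THRESHOLD = 3  # friction seen three times becomes a proposed rule change

def propose_rule_changes(retrospectives):
    """Collect friction items across retros and surface the recurring ones."""
    counts = Counter(item for retro in retrospectives for item in retro["friction"])
    return sorted(item for item, n in counts.items() if n >= RECURRENCE_THRESHOLD)
```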

Why this works

The interrogatory loop is the multiplier. By the time a story reaches /deliver-pbi, every “OR” condition is resolved, every SP is named, every column type is specified, every test case has exact Given/When/Then data. The coding agents don’t need to ask questions because the spec already answered them. Planning is slow; delivery is fast. That’s the point.