Wholesale AI Champions Claude Code playbook

Dan Robbins

Windows · C# / .NET · T-SQL · story planning · multi-repo orchestration

Most of my Claude time splits between two things: planning stories for the Eagles team and delivering PBIs across a 91-repo ecosystem that spans PowerBuilder, .NET APIs, Azure Functions, T-SQL databases, and Service Bus consumers. The planning side is where the leverage is — a thorough interrogatory loop before implementation means the coding agents downstream get self-contained task specs with zero ambiguity. By the time a story hits the sprint board, every “OR” condition is resolved, every stored procedure is named, and every column type is specified. The implementation is mostly mechanical after that.

Codebases

Where I spend my time, roughly in order:

  • C# / .NET — APIs (ASP.NET Core, MediatR, Dapper), Azure Functions (V4, net8.0), CSO consumers, and transmission processors. This is the bulk of the new work.
  • T-SQL / SQL Server — Stored procedures, schema migrations, and data validation across TranAMS, CorpRes, and AuctionAccess. I know the AMS SP naming conventions (spams_s_, spams_m_, spams_i_, spams_u_, spams_d_, spams_v_, spams_x_) cold.
  • PowerBuilder — Legacy AMS frontend (ASG-AMS-Suite, VLC-AMS-Suite). Still actively maintained, still in production. Modernization happens around it, not instead of it.
  • Azure DevOps Pipelines — YAML pipeline definitions for build, test, and deploy across the repo fleet.
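To illustrate the AMS stored procedure prefix convention mentioned above: assuming `spams_s_`, `spams_i_`, `spams_u_`, and `spams_d_` map to select, insert, update, and delete (a common CRUD mapping — the source lists the prefixes but not their meanings), a procedure might look like this. The procedure, table, and column names here are invented for illustration.

```sql
-- Hypothetical AMS-style procedure showing the spams_s_ (select) prefix.
-- Table and column names are illustrative, not actual TranAMS schema.
CREATE PROCEDURE dbo.spams_s_GetAuctionById
    @AuctionId INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT AuctionId, AuctionName, SaleDate
    FROM dbo.Auction
    WHERE AuctionId = @AuctionId;
END
```

The prefix encodes the operation, so a reviewer can tell what a procedure does from its name alone before reading the body.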

MCPs

The ones I keep connected:

  • SQL Server (carvana-sql) — Query test/UAT database instances directly from Claude. Lets me verify schema, check column types, and validate SP behavior without leaving the terminal. This is the one I use the most.
  • IDE (ide) — VS Code integration for diagnostics and code execution. Useful for running quick checks without switching windows.
  • Azure DevOps — Work item creation, linking, and management. The story planning workflow creates PBIs and tasks through this.
  • Context7 — Current library docs so Claude isn’t guessing based on stale training data.
  • Google Drive — Reading specs, meeting notes, and reference materials that live in shared drives.
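For context, Claude Code picks up MCP servers like these from a `.mcp.json` file (or via `claude mcp add`). A minimal sketch of what the SQL Server entry might look like — the package name and connection string below are assumptions, not the actual carvana-sql configuration:

```json
{
  "mcpServers": {
    "carvana-sql": {
      "command": "npx",
      "args": ["-y", "example-mssql-mcp-server"],
      "env": {
        "CONNECTION_STRING": "Server=uat-db;Database=TranAMS;Trusted_Connection=True;"
      }
    }
  }
}
```

Once registered, the server's tools show up in every session started from that project, which is what makes the "query the schema without leaving the terminal" workflow possible.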

Plugins

Two installed, both from the org marketplaces:

  • ws-atlas (Wholesale marketplace) — The Wholesale Atlas knowledge base. Maps the entire Eagles ecosystem: 91 repos across 13 categories, AMS stored procedure inventories, CSO message schemas, trigger chains, App Insights components. When I need to find which repo owns a feature or trace a dependency chain, this is faster than grepping across 56 local clones.
  • essentials (Carvana marketplace) — Development workflow tooling. Code review, implementation planning, debugging, PR descriptions. The /essentials-analyze and /essentials-create-plan skills get the most use.

Skills I use the most

Project-local (Eagles-story-planning repo)

  • /discover-feature — Kicks off a full feature discovery. Searches codebases, queries DB schemas, identifies patterns, and produces a shape-of-work before anyone writes a line of code.
  • /discover-story — Takes a single story from the shape-of-work and produces the complete spec: behavioral acceptance criteria, per-repo task breakdowns with file paths, and a validation plan.
  • /implement-team — Orchestrates multi-repo implementation against task specs. Coordinates branch creation, coding agents, and PR workflows across repositories.
  • /process-feedback — Pulls session reflections from merged PRs and analyzes them against current rules, proposing harness improvements.
  • /ado-sync — Syncs story planning artifacts with Azure DevOps work items.
  • /yaml-pipeline — Generates Azure DevOps YAML pipeline definitions.
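Project-local slash commands like these typically live as markdown files under `.claude/commands/` in the repo. A minimal sketch of what `/ado-sync` might look like — the body and frontmatter here are assumptions, not the actual command definition:

```markdown
---
description: Sync story planning artifacts with Azure DevOps work items
---
<!-- Hypothetical sketch: .claude/commands/ado-sync.md -->
Compare the story specs in $ARGUMENTS against their linked ADO work items.
For each mismatch, update the work item title, description, and task list
to match the spec, then report what changed.
```

Keeping the commands in the repo means the whole team gets the same planning harness on clone, with changes reviewed through normal PRs.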

Plugin-provided

  • /deliver-pbi — Full PBI delivery lifecycle with phased execution: discovery, agent spawning, PR creation, monitoring, simplification, documentation, and audit. Ties everything together for end-to-end feature delivery.
  • /ws-atlas — Query the Wholesale Atlas for repo ownership, dependency maps, SP inventories, and system documentation.
  • /chrysler-migration — Chrysler transmission migration automation.
  • /test-http-trigger — End-to-end HTTP trigger testing.
  • /run-ado-pipeline — Create, find, and monitor ADO pipeline runs.
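Both /yaml-pipeline and /run-ado-pipeline work against Azure DevOps YAML definitions; a minimal example of the shape involved, for a .NET repo like the ones above. The trigger branch, task versions, and steps are illustrative, not a real Eagles pipeline:

```yaml
# Minimal Azure DevOps build/test pipeline (illustrative, not a real repo's YAML).
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: UseDotNet@2
    inputs:
      packageType: sdk
      version: 8.x
  - script: dotnet build --configuration Release
    displayName: Build
  - script: dotnet test --configuration Release
    displayName: Test
```

Real pipelines across the fleet add deploy stages and service connections, but they all reduce to this trigger/pool/steps skeleton.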

Workflow

Sessions usually start from the story planning repo, even when the eventual change is a one-line fix somewhere else.

  1. Discover. Gather reference material (briefs, ADO features, PRDs, meeting notes) into a working directory. Run /discover-feature to build a knowledge base and produce a shape-of-work.
  2. Interrogate. Iterate with Claude on gaps, ambiguities, and open questions. Five stages: open discovery, context integration, scope/constraints, technical details, validation. Don’t move past this until every “OR” is resolved.
  3. Spec. Run /discover-story for each story. Produces behavioral acceptance criteria, per-repo task breakdowns with exact file paths and implementation specs, and a validation plan with goalposts.
  4. Deliver. Developer picks up the story, runs /deliver-pbi or /implement-team. Coding agents execute against the task specs. PRs get created, reviewed, and merged.
  5. Close the loop. /process-feedback scans merged PRs for session reflections and proposes harness improvements — rule updates, agent prompt revisions, template tweaks.

Settings

  • Model: claude-opus-4-6. Planning quality matters more than token economy.
  • Permissions: skipDangerousModePermissionPrompt: true. Same reasoning as Patrick — the permission prompts become noise when you’re running 50+ queries per session.
  • Marketplaces: Both the Wholesale and Carvana marketplaces are configured.
  • Platform: Windows 11, bash shell, VS Code.
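The model and permission settings above map onto Claude Code's `settings.json`; a sketch assuming the keys are spelled exactly as listed in this playbook:

```json
{
  "model": "claude-opus-4-6",
  "skipDangerousModePermissionPrompt": true
}
```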

Why this works for me

  • Planning is the multiplier. A detailed story spec eliminates back-and-forth during implementation. The coding agents don’t need to ask questions because the spec already answered them.
  • The SQL MCP is the secret weapon. Half of story planning is verifying what already exists in the database — column types, SP signatures, trigger chains. Being able to query the actual schema from inside the conversation instead of alt-tabbing to SSMS saves enormous context-switching cost.
  • 91 repos is manageable with the right index. The ws-atlas plugin means I don’t have to remember which repo owns what. I query the atlas, get the answer, and move on.
  • Close-the-loop feedback compounds. Every sprint, the harness gets slightly better at producing specs that don’t need revision. The investment in /process-feedback pays off across the whole team, not just my sessions.
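As a concrete example of the schema verification the SQL MCP makes cheap: a standard `INFORMATION_SCHEMA` query answers "what are the exact column types?" in one round trip. The table name here is hypothetical.

```sql
-- Verify column names, types, and nullability before writing a spec
-- against a table (table name is illustrative).
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo'
  AND TABLE_NAME = 'Auction'
ORDER BY ORDINAL_POSITION;
```

Running this mid-conversation means the spec can state "nvarchar(50), nullable" as fact rather than leaving it for the coding agent to discover.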