Patrick Elmore
Planning-first · multi-repo .NET · sessions improve the system
Most of my Claude time goes into two things: improving the planning repo and harness itself (agents, skills, rules, custom MCPs, templates), and using that harness to produce planning artifacts (feature breakdowns, story specs, validation plans, process retrospectives). The artifacts are the output of the harness; the harness is what makes them cheap to produce. By the time implementation starts, the spec leaves nothing to decide during the build, so the code work is mostly mechanical. I’m optimizing for code that ships without surprises in a multi-repo .NET landscape where the same kind of bug used to show up every other sprint. The rest of the page is the setup.
Settings
The whole ~/.claude/settings.json is built around two things: high effort, low ceremony.
- High effort: `xhigh` with `alwaysThinkingEnabled: true`. Planning quality matters more than token economy on this work.
- No ceremony: `yolo-claude`, aliased in `~/.bash_aliases` as `claude --dangerously-skip-permissions`. Every session starts with it. (The permission allowlist still in `settings.json` is a relic from before I made the switch.)

  ```shell
  alias yolo-claude="claude --dangerously-skip-permissions"
  ```

- Custom spinner verbs. Sample: “Code quality doesn’t matter anymore, move past it.” “Turning a 3 point story into a 21.” “Increasing readability with GOTO statements.”
- Plugins. Two installed (`competency-journal@wholesale-claude-code-marketplace`, `essentials@carvana-claude-code-marketplace`). Most of my tooling lives in the planning repo as project-local skills, not plugins. Both the Wholesale and Carvana marketplaces are configured so I can pull from either when something useful ships.
MCPs
Three connected. Two I built for the team, one off-the-shelf. The custom ones live in the planning repo under .tools/, so the rest of the team picks them up by cloning.
- `db-sql-server` (HTTP at `localhost:9999`): read-only SQL against test/UAT instances of Nexus and AMS. Lets Claude verify schema before generating queries instead of guessing column names. README.
- `roslyn` (HTTP at `localhost:9997`): semantic C# code intelligence. Type structures, interface contracts, constructor dependencies, cross-project references. Critical when planning multi-repo changes against unfamiliar codebases. README.
- `fetch` (`mcp-server-fetch`): read web content into context.
I don’t run any of the shared org-wide MCPs; the three above are all I use.
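For reference, servers like these are registered in a project-scoped MCP config so they travel with the repo. A minimal sketch, assuming the two custom servers speak HTTP at the ports above — the `/mcp` endpoint path and the `uvx` launcher for `fetch` are illustrative assumptions, not details from this page:

```json
{
  "mcpServers": {
    "db-sql-server": { "type": "http", "url": "http://localhost:9999/mcp" },
    "roslyn": { "type": "http", "url": "http://localhost:9997/mcp" },
    "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] }
  }
}
```

Keeping a file like this in the planning repo is what lets the rest of the team pick the servers up by cloning.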
Hooks & automation
Two hooks and a custom status line.
- `PreToolUse`: `block-root-repo-search.sh`. Denies broad searches (`find`, `rg`, `fd`, recursive `grep`, Glob/Grep) targeting `/mnt/c/repos/`. With 40+ repos cloned under that path, a top-level search times out. The deny message points the agent at `planning/repos/repos.json` to identify the right repo and target it specifically.
- `Stop`: daily usage report. Copies `~/.claude/usage-data/report.html` into `/mnt/c/misc/insight-reports/report-{date}.html` if the report was updated in the last 5 minutes. Historical record for tracking spend trends.
- Status line. Dimmed `repo | branch | model | context warning` with pending-change counts.
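The core check of the deny hook can be sketched as a small function — a hypothetical reconstruction, not the actual script (the real one presumably also inspects which tool is being called). Claude Code passes the tool call as JSON on stdin to `PreToolUse` hooks, and a hook that exits with status 2 blocks the call and feeds its stderr back to the agent:

```shell
# Hypothetical reconstruction of block-root-repo-search.sh's core check.
block_root_repo_search() {
  local payload="$1"   # the tool-call JSON (the real hook reads it from stdin)
  # Deny only searches aimed at the repo root itself, not at a specific repo
  # beneath it: match /mnt/c/repos or /mnt/c/repos/ followed by a quote,
  # whitespace, or end of input.
  if printf '%s' "$payload" | grep -Eq '/mnt/c/repos/?(["[:space:]]|$)'; then
    echo 'Top-level searches of /mnt/c/repos/ time out.' \
         'Use planning/repos/repos.json to pick the right repo and target it.' >&2
    return 2
  fi
  return 0
}
```

In the real hook the body would read stdin (`payload=$(cat)`) and `exit` rather than `return`; the regex here is only the shape of the check, under the assumptions above.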
Workflow
Every session starts in the planning repo, not a code repo. Even when the eventual change is a one-line code edit, the cycle begins with planning artifacts.
Sessions split into two tracks: producing planning artifacts for an upcoming feature, and improving the harness that produces them. The phases below cover the artifact track. The harness track happens in parallel, fed by Phase 3 (close the loop) and a steady stream of direct edits to agents, skills, rules, and templates.
1. Plan the feature
- Gather reference material. Drop everything I have about the feature into a working directory inside the feature folder: the brief, the ADO feature record, the PRD, meeting notes, flow diagrams, transcripts. The planning repo’s discovery skills read from this directory, so the more context lives there, the better the output.
- Run `/sith:discover-feature` against the working directory. The skill builds a knowledge base, identifies open questions, and produces a “shape of work”: a high-level, conceptual list of the stories the feature will need.
- Iterate with Claude on gaps and decisions: clarifying ambiguity, resolving “OR” conditions, calling out missing requirements, choosing between approaches. Most of the value comes out of this back-and-forth. Don’t move past it until the open questions are closed.
- Verify the shape of work. Before fanning out into individual story discovery, sanity-check that the proposed stories cover the right scope at the right grain. If the shape is off, fix it here. Re-discovery later is expensive.
- Run `/sith:discover-story` for each story in the shape. This produces the full story spec: behavioral acceptance criteria, per-repo task breakdowns with file paths and exact specs, and a validation plan with goalposts.
- Verify each story for correctness. Format, internal consistency, accurate code references, complete task coverage.
- Review. Stories pass through the developer who owns planning for the feature, the team lead, and the product owner. Polish where needed.
- Rollout meeting. The story gets reviewed by the team. If it survives, it’s ready to be picked up.
2. Implement the story
The story gets pulled into the sprint and picked up by a developer, who runs `/sith:implement-team` to orchestrate per-repo implementation against the task specs. Validation results from the implementing agent get reviewed, with manual testing layered on if the validation plan calls for it. Before the commit lands, the session reflection file gets updated. Then it’s commit, push, open PR, merge.
3. Close the loop
Once the PR merges, the session reflection file lives in source control. `/sith:rpiv-roundup` scans recent merged PRs and pulls those reflection artifacts back into the planning repo. `/sith:process-feedback` then analyzes them against the current rules, agents, and templates, identifies recurring patterns, and proposes harness changes: rule updates, agent prompt revisions, template tweaks. Proposed changes get applied after manual acceptance. The system improves itself instead of accumulating tribal knowledge.
More detail at Sith planning.
Why this works for me
The reasoning behind the choices above:
- Code is the easy part. I spend disproportionate Claude time on planning because that’s where work compounds. A bad implementation of a good spec produces a fixable PR. A good implementation of a bad spec produces a feature nobody can describe.
- Tooling lives in the planning repo, not `~/.claude/`. Custom MCPs and skills in `~/.claude/` are personal property; nobody else benefits. The same things in the planning repo get cloned by the team and improve as the team uses them. The repo is the deliverable; the personal config is the byproduct.
- Reflections beat hand-curated rules. I used to update planning rules whenever I noticed something was wrong. Two sprints later, the pattern that produced the wrong output was gone from memory and the rule was orphaned. Friction that surfaces three times across sessions turns into a rule change, not a recurring reminder.
- Permission prompts are the wrong axis to spend attention on. If planning quality is what makes the rest cheap, paying for high-effort responses is worth it. If the same prompts fire 50 times per day, they stop being a safety mechanism and become noise. I’d rather review actual diffs than confirm each `Read`.