
Built-in agents: general, Explore, Plan

Autonomous specialists Claude can delegate to. How agents are defined, how they differ from skills, and when to reach for one.


Agents are autonomous specialists that Claude can delegate to. While skills define what to do (procedural steps), agents define who does it — expert personas that handle tasks independently and return results.


How agents work

When Claude encounters a task that matches an agent’s description, it spawns that agent as a subprocess. The agent gets its own context window, operates autonomously using the tools it’s been granted, and returns a result when it’s done. The parent conversation continues once the agent reports back.

Agents are defined as markdown files in a plugin’s agents/ directory. Claude discovers them at runtime — any .md file in that directory becomes an available agent.
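
A minimal layout sketch, assuming a hypothetical plugin named my-plugin (the agent file names are illustrative); the fixed rule from above is simply that every .md file under agents/ becomes an available agent:

```
my-plugin/
└── agents/
    ├── code-reviewer.md        # discovered at runtime as the code-reviewer agent
    ├── efficiency-auditor.md   # each .md file here becomes one available agent
    └── coverage-analyst.md
```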


Agents vs skills

They look similar (both are markdown files), but they serve fundamentally different purposes:

| Dimension | Skill | Agent |
|---|---|---|
| Purpose | Procedural workflow — steps Claude follows | Expert persona — autonomous delegate |
| Writing style | Imperative: “Do X, then Y” | Second person: “You are a security analyst…” |
| Triggering | User invokes with /name or keyword match | Claude spawns via the Agent tool when needed |
| Autonomy | Claude executes the steps directly | Agent operates independently, returns results |
| Scope | Can orchestrate multiple agents | Focused on one domain or task |
| Resources | Can bundle references/, scripts/, assets/ | Self-contained in system prompt |

Rule of thumb: If you’re defining what to do, write a skill. If you’re defining who does it, write an agent.

A skill might say “run a code review by spawning these reviewer agents, then synthesize their findings.” Each reviewer agent knows how to review code from its specific perspective (security, performance, clarity) but doesn’t know about the broader workflow.
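
A rough sketch of that pairing, with hypothetical file names and agent handles:

```markdown
<!-- A skill (imperative): defines WHAT to do -->
1. Spawn @security-reviewer, @efficiency-auditor, and @clarity-checker on the current diff
2. Collect each agent's report
3. Synthesize the findings into a single prioritized summary for the user

<!-- An agent such as clarity-checker (second person): defines WHO does it -->
You are a readability reviewer. You assess naming, structure, and comments,
and return findings with file:line references. You know nothing about the
broader review workflow that invoked you.
```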


Writing your first agent

An agent is a markdown file in your plugin’s agents/ directory. Here’s the anatomy:

Frontmatter

---
name: code-reviewer
description: >
  Use this agent to review code for quality and security issues.
  <example>
  Context: User just finished implementing a feature
  user: "review this code"
  assistant: "I'll use the code-reviewer agent to analyze the implementation."
  <commentary>Code was written, trigger review.</commentary>
  </example>
model: inherit
color: blue
tools: ["Read", "Grep", "Glob"]
---
| Field | What it does |
|---|---|
| `name` | Identifier (lowercase, hyphens, 3-50 chars) |
| `description` | Triggering conditions with `<example>` blocks showing when to invoke |
| `model` | Which model to use (inherit, sonnet, opus, haiku) |
| `color` | Visual identifier in logs (blue=analysis, green=generation, red=security) |
| `tools` | Restrict which tools the agent can use (principle of least privilege) |

The <example> blocks in the description are critical. They teach Claude when to invoke the agent. Include 2-4 examples showing different triggering scenarios — both explicit requests (“review this code”) and contextual triggers (code was just written).
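
For instance, a contextual trigger might be expressed like this (illustrative, following the same `<example>` structure shown in the frontmatter above):

```markdown
<example>
Context: Claude has just finished implementing a new endpoint
user: "great, what's next?"
assistant: "Before moving on, I'll use the code-reviewer agent to check the new code."
<commentary>Code was just written; trigger a review even without an explicit request.</commentary>
</example>
```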

System prompt

After the frontmatter, write the agent’s instructions in second person:

You are a code review specialist focused on quality and security.
**Your Core Responsibilities:**
1. Analyze code changes for bugs, security issues, and anti-patterns
2. Check adherence to project conventions (reference CLAUDE.md)
3. Provide actionable feedback with file:line references
**Review Process:**
1. Read the diff to understand the scope of changes
2. Check each file for issues, categorized by severity
3. Verify no security vulnerabilities (OWASP top 10)
4. Confirm tests cover the new behavior
**Output Format:**
- Summary (1-2 sentences)
- Critical issues (must fix before merge)
- Important issues (should fix)
- Minor issues (nice to have)
- Positive observations (what was done well)

A good system prompt follows a consistent structure: who you are, what you’re responsible for, how you do the work, and what you return. Be specific — “check for SQL injection by examining queries for parameterization” is better than “check for issues.”


System prompt patterns

Different agent types follow different structures:

| Pattern | When to use | Structure |
|---|---|---|
| Analysis | Code review, security audit, documentation review | Gather context → scan → deep analysis → synthesize → prioritize → report |
| Generation | Code writing, test creation, doc authoring | Understand requirements → gather patterns → design → generate → validate |
| Validation | Preflight checks, acceptance criteria, compliance | Load criteria → scan target → check rules → collect violations → verdict |
| Orchestration | Multi-step workflows, coordination | Plan → prepare → execute phases → monitor → verify → report |
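
As a sketch, a Validation-pattern system prompt might be skeletoned like this (the wording is illustrative; the flow mirrors the table row above):

```markdown
You are a preflight compliance checker. You never modify files; you only report.

**Process:**
1. Load the acceptance criteria (spec, checklist, or project conventions)
2. Scan the target files in scope
3. Check each rule and collect violations with file:line references

**Output Format:**
- Verdict: pass or fail
- Violations: rule, location, evidence
- Criteria that could not be checked automatically
```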

Agent scope and handoffs

Well-designed agents declare what they own and what they defer to others. This prevents agents from stepping on each other and enables intelligent routing.

## Scope
**Owns:**
- Security vulnerability analysis
- OWASP top 10 compliance checks
- Credential and secret detection
**Defers to:**
- Performance optimization → @efficiency-auditor
- Code style and readability → @clarity-checker
- Test coverage assessment → @coverage-analyst
## Handoff Triggers
- "If the issue is a performance concern" → @efficiency-auditor
- "If the finding requires architectural changes" → orchestrator
- "If the fix requires user input on trade-offs" → @human-liaison

This scope declaration serves two purposes: it keeps the agent focused on its domain, and it gives orchestrators enough information to route work correctly.


Examples from the marketplaces

Carvana marketplace — essentials plugin (14 agents)

| Agent | Pattern | Purpose |
|---|---|---|
| @code-reviewer | Analysis | Quality, security, maintainability review |
| @debugger | Analysis | Root cause analysis using 5 Whys and Fishbone diagrams |
| @backend-architect | Generation | API design, microservices, database schemas |
| @codebase-locator | Analysis | Finds WHERE files and components live (never critiques) |
| @codebase-analyzer | Analysis | Explains HOW code works (data flow, patterns) |
| @codebase-pattern-finder | Analysis | Finds similar implementations and usage examples |
| 8 design analyzers | Analysis | Evaluate designs from different perspectives (simplicity, testability, robustness, etc.) |

Wholesale marketplace — adesa-workflow plugin (7 agents)

| Agent | Pattern | Purpose |
|---|---|---|
| aw-discovery | Analysis | Per-repo code research (patterns, contracts, schemas) |
| aw-executor | Generation | Implements spec in a repo, verifies, commits atomically |
| aw-preflight-checker | Validation | Checks branch state, DI wiring, contracts, config |
| aw-verifier | Validation | Post-implementation acceptance criteria verification |
| aw-reviewer | Analysis | Multi-perspective code review (8-9 specialized reviewers) |
| aw-spec-reviewer | Validation | Compares implementation against spec (blocking gate) |
| aw-net10-migrator | Generation | .NET 10 migration with self-updating breaking changes KB |

Notice the pattern: analysis agents are read-only (they observe and report), generation agents can write code, and validation agents produce pass/fail verdicts. Match the tool access to the pattern — analysis agents don’t need Write or Edit.
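
In frontmatter terms, that might look like the following (a sketch; the tool names are limited to the ones referenced in this guide, and what is actually available depends on your setup):

```yaml
# Analysis agent: read-only, it observes and reports
tools: ["Read", "Grep", "Glob"]

# Generation agent: also needs to create and modify files
tools: ["Read", "Grep", "Glob", "Write", "Edit"]
```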


Common mistakes

| Mistake | Why it’s a problem | Fix |
|---|---|---|
| No `<example>` blocks in description | Claude doesn’t know when to invoke the agent | Add 2-4 examples showing triggering scenarios |
| Vague system prompt (“check for issues”) | Agent doesn’t know what to look for | Be specific (“check for SQL injection by examining query parameterization”) |
| No output format defined | Agent returns unstructured text, hard to synthesize | Define exact sections and format |
| Too many responsibilities | Agent tries to do everything, does nothing well | Split into focused specialists |
| Agent marks its own work complete | No independent verification | Have the orchestrating skill verify results |
| Granting all tools | Agent can make changes it shouldn’t | Restrict to the minimum needed tools |

Refining agents

The refine-prompt plugin in the Carvana marketplace can audit your agent definitions and suggest improvements:

/refine-prompt --agents
Scan agents/*.md
Score each agent (0-100):
✓ Clear triggering conditions
✓ Structured system prompt
⚠ Missing edge case handling
⚠ No output format defined
Suggest improvements → apply with approval

You can also iterate manually — agents are just markdown files, so ask Claude to refine them the same way you’d refine any prompt.

Want to skip writing agents by hand? The agent-generator plugin can analyze your project’s tech stack and generate a coordinated multi-agent system automatically. Run /init-agents to get started, then use /evolve-agents to refine them as your codebase changes.


Next steps

Once you’re comfortable writing individual agents, the next step is composing them into workflows. See LLM Orchestration for patterns on coordinating multiple agents — sequential execution, parallel swarms, multi-perspective analysis, and more.