# Example 06: Multi-Agent Orchestration

**Capability:** Claude Code can spawn sub-agents with distinct roles, run them in
parallel or in sequence, and combine their outputs into a final result.

**OpenClaw equivalent:** Sub-agents, agent-to-agent messaging, mesh workflows.

> **Building on Examples 01-05.** You have raw research (01), organized structure (02), verified data (03), and persistent state (05). Now three specialized agents turn that raw material into a polished, reviewed document. This is where the pipeline produces something you can actually share.

---

## Prerequisites

This example uses the agents defined in `.claude/agents/`:

- `researcher.md` - web research and source gathering
- `writer.md` - structured content drafting
- `reviewer.md` - accuracy and quality review

These agents load automatically when Claude Code opens this project.
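
Each agent file is plain Markdown with YAML frontmatter naming the agent, plus a body that serves as its system prompt. A minimal sketch of what a file like `researcher.md` can look like (the field values here are illustrative, not copied from this repo's actual agents):

```
---
name: researcher
description: Web research specialist. Gathers sources and verifiable facts on a topic.
tools: WebSearch, WebFetch, Read
---

You are a research specialist. Find primary sources for the given topic,
extract verifiable claims, and return a bulleted list of findings with
one source URL per claim.
```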

---

## The Prompt

```
Use the researcher agent to find information about how Claude Code handles
agent isolation and worktree sandboxing (introduced in v2.1.49 and v2.1.50).

Then use the writer agent to draft a 300-word technical summary of the findings,
written for a developer audience. No jargon without explanation.

Finally, use the reviewer agent to check the draft for technical accuracy.
If the reviewer finds any issues, have the writer fix them before showing
me the final version.
```

---

## What Happens

Claude Code will:

1. Invoke the researcher agent via the Agent tool with the research task
2. Receive the research output and pass it to the writer agent
3. Invoke the writer agent to produce the 300-word draft
4. Invoke the reviewer agent with both the draft and source material
5. If the reviewer flags issues, loop back to the writer for a revision
6. Present the final reviewed draft
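
Under the hood this is a sequential handoff with a bounded revision loop. Here is a minimal sketch of that control flow in Python, where `run_agent` is a hypothetical stand-in for dispatching a task to a named sub-agent (Claude Code performs this dispatch itself via the Agent tool; this illustrates the pattern, not its internal API):

```
# Sequential handoff with a bounded revision loop (illustrative only).

def run_agent(role: str, task: str) -> str:
    """Hypothetical dispatcher: send `task` to the agent named `role`."""
    raise NotImplementedError  # Claude Code performs this step internally

def research_write_review(topic: str, max_revisions: int = 2) -> str:
    research = run_agent("researcher", f"Find information about {topic}.")
    draft = run_agent("writer", f"Draft a 300-word summary of:\n{research}")
    for _ in range(max_revisions):
        # The reviewer sees both the draft and the source material (step 4 above)
        review = run_agent("reviewer", f"Check this draft:\n{draft}\n\nSources:\n{research}")
        if "no issues" in review.lower():  # assumed approval signal
            break
        draft = run_agent("writer", f"Fix these issues:\n{review}\n\nDraft:\n{draft}")
    return draft
```

The cap on revisions matters: without `max_revisions`, a picky reviewer and a stubborn writer could bounce the draft back and forth indefinitely.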

---

## Why This Matters

Agent Teams (v2.1.32) gives Claude Code a mesh model that matches OpenClaw's
sub-agent architecture. Worktree isolation (v2.1.50) means each agent gets
its own working directory, preventing file conflicts in parallel runs.

The researcher-writer-reviewer pattern is the same loop that produces articles
for `fromaitochitta.com`. The agents here are minimal versions of that pipeline.

---

## Carry Forward

You now have a multi-agent review loop. This is the quality engine:

- **Example 07** delivers the reviewed output to your phone
- **Example 10** runs this exact agent sequence as steps 2-4 of the full pipeline
- **Example 14** shows you how to customize these agents for your own work

The researcher-writer-reviewer pattern is the single most reusable workflow in this repo. Any task that involves gathering information, drafting content, and checking quality follows this shape.

---

## The Cumulative Path

> If you ran Examples 01-05, you have a research report with verified data
> and persistent state. This prompt runs the full agent review cycle on it.

```
Read all files in pipeline-output/research-report/.

Use the researcher agent to verify any claims that have not been
web-verified yet and fill any gaps in the data.

Then use the writer agent to produce a polished 400-word summary
suitable for sharing with a colleague. Clear language, no jargon,
structured with headings.

Finally, use the reviewer agent to check for accuracy, clarity,
and completeness. If the reviewer finds issues, have the writer
fix them before showing me the final version.

Save the final version to pipeline-output/research-report/final-summary.md
```

After running this, your research has been through a professional review cycle. The `final-summary.md` is something you could email to your team.

---

## Now Try It Yourself

Replace the demo topic with something you need researched and reviewed:

```
Use the researcher agent to find information about [your topic].
Then use the writer agent to draft a [length]-word [format] for
[your audience]. Use the reviewer agent to check [what matters
most: accuracy, tone, completeness]. Loop until the reviewer
approves.
```

**The pattern you just learned:** specialized agents + sequential handoff + revision loop. Break any complex task into roles (research, draft, review) and let agents handle each part.

Ideas worth trying (one is filled in below):
- Research a vendor and produce a one-page recommendation memo
- Gather competitive intelligence and write an executive briefing
- Compile technical documentation and review it for accuracy
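
For instance, the first idea filled into the template (bracketed values are placeholders for your own specifics):

```
Use the researcher agent to find information about [vendor]'s
pricing, support terms, and security posture. Then use the writer
agent to draft a one-page recommendation memo for [your manager].
Use the reviewer agent to check accuracy and completeness. Loop
until the reviewer approves.
```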
|