# Pipeline Patterns Reference

Detailed patterns for designing multi-agent pipelines in Claude Code.

---

## The 3-agent pattern

The foundational pattern for autonomous content and analysis workflows.

**Roles:**

- **Researcher** — gathers inputs, structures knowledge, produces a brief
- **Writer** — produces primary output from the brief
- **Reviewer** — evaluates output against criteria, approves or requests revision

**When to use:** Any workflow where you need sourced input, generated output, and a quality gate: content production, report generation, code review pipelines, competitive analysis.

**How to customize:**

| Domain | Researcher focus | Writer focus | Reviewer criteria |
|--------|------------------|--------------|-------------------|
| Content | Web sources, existing articles, reader questions | Article draft matching voice and format | Accuracy, engagement, brand voice |
| Engineering | Codebase patterns, issue context, API docs | Implementation or PR description | Correctness, style, test coverage |
| Consulting | Client data, market research, precedents | Recommendation or slide content | Evidence quality, actionability |
| Operations | Logs, metrics, incident history | Incident report or runbook update | Completeness, clarity, ownership |

**Scaling the pattern:**

- Add a **Finalizer** agent between Reviewer and output for polish steps (SEO, formatting, compliance checks); see the sketch after this list
- Add a **Distributor** agent after output for routing (email, Slack, WordPress, Linear)
- Replace the single Researcher with a **parallel research team** (two agents gathering different source types simultaneously)
- Add a **Memory Manager** agent that reads and writes state files, keeping other agents focused on their domain
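
Stacked up, the scaled pipeline looks like this (bracketed roles are the optional additions from the list above; a parallel research team would fan in on the left):

```
researcher ──→ writer ──→ reviewer ──→ [finalizer] ──→ [distributor]
  (brief)      (draft)  (pass/revise)   (polish)    (email, Slack, ...)
```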

---

## The 9-step pipeline template

This is the canonical sequence for a full pipeline skill. Adapt as needed — not all steps are required for every workflow.

```
Step 1: Read project context
- Read CLAUDE.md and any project-specific config
- Establish constraints before any agent runs

Step 2: Read memory / previous state
- Load memory/MEMORY.md or data/run-state.json
- Pass relevant state to downstream agents as context

Step 3: Agent 1 — Researcher
- Invoke with: topic, constraints, memory context
- Output: structured research brief (markdown or JSON)

Step 4: Agent 2 — Writer
- Invoke with: research brief, output format spec, voice guidelines
- Output: primary draft

Step 5: Agent 3 — Reviewer
- Invoke with: draft, scoring rubric, acceptance criteria
- Output: score + pass/fail + specific revision requests

Step 6: Revision loop (conditional)
- If reviewer score < threshold: invoke Writer again with feedback
- Max 2 revision passes before escalating to human
- If max passes exceeded: save draft with NEEDS_REVIEW flag

Step 7: Save outputs
- Write final output to designated location
- Publish if automated publishing is configured

Step 8: Update memory
- Append run summary to memory file
- Update counters, timestamps, last-processed markers

Step 9: Confirm and report
- Print summary of what was produced
- List any items that need human attention
```
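
Step 2's state file can stay as small as the facts the next run needs. A minimal sketch of what `data/run-state.json` might hold (the field names are illustrative, not a required schema):

```json
{
  "last_run": "2026-04-10",
  "last_output": "drafts/weekly-2026-04-10.md",
  "last_review_score": 82,
  "revision_count": 0,
  "pending_items": ["verify one unsourced claim flagged by the reviewer"]
}
```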

**Revision loop implementation note:** The loop should be explicit in the skill file. Do not rely on the agent to decide whether to loop — tell it exactly: "If the reviewer score is below 70, invoke the writer agent again with the reviewer's feedback. Do this at most twice."
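
The same contract written as straight-line control flow. This is a sketch of the loop the skill text encodes, not code the skill executes; `invoke`, `publish`, and `save_with_flag` are hypothetical stand-ins for the Agent tool and your save steps:

```python
def run_revision_loop(invoke, publish, save_with_flag, brief,
                      pass_threshold=70, max_revisions=2):
    """Bounded revise-and-re-review loop (Steps 4-7 of the template)."""
    draft = invoke("writer", brief=brief)                 # Step 4
    for attempt in range(max_revisions + 1):
        review = invoke("reviewer", draft=draft)          # Step 5
        if review.score >= pass_threshold:
            publish(draft)                                # Step 7
            return "PASS"
        if attempt == max_revisions:
            save_with_flag(draft, "NEEDS_REVIEW")         # escalate to a human
            return "NEEDS_REVIEW"
        # Step 6: one bounded revision pass, carrying the reviewer's feedback
        draft = invoke("writer", brief=brief, feedback=review.feedback)
```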

---

## Agent role templates

Copy these as starting points. Replace bracketed values.

### Researcher

````markdown
---
name: researcher
description: |
  Use this agent to gather and structure information before writing or analysis.

  <example>
  Context: Pipeline needs sourced input before writing
  user: "Research [topic] for this week's report"
  assistant: "I'll use the researcher agent to gather sources and produce a brief."
  <commentary>
  Research request before production triggers the researcher.
  </commentary>
  </example>
model: sonnet
tools: ["Read", "Glob", "Grep", "WebSearch", "Bash"]
---

## How you work

You produce research briefs, not finished content. Your output is always structured
for a downstream writer to consume.

1. Read any existing memory or prior research on this topic
2. Gather sources using available tools (web search, local files, MCP servers)
3. Extract the 5-7 most relevant facts, quotes, or data points
4. Note source reliability and any gaps in coverage
5. Produce a brief with sections: Background, Key Points, Sources, Gaps

## Rules

- Never fabricate sources or quotes
- Mark unverified claims explicitly
- Keep briefs under 800 words unless the topic demands more
- List every source URL or file path used

## Output format

```
## Research Brief: [Topic]
Date: [date]

### Background
[2-3 sentences of context]

### Key Points
- [point 1] (source: [url/file])
- [point 2] (source: [url/file])
...

### Sources
[list of all sources consulted]

### Gaps
[what could not be verified or found]
```
````

### Writer

````markdown
---
name: writer
description: |
  Use this agent to produce primary written output from a research brief or spec.

  <example>
  Context: Research brief is ready, article needs to be written
  user: "Write the article from this brief"
  assistant: "I'll use the writer agent to draft from the research brief."
  <commentary>
  Production request with existing brief triggers the writer.
  </commentary>
  </example>
model: opus
tools: ["Read", "Write", "Glob"]
---

## How you work

You produce first drafts from structured inputs. You do not research — you write.

1. Read the research brief and any style/voice guidelines
2. Read examples of approved past output for voice calibration
3. Draft the primary output following the specified format
4. Do not add information not present in the brief
5. Flag any gaps where the brief was insufficient

## Rules

- Follow the voice and format guidelines exactly
- Never add claims not supported by the brief
- Keep within specified word count ±10%
- End with a concrete takeaway or call to action

## Output format

[Specify the exact output format for your domain]
````

### Reviewer

````markdown
---
name: reviewer
description: |
  Use this agent to evaluate output quality and approve or request revisions.

  <example>
  Context: Draft is ready for quality check
  user: "Review this draft before publishing"
  assistant: "I'll use the reviewer agent to score and evaluate the draft."
  <commentary>
  Quality evaluation request triggers the reviewer.
  </commentary>
  </example>
model: opus
tools: ["Read"]
---

## How you work

You evaluate drafts against defined criteria and produce a scored assessment.

1. Read the draft and the original brief or requirements
2. Score against each dimension in the rubric (see Output format)
3. Note specific issues with line references where possible
4. Produce a pass/fail decision with justification

## Rules

- Score honestly — do not inflate to avoid revision cycles
- Be specific: "paragraph 3 is vague" not "needs more detail"
- Pass threshold is 70/100 overall with no dimension below 13 (the Poor band in the rubric)

## Output format

```
## Review: [Draft title]

### Scores
- Accuracy: [0-25] — [one sentence justification]
- Clarity: [0-25] — [one sentence justification]
- Completeness: [0-25] — [one sentence justification]
- Format/Voice: [0-25] — [one sentence justification]

### Overall: [total]/100

### Decision: PASS | REVISE | REJECT

### Revision requests (if REVISE)
1. [specific request]
2. [specific request]
```
````

---

## Quality gates: 4-level scoring rubric

Use this rubric in reviewer agents and pipeline acceptance criteria. Each dimension is scored 0-25, so the four scores sum to a 0-100 total.

| Dimension | 0-12 (Poor) | 13-18 (Acceptable) | 19-22 (Good) | 23-25 (Excellent) |
|-----------|-------------|--------------------|--------------|-------------------|
| **Accuracy** | Multiple errors or unsupported claims | Minor errors, mostly supported | All claims verifiable | Fully sourced, no errors |
| **Clarity** | Hard to follow, jargon-heavy | Mostly clear, some confusion | Clear throughout | Immediately clear, no ambiguity |
| **Completeness** | Major gaps, incomplete | Covers main points, some gaps | Thorough coverage | Nothing missing |
| **Format/Voice** | Wrong format or tone | Mostly correct, minor deviations | Correct format and tone | Perfect fit for context |

**Thresholds:**

- 90-100: Publish immediately
- 70-89: Publish with minor edits
- 50-69: Revise and re-review
- Below 50: Reject, start over or escalate to human
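
As control flow, the gate is a few lines. A minimal sketch assuming the 0-25 dimension scores above and the per-dimension floor from the reviewer template; `decide` is a hypothetical helper name:

```python
def decide(scores: dict[str, int]) -> str:
    """Map four 0-25 rubric scores to a gate decision per the thresholds above."""
    total = sum(scores.values())           # 0-100
    if total < 50:
        return "REJECT"                    # start over or escalate
    if total < 70 or any(s < 13 for s in scores.values()):
        return "REVISE"                    # revise and re-review
    return "PASS"                          # publish (with minor edits if total < 90)
```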

---

## Pipeline skill format

Pipeline skills live in `.claude/skills/<name>/SKILL.md`. They are invoked as `/plugin:skill-name` or triggered by the agent system automatically.

```markdown
---
name: weekly-report
description: |
  Run the weekly report pipeline. Triggers on: "run weekly report",
  "generate this week's report", "weekly pipeline"
version: 0.1.0
---

## Weekly Report Pipeline

Run these steps in order. Do not skip steps. If a step fails, stop and report the error.

### Step 1: Load context
Read `CLAUDE.md` and `memory/MEMORY.md`. Note the last run date and any pending items.

### Step 2: Research
Use the Agent tool to invoke the `researcher` agent with this prompt:
"Research [topic] for the period [date range]. Focus on [specific angle]."
Save the research brief to `data/research-brief-[date].md`.

### Step 3: Write
Use the Agent tool to invoke the `writer` agent with this prompt:
"Write the weekly report from [path to brief]. Follow the format in [style guide path]."
Save the draft to `drafts/weekly-[date].md`.

### Step 4: Review
Use the Agent tool to invoke the `reviewer` agent with this prompt:
"Review the draft at [path]. Use the standard 4-dimension rubric."

### Step 5: Handle review result
- If score >= 70: proceed to Step 6
- If score < 70 and revision count < 2: invoke writer again with reviewer feedback, then re-review
- If score < 70 after 2 revisions: save draft with NEEDS_REVIEW flag, skip to Step 7

### Step 6: Finalize
[Publishing or distribution steps]

### Step 7: Update memory
Append to `memory/MEMORY.md`:
- Date of run
- Output file path
- Review score
- Any items needing human attention
```
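
Step 7's append can stay terse. One possible shape for a `memory/MEMORY.md` entry (the exact fields are yours to choose):

```markdown
## Run: 2026-04-10
- Output: drafts/weekly-2026-04-10.md
- Review score: 82/100 (PASS, 0 revisions)
- Needs attention: none
```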

---

## Agent frontmatter: all valid fields

```yaml
name: <string>          # required — slug, used for routing and invocation
description: |          # required — trigger text + examples
  <string>
model: sonnet|opus      # required — model for this agent's runs
tools: [<string>, ...]  # required — explicit tool allowlist
color: <string>         # optional — UI color hint (green, blue, red, yellow, purple)
```
Tools available for agents: `Read`, `Write`, `Edit`, `Glob`, `Grep`, `Bash`, `WebSearch`, `WebFetch`, `Agent`, `AskUserQuestion`, and any MCP tool by its full name (e.g., `mcp__tavily__tavily_search`).
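
Putting the fields together, a hypothetical minimal agent (the name and tool mix are examples, not requirements):

```yaml
name: competitor-watch
description: |
  Use this agent to scan competitor announcements before the weekly report runs.
model: sonnet
tools: ["Read", "WebSearch", "mcp__tavily__tavily_search"]
color: blue
```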

---

## How agents communicate

**Agent tool (sequential):** The orchestrating skill or parent agent uses the `Agent` tool to invoke a subagent. The subagent runs to completion and returns its output. This is the standard pattern for pipeline steps.

```
Agent tool call:
  agent: researcher
  prompt: "Research X and produce a brief in the format..."
→ researcher runs, returns brief text
→ parent continues with Step 2
```

**SendMessage (async / worktree):** For parallel execution, agents can be spawned in separate worktrees. Each worktree runs independently; results are assembled by the orchestrator after all complete. Use this when steps have no dependencies on each other (e.g., researching two topics simultaneously).

**Worktree isolation:** When an agent runs in a worktree, it has its own working copy of the repository. It cannot see changes made by other agents running simultaneously. Use a shared output directory (outside the worktrees) or a coordination file to merge results.
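
One possible layout, with the shared directory outside the worktrees (directory names are illustrative):

```
project/
├── repo/                  ← orchestrator's main checkout
│   └── worktrees/
│       ├── research-a/    ← agent A, isolated working copy
│       └── research-b/    ← agent B, isolated working copy
└── shared-output/         ← both agents write results here for the orchestrator to merge
```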

**File-based handoff (simple and reliable):** The most robust communication pattern is file-based. Each agent writes its output to a designated path; the next agent reads from that path. This works in any execution mode and produces an audit trail of intermediate outputs.

```
researcher → data/brief-2026-04-10.md
writer     → reads data/brief-2026-04-10.md    → drafts/article-2026-04-10.md
reviewer   → reads drafts/article-2026-04-10.md → data/review-2026-04-10.md
```

For most personal and small-team pipelines, sequential execution with file-based handoff is the right choice. It is simpler to debug, easier to resume after failure, and produces a clear audit trail.
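
Resuming after failure falls out of the handoff files themselves. A minimal sketch, assuming the paths above (the stage list is illustrative):

```python
from pathlib import Path

DATE = "2026-04-10"

# Ordered stages and the handoff file each one is responsible for producing.
STAGES = [
    ("researcher", Path(f"data/brief-{DATE}.md")),
    ("writer",     Path(f"drafts/article-{DATE}.md")),
    ("reviewer",   Path(f"data/review-{DATE}.md")),
]

# The first stage whose output file is missing is where the pipeline resumes.
missing = [agent for agent, output in STAGES if not output.exists()]
if missing:
    print(f"resume from: {missing[0]}")
else:
    print("all handoff files present; nothing to redo")
```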