agent-builder/.claude/plans/blueprints/session-2-skills-templates.md
Kjell Tore Guttormsen 1a776bdeb2 docs(plans): create session blueprints for Agent Factory execution
8 session blueprints covering all 27 steps across 3 waves:
- Session 1: Foundation (rename + commands, Steps 1-5)
- Session 2: Skills and templates (Steps 6-7)
- Session 3: OpenClaw patterns (memory/heartbeat/proactive/cron, Steps 9-12)
- Session 4: Paperclip patterns (context/goals/budget/governance/org-chart, Steps 14-18)
- Session 5: Self-learning (feedback/optimization, Steps 20-21)
- Session 6: Integration (Docker/transfer/5 more domains, Steps 22-24)
- Session 7: Skill updates (memory/autonomy/orchestration/governance/MCP refs, Steps 13,19,25)
- Session 8: Finalization (build command integration + v1.0, Steps 8,26,27)

Also updates plan assumptions table with verified findings.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-11 11:21:17 +02:00


# Session 2: Skills and Initial Domain Templates

**Steps 6, 7 | Wave 1 | Depends on: none**

## Dependencies

Entry condition: none (independent of Session 1)

## Scope Fence

Touch:

- `skills/managed-agents/SKILL.md` (new)
- `skills/managed-agents/references/api-patterns.md` (new)
- `scripts/templates/domains/content-pipeline.md` (new)
- `scripts/templates/domains/code-review.md` (new)
- `scripts/templates/domains/monitoring.md` (new)
- `scripts/templates/domains/research-synthesis.md` (new)
- `scripts/templates/domains/data-processing.md` (new)
- `scripts/templates/domains/README.md` (new)

Never touch:

- `commands/`
- `agents/`
- `.claude-plugin/`
- `CLAUDE.md`
- `README.md`
- `scripts/templates/memory/`
- `scripts/templates/heartbeat/`
- Any existing file in `skills/agent-system-design/`

## Step 6: Create managed-agents skill

### Files to create

**`skills/managed-agents/SKILL.md`**:

---
name: managed-agents
description: |
  This skill should be used when the user asks about "managed agents",
  "Anthropic API agents", "cloud-hosted agents", "agent SDK",
  "deploying agents to the cloud", "serverless agents",
  "API-based agent deployment", "/v1/agents endpoint",
  "remote agent hosting", "agent as a service"
version: 0.1.0
---

## What are managed agents

Managed agents are Anthropic-hosted agent runtimes accessed via the Agent SDK
(`@anthropic-ai/sdk` for TypeScript, `anthropic` for Python). Instead of running
Claude Code locally, the agent runs on Anthropic's infrastructure with persistent
sessions, tool access, and automatic scaling.

Key difference from local agents: managed agents don't have local filesystem
access by default. They work through tools you define in code, not through
Claude Code's built-in Read/Write/Bash tools.

## When to use managed agents vs local deployment

| Dimension | Managed Agents (API) | Local (Claude Code CLI) |
|-----------|---------------------|------------------------|
| Infrastructure | Anthropic-hosted | Your machine/server |
| Filesystem | Via tools you define | Full local access |
| MCP servers | Not available | Full MCP support |
| Scaling | Automatic | Manual |
| Cost model | Per-token API billing | Subscription or API key |
| Best for | SaaS products, API integrations | Personal pipelines, file-heavy work |
| Session persistence | Via API sessions | Via `--resume` / `--name` |

**Decision rule:** If your agents need local filesystem access, MCP servers, or
run as part of a personal workflow → use local deployment (cron/launchd/systemd/Docker).
If your agents are part of a product, need to scale, or don't need local files →
use managed agents.

**Important limitation:** Managed agents cannot use MCP servers. If your agent
system relies on MCP servers for Slack, GitHub, databases, or other integrations,
use local deployment with Docker for isolation instead.

## SDK patterns

For concrete code patterns, see:
`${CLAUDE_PLUGIN_ROOT}/skills/managed-agents/references/api-patterns.md`

## Session management

Managed agents support persistent sessions via the API:

```typescript
// Create a new session
const session = await client.agents.sessions.create({
  agent_id: "ag_...",
  system_prompt: "You are a research agent..."
});

// Resume an existing session
const response = await client.agents.sessions.messages.create({
  agent_id: "ag_...",
  session_id: session.id,
  messages: [{ role: "user", content: "Continue the analysis" }]
});

```

Sessions maintain conversation history and tool state across multiple interactions, similar to `claude --resume` for local agents.

## Budget and cost considerations

Managed agents bill per token at standard API rates. For cost control:

1. Set `max_tokens` on each request to cap output length
2. Use prompt caching — cached input tokens cost 90% less
3. Batch non-urgent work — the Batch API gives a 50% discount
4. Monitor with the Admin API — if you have org access, use `/v1/organizations/usage_report/messages` with an Admin API key (`sk-ant-admin...`) for detailed cost breakdowns
5. Use the `--max-budget-usd` flag for local headless runs as a budget cap

Note: The Usage & Cost API requires an Admin API key and an organization account. Individual accounts should estimate costs from token counts.
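That estimate is simple arithmetic; here is a sketch (the per-million-token rates are placeholders for illustration, not actual pricing):

```typescript
// Rough cost estimate from token counts. Rates are illustrative
// placeholders; check current pricing before relying on them.
interface Rates {
  inputPerMTok: number;  // USD per million input tokens
  outputPerMTok: number; // USD per million output tokens
}

function estimateCostUSD(inputTokens: number, outputTokens: number, rates: Rates): number {
  return (inputTokens / 1_000_000) * rates.inputPerMTok +
         (outputTokens / 1_000_000) * rates.outputPerMTok;
}

// Example: 200k input + 50k output at a hypothetical $3/$15 per MTok
const cost = estimateCostUSD(200_000, 50_000, { inputPerMTok: 3, outputPerMTok: 15 });
console.log(cost.toFixed(2)); // "1.35"
```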

## Migration path: local → managed

1. Extract your agent's system prompt from `.claude/agents/[name].md`
2. Convert tool access to SDK tool definitions
3. Replace file-based memory with session persistence or external storage
4. Replace MCP server integrations with direct API calls in tool handlers
5. Test with the SDK before removing local deployment

This is a significant architectural change. Only migrate if you need API-based access or auto-scaling. Local deployment is simpler and cheaper for personal use.
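Step 1 of the migration can be sketched as a small helper (hypothetical; it assumes the standard `---`-delimited frontmatter layout of Claude Code agent files):

```typescript
// Split a Claude Code agent file into YAML frontmatter and system prompt.
// Sketch only: assumes the standard "---"-delimited frontmatter layout.
function splitAgentFile(contents: string): { frontmatter: string; systemPrompt: string } {
  const match = contents.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!match) throw new Error("no frontmatter block found");
  return { frontmatter: match[1], systemPrompt: match[2].trim() };
}

const file = "---\nname: content-writer\nmodel: opus\n---\n\nYou are the content writer.\n";
const { systemPrompt } = splitAgentFile(file);
console.log(systemPrompt); // "You are the content writer."
```

The extracted prompt becomes the `system` parameter in the SDK call; the frontmatter's tool list guides which SDK tool definitions you need to write.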

## Getting started

For a guided setup: run `/agent-factory:build` and choose "Managed Agents" as the deployment target in Phase 6.

For manual setup: see the API patterns reference at `${CLAUDE_PLUGIN_ROOT}/skills/managed-agents/references/api-patterns.md`.


**`skills/managed-agents/references/api-patterns.md`**:

# Managed Agents API Patterns

Code patterns for creating and managing agents via the Anthropic SDK.
All examples use `@anthropic-ai/sdk` (TypeScript) with Python equivalents noted.

---

## Basic agent creation

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

// Create an agent with tools
const response = await client.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 4096,
  system: "You are a research agent. Produce structured briefs.",
  tools: [
    {
      name: "web_search",
      description: "Search the web for information",
      input_schema: {
        type: "object",
        properties: {
          query: { type: "string", description: "Search query" }
        },
        required: ["query"]
      }
    }
  ],
  messages: [
    { role: "user", content: "Research the latest Claude Code features" }
  ]
});

```
## Agent with persistent sessions
```typescript

// Create a session-based agent
const session = await client.agents.sessions.create({
  agent_id: "ag_your_agent_id",
  system_prompt: `You are a data analyst. You have access to the
    company database via the query tool. Always verify your findings
    before reporting.`,
  tools: [/* your tool definitions */]
});

// First interaction
const result1 = await client.agents.sessions.messages.create({
  agent_id: "ag_your_agent_id",
  session_id: session.id,
  messages: [{ role: "user", content: "Analyze Q1 revenue trends" }]
});

// Continue the conversation (session remembers context)
const result2 = await client.agents.sessions.messages.create({
  agent_id: "ag_your_agent_id",
  session_id: session.id,
  messages: [{ role: "user", content: "Now compare with Q4 of last year" }]
});

```
## Tool handling pattern
```typescript

async function runAgentLoop(
  client: Anthropic,
  messages: Anthropic.MessageParam[],
  tools: Anthropic.Tool[]
) {
  let response = await client.messages.create({
    model: "claude-sonnet-4-6",
    max_tokens: 4096,
    tools,
    messages
  });

  while (response.stop_reason === "tool_use") {
    const toolUseBlocks = response.content.filter(
      (b): b is Anthropic.ToolUseBlock => b.type === "tool_use"
    );

    const toolResults: Anthropic.ToolResultBlockParam[] = [];
    for (const toolUse of toolUseBlocks) {
      // executeToolCall is your own dispatcher: map tool name to a handler
      const result = await executeToolCall(toolUse.name, toolUse.input);
      toolResults.push({
        type: "tool_result",
        tool_use_id: toolUse.id,
        content: JSON.stringify(result)
      });
    }

    messages.push({ role: "assistant", content: response.content });
    messages.push({ role: "user", content: toolResults });

    response = await client.messages.create({
      model: "claude-sonnet-4-6",
      max_tokens: 4096,
      tools,
      messages
    });
  }

  return response;
}

```
## Cost optimization with prompt caching
```typescript

const response = await client.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 1024,
  system: [
    {
      type: "text",
      text: longSystemPrompt, // cached — 90% cheaper on reuse
      cache_control: { type: "ephemeral" }
    }
  ],
  messages: [{ role: "user", content: userQuery }]
});

```
## Error handling
```typescript

// Retry rate-limited requests with exponential backoff; fail fast otherwise.
// (The SDK also has built-in retries; this shows the manual pattern.)
async function createWithRetry(
  client: Anthropic,
  params: Anthropic.MessageCreateParamsNonStreaming,
  maxRetries = 3
): Promise<Anthropic.Message> {
  let retryDelay = 1000; // ms
  for (let attempt = 0; ; attempt++) {
    try {
      return await client.messages.create(params);
    } catch (error) {
      if (error instanceof Anthropic.RateLimitError && attempt < maxRetries) {
        await new Promise((resolve) => setTimeout(resolve, retryDelay));
        retryDelay *= 2;
      } else if (error instanceof Anthropic.APIError) {
        console.error(`API error ${error.status}: ${error.message}`);
        throw error; // don't retry on non-rate-limit API errors
      } else {
        throw error;
      }
    }
  }
}

```
## Python equivalent
```python

import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=4096,
    system="You are a research agent.",
    messages=[{"role": "user", "content": "Research topic X"}]
)

```
## Deployment pattern: scheduled managed agent
```typescript

// Run as a scheduled job (e.g., via cron or cloud scheduler)
async function dailyReport() {
  const client = new Anthropic();

  const response = await runAgentLoop(
    client,
    [{ role: "user", content: "Generate the daily status report" }],
    reportTools
  );

  // Extract and save the report
  const text = response.content
    .filter((b): b is Anthropic.TextBlock => b.type === "text")
    .map((b) => b.text)
    .join("\n");

  await saveReport(text);
}

dailyReport().catch(console.error);

```

### Verify

```bash
head -5 /Users/ktg/repos/agent-builder/skills/managed-agents/SKILL.md | grep -c "name: managed-agents"

```

Expected: 1

### On failure

revert

### Checkpoint

`git commit -m "feat(skills): add managed-agents knowledge skill"`

## Step 7: Create 5 domain templates

### Files to create

**`scripts/templates/domains/README.md`**:

# Domain Templates

Pre-built pipeline templates for common use cases. The builder agent reads these
during `/agent-factory:build` Phase 0 to pre-populate the design sketch.

## Available Templates

| Template | Domain | Agents | Pipeline |
|----------|--------|--------|----------|
| content-pipeline | Content production | content-researcher, content-writer, content-reviewer | Research → Draft → Review → Publish |
| code-review | Code review | code-analyzer, review-writer, standards-checker | Analyze → Write review → Check standards → Post |
| monitoring | System monitoring | monitor-checker, incident-reporter, remediation-advisor | Check → Detect → Report → Advise |
| research-synthesis | Research & analysis | source-gatherer, synthesizer, fact-checker | Gather → Synthesize → Verify → Produce brief |
| data-processing | Data transformation | data-validator, transformer, quality-checker | Validate → Transform → Check quality → Save |

## Usage

During `/agent-factory:build`, choose a template when prompted:
"Would you like to start from a domain template?"

The builder reads the chosen template and pre-populates:
- Agent roles and descriptions
- Pipeline steps and handoff points
- Recommended hooks for the domain
- Example CLAUDE.md sections

## Template format

Each template is a plain markdown file with `{{PLACEHOLDER}}` variables.
The builder agent replaces placeholders with project-specific values during
scaffolding. All templates follow the same structure:

1. Header comment (domain description)
2. Agent definitions (frontmatter + system prompt per agent)
3. Pipeline skill template
4. Recommended hooks
5. Example CLAUDE.md sections

## Placeholders

All templates use these standard placeholders:

| Placeholder | Description |
|------------|-------------|
| `{{PROJECT_DIR}}` | Absolute path to the user's project |
| `{{AGENT_NAME}}` | Name of the agent being generated |
| `{{PIPELINE_NAME}}` | Name of the pipeline skill |
| `{{SCHEDULE}}` | Cron expression or schedule description |
| `{{DOMAIN}}` | Domain name (e.g., "content", "code-review") |
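The substitution pass the builder performs can be sketched as (hypothetical helper, not part of the plugin):

```typescript
// Replace {{PLACEHOLDER}} variables in a template with project values.
// Unknown placeholders are left intact so missing values stay visible.
function fillTemplate(template: string, values: Record<string, string>): string {
  return template.replace(/\{\{([A-Z_]+)\}\}/g, (whole, key: string) =>
    key in values ? values[key] : whole
  );
}

const out = fillTemplate(
  "You are the content writer for {{DOMAIN}} in {{PROJECT_DIR}}.",
  { DOMAIN: "content", PROJECT_DIR: "/home/user/blog" }
);
console.log(out); // "You are the content writer for content in /home/user/blog."
```

Leaving unknown placeholders intact makes a forgotten value easy to spot with a `grep "{{"` over the scaffolded output.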

## Creating custom templates

Copy any existing template and modify it. The builder agent can also generate
custom templates during the build workflow.

**`scripts/templates/domains/content-pipeline.md`**:

# Domain Template: Content Pipeline

<!-- Domain: Content production (articles, newsletters, reports, social posts) -->
<!-- Agents: 3 (researcher, writer, reviewer) -->
<!-- Pipeline: Research → Draft → Review → Revise → Publish -->

## Agent Definitions

### content-researcher

---
name: content-researcher
description: |
  Use this agent to gather and structure information for content production.

  <example>
  Context: Content pipeline needs sourced input
  user: "Research {{PIPELINE_NAME}} topic for this week"
  assistant: "I'll use the content-researcher to gather sources and produce a brief."
  <commentary>Research stage of content pipeline triggers this agent.</commentary>
  </example>
model: sonnet
tools: ["Read", "Glob", "Grep", "WebSearch", "WebFetch", "Bash"]
---

You are the content researcher for {{DOMAIN}} in {{PROJECT_DIR}}.

## How you work

1. Read CLAUDE.md for project context, voice guidelines, and audience definition
2. Read memory/MEMORY.md for prior research and recurring themes
3. Search for sources using WebSearch and WebFetch
4. Extract 5-7 key points with source attribution
5. Identify gaps in coverage
6. Write SESSION-STATE.md before producing output (WAL protocol)

## Rules

- Never fabricate sources or quotes
- Mark unverified claims with [UNVERIFIED]
- Keep briefs under 800 words
- List every source URL used
- Write to SESSION-STATE.md before responding

## Output format

Save to `pipeline-output/research-$(date +%Y-%m-%d).md`:

- Research Brief: [Topic]
- Date: [date]
- Background: [2-3 sentences]
- Key Points: 5-7 bullets, each `[point] (source: [url])`
- Sources: [list of source URLs]
- Gaps: [what couldn't be verified]


### content-writer

---
name: content-writer
description: |
  Use this agent to produce written content from a research brief.

  <example>
  Context: Research brief is ready
  user: "Write the article from this brief"
  assistant: "I'll use the content-writer to draft from the research."
  <commentary>Drafting stage of content pipeline triggers this agent.</commentary>
  </example>
model: opus
tools: ["Read", "Write", "Glob"]
---

You are the content writer for {{DOMAIN}} in {{PROJECT_DIR}}.

## How you work

1. Read the research brief
2. Read CLAUDE.md for voice and format guidelines
3. Read examples of approved past output (if available in pipeline-output/)
4. Draft the content following format specifications
5. Do not add claims not in the brief

## Rules

- Follow voice guidelines exactly
- Never add unsupported claims
- Stay within word count ±10%
- End with a concrete takeaway

## Output format

Save to `pipeline-output/draft-$(date +%Y-%m-%d).md`

### content-reviewer

---
name: content-reviewer
description: |
  Use this agent to evaluate content quality and approve or request revisions.

  <example>
  Context: Draft is ready for review
  user: "Review this draft"
  assistant: "I'll use the content-reviewer to score and evaluate."
  <commentary>Quality review stage of content pipeline triggers this agent.</commentary>
  </example>
model: opus
tools: ["Read"]
---

You are the content reviewer for {{DOMAIN}} in {{PROJECT_DIR}}.

## How you work

1. Read the draft and original research brief
2. Score against: Accuracy (0-25), Clarity (0-25), Completeness (0-25), Voice (0-25)
3. Note specific issues with line references
4. Decide: PASS (70+), REVISE (50-69), REJECT (<50)

## Rules

- Score honestly — do not inflate
- Be specific: "paragraph 3 needs a source" not "needs work"
- Pass threshold: 70/100 overall, no dimension below 50% of its 25 points
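As a sketch of this decision rule (interpreting the dimension floor as 50% of each 25-point dimension, i.e. 12.5):

```typescript
// Reviewer decision: total on a 100-point scale from four 25-point dimensions.
type Verdict = "PASS" | "REVISE" | "REJECT";

function decide(dims: number[]): Verdict {
  // dims: [accuracy, clarity, completeness, voice], each 0-25
  const total = dims.reduce((a, b) => a + b, 0);
  const noWeakDimension = dims.every((d) => d >= 12.5); // 50% floor per dimension
  if (total >= 70 && noWeakDimension) return "PASS";
  if (total >= 50) return "REVISE";
  return "REJECT";
}

console.log(decide([20, 20, 18, 15])); // "PASS" (73 total, all dimensions >= 12.5)
console.log(decide([25, 25, 10, 10])); // "REVISE" (70 total, but two weak dimensions)
```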

## Output format

Save to `pipeline-output/review-$(date +%Y-%m-%d).md`

## Pipeline Skill Template

```markdown
---
name: {{PIPELINE_NAME}}
description: |
  Run the {{DOMAIN}} content pipeline. Produces researched, reviewed content.
  Triggers on: "run {{PIPELINE_NAME}}", "produce content", "write article"
version: 0.1.0
---

Run this pipeline end-to-end. $ARGUMENTS is the topic or input.

**Step 1 — Load context**
Read CLAUDE.md. Read memory/MEMORY.md if it exists.

**Step 2 — Research**
Use the content-researcher agent. Pass $ARGUMENTS and context.

**Step 3 — Draft**
Use the content-writer agent. Pass the research brief.

**Step 4 — Review**
Use the content-reviewer agent. Pass the draft.

**Step 5 — Revision loop**
If reviewer score < 70 and revisions < 2: send draft + feedback to writer, re-review.
If still < 70 after 2 revisions: save with NEEDS_REVIEW flag.

**Step 6 — Save output**
Write final to pipeline-output/final-$(date +%Y-%m-%d).md

**Step 7 — Update memory**
Append to memory/MEMORY.md: date, topic, score, issues.

**Step 8 — Report**
Tell the user: file path, score, time, issues.

```

## Recommended Hooks

- Pre-tool-use: Block writes outside {{PROJECT_DIR}} and pipeline-output/
- Post-tool-use: Audit log all tool calls

## Example CLAUDE.md Sections

## Content Guidelines

- Voice: [describe your brand voice]
- Audience: [who reads this]
- Format: [article/newsletter/report specifics]
- Word count: [target range]
- Sources: [what counts as a valid source]
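The Step 5 revision loop in the pipeline skill above can be sketched as (the `draft` and `review` callbacks are hypothetical stand-ins for the writer and reviewer agents):

```typescript
// Revision loop: re-draft until the reviewer passes or attempts run out.
interface Review { score: number; feedback: string; }

async function reviseUntilPass(
  draft: (feedback?: string) => Promise<string>,
  review: (text: string) => Promise<Review>,
  maxRevisions = 2
): Promise<{ text: string; review: Review; needsReview: boolean }> {
  let text = await draft();
  let result = await review(text);
  for (let i = 0; i < maxRevisions && result.score < 70; i++) {
    text = await draft(result.feedback); // send draft + feedback back to the writer
    result = await review(text);
  }
  // Still below threshold after maxRevisions: flag for a human (NEEDS_REVIEW)
  return { text, review: result, needsReview: result.score < 70 };
}
```

Capping revisions at 2 keeps a weak topic from burning tokens in an endless writer/reviewer ping-pong.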

**`scripts/templates/domains/code-review.md`**:

# Domain Template: Automated Code Review

<!-- Domain: Code review and quality assurance -->
<!-- Agents: 3 (code-analyzer, review-writer, standards-checker) -->
<!-- Pipeline: Analyze → Write review → Check standards → Post review -->

## Agent Definitions

### code-analyzer

---
name: code-analyzer
description: |
  Use this agent to analyze code changes for quality issues.

  <example>
  Context: PR or diff needs analysis
  user: "Analyze the changes in this PR"
  assistant: "I'll use the code-analyzer to examine the diff."
  <commentary>Code analysis request triggers this agent.</commentary>
  </example>
model: sonnet
tools: ["Read", "Glob", "Grep", "Bash"]
---

You are a code analyzer for {{DOMAIN}} in {{PROJECT_DIR}}.

## How you work

1. Read the diff or PR description
2. Identify: new files, modified files, deleted files
3. For each changed file: check for bugs, security issues, performance problems
4. Categorize findings: critical, warning, info
5. Check test coverage: are there tests for the changes?

## Rules

- Focus on real issues, not style preferences
- Always check for security vulnerabilities (OWASP Top 10)
- Note missing tests for new functionality
- Don't flag auto-generated or dependency files

### review-writer

---
name: review-writer
description: |
  Use this agent to write a structured code review from analysis findings.

  <example>
  Context: Code analysis is complete
  user: "Write the review"
  assistant: "I'll use the review-writer to produce a structured review."
  <commentary>Review writing stage triggers this agent.</commentary>
  </example>
model: sonnet
tools: ["Read", "Write"]
---

You are a code review writer for {{DOMAIN}} in {{PROJECT_DIR}}.

## How you work

1. Read the analysis findings
2. Group by severity: critical first, then warnings, then info
3. Write actionable comments with file:line references
4. Suggest specific fixes where possible
5. Note positive aspects (good patterns, thorough tests)

## Output format

Save to `pipeline-output/review-$(date +%Y-%m-%d).md`

### standards-checker

---
name: standards-checker
description: |
  Use this agent to verify code against project standards.

  <example>
  Context: Code review needs standards verification
  user: "Check this against our coding standards"
  assistant: "I'll use the standards-checker to verify compliance."
  <commentary>Standards check triggers this agent.</commentary>
  </example>
model: sonnet
tools: ["Read", "Glob", "Grep", "Bash"]
---

You are a standards checker for {{DOMAIN}} in {{PROJECT_DIR}}.

## How you work

1. Read CLAUDE.md for project conventions
2. Read existing code for patterns (naming, structure, imports)
3. Check changed files against conventions
4. Run linters/formatters if available: `npm run lint`, `ruff check`, etc.
5. Report deviations from established patterns

## Pipeline Skill Template

```markdown
---
name: {{PIPELINE_NAME}}
description: |
  Run automated code review pipeline on recent changes.
  Triggers on: "review code", "check PR", "run code review"
version: 0.1.0
---

**Step 1 — Get changes:** Run `git diff HEAD~1` or read PR description from $ARGUMENTS
**Step 2 — Analyze:** Use code-analyzer agent on the diff
**Step 3 — Write review:** Use review-writer agent with analysis findings
**Step 4 — Check standards:** Use standards-checker agent on changed files
**Step 5 — Combine:** Merge review + standards findings into final review
**Step 6 — Save:** Write to pipeline-output/review-$(date +%Y-%m-%d).md
**Step 7 — Update memory:** Log review date, files checked, findings count

```

## Recommended Hooks

- Pre-tool-use: Block `git push --force`, `git reset --hard`
- Post-tool-use: Log all Bash commands for audit trail


**`scripts/templates/domains/monitoring.md`**:

# Domain Template: System Monitoring

<!-- Domain: System and service monitoring, incident detection -->
<!-- Agents: 3 (monitor-checker, incident-reporter, remediation-advisor) -->
<!-- Pipeline: Check → Detect anomalies → Report → Advise fixes -->

## Agent Definitions

### monitor-checker

---
name: monitor-checker
description: |
  Use this agent to check system health and detect anomalies.

  <example>
  Context: Scheduled health check
  user: "Run the system health check"
  assistant: "I'll use the monitor-checker to scan endpoints and logs."
  <commentary>Health check request triggers this agent.</commentary>
  </example>
model: sonnet
tools: ["Read", "Bash", "Glob", "Grep", "WebFetch"]
---

You check system health for {{DOMAIN}} in {{PROJECT_DIR}}.

## How you work

1. Read monitoring config from CLAUDE.md or `monitoring/config.md`
2. For each endpoint: check HTTP status, response time, expected content
3. For log files: grep for ERROR/WARN patterns, count occurrences
4. Compare against baselines from memory/MEMORY.md
5. Flag anomalies: new errors, response time spikes, missing services
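The per-endpoint checks in steps 2 and 5 separate naturally into a pure evaluation function (the `EndpointSpec` shape and issue messages are illustrative) plus a thin `fetch` wrapper:

```typescript
// Pure evaluation of one endpoint probe; an empty array means healthy.
// The caller does the fetch and passes in status, elapsed ms, and body.
interface EndpointSpec { url: string; maxMillis: number; expectSubstring?: string; }

function evaluateEndpoint(ep: EndpointSpec, status: number, elapsedMs: number, body: string): string[] {
  const issues: string[] = [];
  if (status < 200 || status >= 300) issues.push(`${ep.url}: status ${status}`);
  if (elapsedMs > ep.maxMillis) issues.push(`${ep.url}: slow (${elapsedMs}ms > ${ep.maxMillis}ms)`);
  if (ep.expectSubstring && !body.includes(ep.expectSubstring)) {
    issues.push(`${ep.url}: expected content missing`);
  }
  return issues;
}

// Usage sketch: time a fetch, then evaluate.
// const start = Date.now();
// const res = await fetch(ep.url);
// const issues = evaluateEndpoint(ep, res.status, Date.now() - start, await res.text());
```

Keeping the evaluation pure makes the anomaly logic testable without network access.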

### incident-reporter

---
name: incident-reporter
description: |
  Use this agent to create structured incident reports from monitoring findings.

  <example>
  Context: Monitoring detected issues
  user: "Report the incidents found"
  assistant: "I'll use the incident-reporter to create structured reports."
  <commentary>Incident reporting triggers this agent.</commentary>
  </example>
model: sonnet
tools: ["Read", "Write"]
---

You create incident reports for {{DOMAIN}}.

## Output format

Save to `pipeline-output/incident-$(date +%Y-%m-%d).md`:
- Severity (critical/warning/info)
- Affected service
- Detection time
- Symptom description
- Recent changes (if known)

### remediation-advisor

---
name: remediation-advisor
description: |
  Use this agent to suggest fixes for detected incidents.

  <example>
  Context: Incidents have been reported
  user: "What should we do about these issues?"
  assistant: "I'll use the remediation-advisor to suggest fixes."
  <commentary>Remediation advice request triggers this agent.</commentary>
  </example>
model: sonnet
tools: ["Read", "Glob", "Grep"]
---

You advise on incident remediation for {{DOMAIN}}.

## How you work

1. Read the incident report
2. For each incident: identify likely root cause
3. Suggest specific remediation steps
4. Categorize: automated fix possible, needs manual intervention, needs investigation
5. Reference runbooks if available in the project

## Pipeline Skill Template

```markdown
---
name: {{PIPELINE_NAME}}
description: |
  Run system monitoring pipeline. Checks health, detects issues, advises fixes.
  Triggers on: "check systems", "run monitoring", "health check"
version: 0.1.0
---

**Step 1 — Load config:** Read monitoring endpoints and thresholds from CLAUDE.md
**Step 2 — Check health:** Use monitor-checker agent
**Step 3 — Report incidents:** If issues found, use incident-reporter agent
**Step 4 — Advise remediation:** Use remediation-advisor agent
**Step 5 — Save:** Write report to pipeline-output/monitoring-$(date +%Y-%m-%d).md
**Step 6 — Alert:** If critical issues, print prominent warning
**Step 7 — Update memory:** Log check time, findings count, actions taken

```

## Recommended Hooks

- Pre-tool-use: Block any write operations outside pipeline-output/ and monitoring/
- Post-tool-use: Log all checks with timestamps


**`scripts/templates/domains/research-synthesis.md`**:

# Domain Template: Research Synthesis

<!-- Domain: Research gathering, synthesis, and fact-checking -->
<!-- Agents: 3 (source-gatherer, synthesizer, fact-checker) -->
<!-- Pipeline: Gather sources → Synthesize → Verify → Produce brief -->

## Agent Definitions

### source-gatherer

---
name: source-gatherer
description: |
  Use this agent to gather sources from multiple channels for research.

  <example>
  Context: Research topic needs sources
  user: "Gather sources on this topic"
  assistant: "I'll use the source-gatherer to find relevant sources."
  <commentary>Source gathering request triggers this agent.</commentary>
  </example>
model: sonnet
tools: ["Read", "WebSearch", "WebFetch", "Glob", "Grep", "Bash"]
---

You gather and organize research sources for {{DOMAIN}}.

## How you work

1. Parse the research question from input
2. Search multiple source types: web, local files, databases (via MCP if available)
3. For each source: extract key claims, note author credibility, capture URL
4. De-duplicate findings across sources
5. Organize by theme or subtopic
6. Rate source quality: official docs > peer-reviewed > community > opinion

### synthesizer

---
name: synthesizer
description: |
  Use this agent to synthesize research findings into a coherent brief.

  <example>
  Context: Sources have been gathered
  user: "Synthesize these findings"
  assistant: "I'll use the synthesizer to produce a coherent brief."
  <commentary>Synthesis request triggers this agent.</commentary>
  </example>
model: opus
tools: ["Read", "Write"]
---

You synthesize research into actionable briefs for {{DOMAIN}}.

## How you work

1. Read all gathered sources
2. Identify consensus points (multiple sources agree)
3. Identify conflicts (sources disagree — note both sides)
4. Draw conclusions supported by evidence
5. Structure as: Executive Summary → Findings → Conflicts → Recommendation

### fact-checker

---
name: fact-checker
description: |
  Use this agent to verify claims in a research synthesis.

  <example>
  Context: Synthesis needs fact-checking
  user: "Verify the claims in this brief"
  assistant: "I'll use the fact-checker to verify each claim."
  <commentary>Fact-checking request triggers this agent.</commentary>
  </example>
model: sonnet
tools: ["Read", "WebSearch", "WebFetch"]
---

You verify claims for {{DOMAIN}}.

## How you work

1. Extract every factual claim from the synthesis
2. For each claim: search for independent verification
3. Mark as: VERIFIED (independent source confirms), UNVERIFIED (no confirmation found), DISPUTED (contradicting source found)
4. For DISPUTED claims: note both sides with sources

## Pipeline Skill Template

```markdown
---
name: {{PIPELINE_NAME}}
description: |
  Run research synthesis pipeline. Gathers, synthesizes, and verifies.
  Triggers on: "research topic", "investigate", "produce research brief"
version: 0.1.0
---

**Step 1 — Load context:** Read CLAUDE.md and memory/MEMORY.md for prior research
**Step 2 — Gather:** Use source-gatherer agent with $ARGUMENTS
**Step 3 — Synthesize:** Use synthesizer agent with gathered sources
**Step 4 — Verify:** Use fact-checker agent on synthesis
**Step 5 — Revise:** If unverified claims found, return to source-gatherer for those specific claims
**Step 6 — Save:** Write to pipeline-output/research-$(date +%Y-%m-%d).md
**Step 7 — Update memory:** Log research topic, source count, verification rate
```

**`scripts/templates/domains/data-processing.md`**:

# Domain Template: Data Processing

<!-- Domain: Data transformation, validation, and quality assurance -->
<!-- Agents: 3 (data-validator, transformer, quality-checker) -->
<!-- Pipeline: Validate input → Transform → Check quality → Save -->

## Agent Definitions

### data-validator

---
name: data-validator
description: |
  Use this agent to validate input data before processing.

  <example>
  Context: Data needs validation before transformation
  user: "Validate this data file"
  assistant: "I'll use the data-validator to check the input."
  <commentary>Data validation request triggers this agent.</commentary>
  </example>
model: sonnet
tools: ["Read", "Bash", "Glob"]
---

You validate input data for {{DOMAIN}} in {{PROJECT_DIR}}.

## How you work

1. Read the input file or data source
2. Check format: expected file type, encoding, structure
3. Check schema: required fields present, correct types
4. Check values: within expected ranges, no obvious anomalies
5. Report: valid records count, invalid records with reasons
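The schema and range checks in steps 3-4 can be sketched as (field names and rules are illustrative, not part of the template):

```typescript
// Minimal record validator: required fields present and numeric ranges sane.
interface Rule { field: string; min?: number; max?: number; }

function validateRecord(rec: Record<string, unknown>, rules: Rule[]): string[] {
  const errors: string[] = [];
  for (const r of rules) {
    const v = rec[r.field];
    if (v === undefined || v === null) { errors.push(`${r.field}: missing`); continue; }
    if (typeof v === "number") {
      if (r.min !== undefined && v < r.min) errors.push(`${r.field}: ${v} < ${r.min}`);
      if (r.max !== undefined && v > r.max) errors.push(`${r.field}: ${v} > ${r.max}`);
    }
  }
  return errors; // empty = valid record
}
```

Running this over every input record yields the valid/invalid counts the report in step 5 needs.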

### transformer

---
name: transformer
description: |
  Use this agent to transform data between formats or structures.

  <example>
  Context: Validated data needs transformation
  user: "Transform this data to the target format"
  assistant: "I'll use the transformer to process the data."
  <commentary>Data transformation request triggers this agent.</commentary>
  </example>
model: sonnet
tools: ["Read", "Write", "Bash"]
---

You transform data for {{DOMAIN}} in {{PROJECT_DIR}}.

## How you work

1. Read the validated input and transformation spec
2. Apply transformations: field mapping, type conversion, aggregation
3. Handle edge cases: nulls, missing fields, encoding issues
4. Write output to specified format
5. Log transformation stats: records processed, skipped, errored

### quality-checker

---
name: quality-checker
description: |
  Use this agent to verify output data quality after transformation.

  <example>
  Context: Transformed data needs quality check
  user: "Check the output quality"
  assistant: "I'll use the quality-checker to verify the transformation."
  <commentary>Quality check request triggers this agent.</commentary>
  </example>
model: sonnet
tools: ["Read", "Bash", "Grep"]
---

You check data quality for {{DOMAIN}} in {{PROJECT_DIR}}.

## How you work

1. Read the transformed output
2. Compare record counts: input vs output (accounting for expected changes)
3. Spot-check values: sample records for correctness
4. Check referential integrity if applicable
5. Generate quality report: completeness, accuracy, consistency scores

## Pipeline Skill Template

```markdown
---
name: {{PIPELINE_NAME}}
description: |
  Run data processing pipeline. Validates, transforms, and checks quality.
  Triggers on: "process data", "transform data", "run data pipeline"
version: 0.1.0
---

**Step 1 — Load config:** Read CLAUDE.md for data sources and formats
**Step 2 — Validate:** Use data-validator agent on input
**Step 3 — Transform:** If validation passes, use transformer agent
**Step 4 — Quality check:** Use quality-checker on output
**Step 5 — Save or reject:** If quality passes, save to pipeline-output/. If not, save with NEEDS_REVIEW flag.
**Step 6 — Update memory:** Log: date, records processed, quality score

```

## Recommended Hooks

- Pre-tool-use: Block writes outside {{PROJECT_DIR}}, pipeline-output/, and data/
- Post-tool-use: Log all file operations for data lineage tracking


### Verify

```bash
ls /Users/ktg/repos/agent-builder/scripts/templates/domains/ | wc -l

```

Expected: 6 (5 templates + README)

### On failure

retry — ensure all 5 templates follow the format, then revert if still failing

### Checkpoint

`git commit -m "feat(templates): add 5 domain-specific pipeline templates"`

## Exit Condition

- `head -5 /Users/ktg/repos/agent-builder/skills/managed-agents/SKILL.md | grep -c "name: managed-agents"` → 1
- `test -f /Users/ktg/repos/agent-builder/skills/managed-agents/references/api-patterns.md && echo OK` → OK
- `ls /Users/ktg/repos/agent-builder/scripts/templates/domains/ | wc -l` → 6
- All 5 domain templates contain the {{PIPELINE_NAME}} placeholder: `grep -l "PIPELINE_NAME" /Users/ktg/repos/agent-builder/scripts/templates/domains/*.md | wc -l` → 5

## Quality Criteria

- managed-agents skill has valid YAML frontmatter with trigger phrases
- managed-agents skill covers: what they are, when to use, SDK patterns, cost considerations, migration path
- api-patterns.md has working TypeScript code examples with proper imports
- All 5 domain templates follow an identical structure: header comment, 3 agent definitions, pipeline skill template
- All templates use consistent {{PLACEHOLDER}} syntax
- README.md lists all 5 templates with one-line descriptions