# Example 10: Full Pipeline
**Capability:** A complete workflow combining web search, multi-agent orchestration, file I/O, memory, hooks, and logging in a single Claude Code run.

**OpenClaw equivalent:** End-to-end agent pipeline with messaging, skills, and hooks.
## What This Demonstrates
Every capability from examples 01-09 working together:
| Step | Capability | Example |
|---|---|---|
| 1 | Web search | 03-web-search |
| 2 | Researcher agent | 06-multi-agent |
| 3 | Writer agent | 06-multi-agent |
| 4 | Reviewer agent | 06-multi-agent |
| 5 | File I/O | 02-shell-and-files |
| 6 | Memory update | 05-memory-system |
| 7 | Security logging | 09-security-hooks |
| 8 | Agent runtime loop | 01-agent-runtime |
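
In Claude Code, the three subagents in the table above live as markdown files under `.claude/agents/`. A minimal sketch of what the researcher agent file might look like (the exact frontmatter values here are illustrative, not copied from this repo):

```markdown
---
name: researcher
description: Gathers information from the web and official docs for a given topic.
tools: WebSearch, WebFetch, Read
---
You are a research agent. Given a topic, collect the most relevant facts
and return a structured summary with source URLs for the writer agent.
```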
## The Prompt
```
Run a full research-to-output pipeline on the topic: "How Claude Code handles
permission modes: plan, autoEdit, and bypassPermissions"

Pipeline steps:

1. Use the researcher agent to gather information from the web and official docs
2. Pass the research to the writer agent to draft a 400-word explainer
3. Send the draft to the reviewer agent for accuracy and clarity feedback
4. Incorporate the reviewer's feedback into a final version
5. Save the final version to pipeline-output/permission-modes.md
6. Append a pipeline execution summary to memory/pipeline-log.md with:
   - Date and time
   - Topic researched
   - Word count of final output
   - Any issues encountered
7. Show me the first 10 lines of the output file to confirm everything worked
```
## What Happens
Claude Code will coordinate the full pipeline autonomously:
- The researcher agent uses WebSearch and WebFetch
- The writer agent produces structured prose
- The reviewer agent critiques and returns specific feedback
- A revision loop runs if needed (bounded by maxTurns in settings.json)
- File writes are intercepted by the PostToolUse hook for audit logging
- Memory is updated so the next session knows this pipeline ran
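
As a rough sketch, the hook wiring described above would sit in `.claude/settings.json`; the matcher pattern and script path below are assumptions for illustration, not taken from this repo:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "python3 .claude/hooks/log_writes.py"
          }
        ]
      }
    ]
  }
}
```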
You will see each agent invocation streamed in sequence. The entire pipeline typically completes in 2-4 minutes depending on web fetch latency.
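
The audit logger itself can be a small script that reads the hook payload Claude Code pipes in as JSON on stdin. A minimal sketch in Python, assuming the documented payload fields `tool_name` and `tool_input`; the log path and entry fields are illustrative, not this repo's actual hook:

```python
import json
import sys
from datetime import datetime, timezone


def log_tool_use(payload: dict, log_path: str = "memory/audit-log.jsonl") -> dict:
    """Append one JSONL audit entry per tool call and return the entry."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "tool": payload.get("tool_name"),
        "file": payload.get("tool_input", {}).get("file_path"),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry


def main() -> None:
    # Claude Code delivers the hook payload as JSON on stdin;
    # exiting 0 lets the tool call proceed untouched.
    log_tool_use(json.load(sys.stdin))
    sys.exit(0)
```

The logger never blocks anything; it only observes, which is why it belongs on PostToolUse rather than PreToolUse.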
## Why This Matters
This is what Claude Code looks like as an actual agent platform, not a
chat assistant. The same architecture, with different prompts and agents,
runs the article production pipeline at fromaitochitta.com.
The companion repo you are reading is the minimal version of that setup. Clone it, open Claude Code, and run this prompt to see the full stack work.