# Example 01: Agent Runtime

**Capability:** Claude Code executes tools autonomously, streams output as it works, and loops until the task is complete. No user input needed between steps.

**OpenClaw equivalent:** Long-running daemon with tool execution and streaming output.

---

## The Prompt

```
Research the top 3 AI frameworks released this month, compare their GitHub stars,
and write a summary to research-output.md

For each framework include:
- Name and release date
- GitHub URL and current star count
- One sentence on what problem it solves
- Your verdict on whether it's worth watching

End the file with a "Verdict" section that ranks all three.
```
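
If you would rather run this without an interactive session, one minimal sketch (assuming the `claude` CLI is installed and on PATH; `-p` is its one-shot print mode) looks like this:

```python
# Run this example's prompt through Claude Code non-interactively.
# Assumes the `claude` CLI is on PATH; `-p` runs one prompt and exits.
import subprocess

# Paste the complete prompt from "The Prompt" section above.
PROMPT = """Research the top 3 AI frameworks released this month, compare their GitHub stars,
and write a summary to research-output.md
"""  # (remainder of the prompt elided here for brevity)

# Prints Claude's result to this terminal; raises if the CLI fails.
subprocess.run(["claude", "-p", PROMPT], check=True)
```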

---

## What Happens

Claude Code will:

1. Use WebSearch to find AI framework releases from the current month
2. Use WebFetch to retrieve GitHub pages and extract star counts (a hand-rolled equivalent appears below)
3. Loop through each framework, gathering data tool call by tool call
4. Use Write to create `research-output.md` with the structured summary
5. Report completion with a summary of what was written

You will see each tool call streamed as it happens. Claude does not ask for confirmation between steps unless it hits something ambiguous.
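
For a sense of the data behind step 2, here is a hand-rolled equivalent using the public GitHub REST API (the repo name is only an illustration; Claude fetches this via WebFetch instead):

```python
# Fetch a repository's current star count from the GitHub REST API.
# This is the same figure Claude extracts with WebFetch in step 2.
import json
import urllib.request

def star_count(repo: str) -> int:
    """Return the stargazer count for an 'owner/name' repository."""
    url = f"https://api.github.com/repos/{repo}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["stargazers_count"]

print(star_count("anthropics/claude-code"))
```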

---

## Expected Output

After 30-60 seconds, you should see a new file `research-output.md` in the project root. It will look something like this (content varies by month):

```markdown
# AI Frameworks Released This Month

## 1. ExampleFramework
- **Released:** March 12, 2026
- **GitHub:** https://github.com/example/framework (4,200 stars)
- **What it solves:** Simplifies multi-agent orchestration for Python developers.
- **Verdict:** Worth watching. Growing fast with strong community momentum.

## 2. ...

## Verdict
1. ExampleFramework - most practical for production use
2. ...
```

**How you know it worked** (a check script follows this list):

- A file called `research-output.md` exists in the project root
- It contains 3 frameworks with star counts and URLs
- It ends with a ranked Verdict section
- You saw WebSearch and WebFetch tool calls streaming in the terminal
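
To automate that checklist, here is a minimal sketch (it assumes the file landed in the current directory and uses `## ` headings; Claude's exact wording may differ):

```python
# Quick sanity check for the research-output.md this example produces.
from pathlib import Path

text = Path("research-output.md").read_text()
headings = [line for line in text.splitlines() if line.startswith("## ")]

assert "Verdict" in text, "missing Verdict section"
# 3 framework headings plus the final Verdict heading = 4 "## " lines.
assert len(headings) >= 4, f"expected 4+ sections, found {len(headings)}"
print("research-output.md looks complete")
```
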
---

## Why This Matters

This is the agent loop in action: plan, execute, observe, repeat. The same loop that runs a 300-step pipeline also runs this 5-step research task. The difference is only scale.
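
As a mental model only (a toy sketch, not Claude Code's actual implementation; `model` and `run_tool` are hypothetical helpers), the loop looks roughly like this:

```python
# Toy sketch of a plan-execute-observe agent loop.
# `model` proposes the next tool call; `run_tool` executes it.
def agent_loop(task: str, model, run_tool, max_steps: int = 50) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = model(history)        # plan: pick the next tool call, or finish
        if action["type"] == "done":
            return action["summary"]   # task complete, report back
        result = run_tool(action)      # execute: run the chosen tool
        history.append({"role": "tool", "content": result})  # observe
    raise RuntimeError("step budget exhausted")
```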

Claude Code v2.1.84 added adaptive thinking, which adjusts reasoning depth automatically. Complex sub-tasks get more thought; simple ones proceed fast.