# Example 01: Agent Runtime
**Capability:** Claude Code executes tools autonomously, streams output as it works, and loops until the task is complete. No user input is needed between steps.

**OpenClaw equivalent:** A long-running daemon with tool execution and streaming output.
## The Prompt

```
Research the top 3 AI frameworks released this month, compare their GitHub stars,
and write a summary to research-output.md

For each framework include:
- Name and release date
- GitHub URL and current star count
- One sentence on what problem it solves
- Your verdict on whether it's worth watching

End the file with a "Verdict" section that ranks all three.
```
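If you want to launch this from a script rather than the interactive session, Claude Code's print mode (`claude -p`) accepts a prompt directly. A minimal sketch; the `build_command` helper is illustrative, not part of any official API:

```python
# Sketch: invoking the example prompt non-interactively. `claude -p` is
# Claude Code's print mode (run one prompt, print the result, exit).
# The helper below is hypothetical scaffolding around that flag.
import subprocess  # used in the commented-out run line below

PROMPT = (
    "Research the top 3 AI frameworks released this month, compare their "
    "GitHub stars, and write a summary to research-output.md"
)

def build_command(prompt: str) -> list[str]:
    """Assemble the headless invocation as an argv list (no shell quoting needed)."""
    return ["claude", "-p", prompt]

# To actually run it (requires Claude Code on your PATH):
# subprocess.run(build_command(PROMPT), check=True)
```

Passing an argv list instead of a shell string avoids quoting problems when your prompt contains quotes or newlines.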
## What Happens
Claude Code will:
- Use WebSearch to find AI framework releases from the current month
- Use WebFetch to retrieve GitHub pages and extract star counts
- Loop through each framework, gathering data tool call by tool call
- Use Write to create `research-output.md` with the structured summary
- Report completion with a summary of what was written
You will see each tool call streamed as it happens. Claude does not ask for confirmation between steps unless it hits something ambiguous.
## Expected Output
After 30-60 seconds, you should see a new file `research-output.md` in the
project root. It will look something like this (content varies by month):
```markdown
# AI Frameworks Released This Month

## 1. ExampleFramework
- **Released:** March 12, 2026
- **GitHub:** https://github.com/example/framework (4,200 stars)
- **What it solves:** Simplifies multi-agent orchestration for Python developers.
- **Verdict:** Worth watching. Growing fast with strong community momentum.

## 2. ...

## Verdict
1. ExampleFramework - most practical for production use
2. ...
```
**How you know it worked:**
- A file called `research-output.md` exists in the project root
- It contains 3 frameworks with star counts and URLs
- It ends with a ranked Verdict section
- You saw WebSearch and WebFetch tool calls streaming in the terminal
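You can automate that checklist with a few lines of Python. This is a quick sanity check mirroring the structure shown above, not an official validator; adjust the patterns if your output differs:

```python
# Sanity-check the generated file against the "How you know it worked" list:
# at least 3 numbered "## N." framework sections, GitHub links, and a
# "## Verdict" heading. Patterns assume the sample layout shown above.
import re
from pathlib import Path  # used in the usage comment below

def looks_complete(text: str) -> bool:
    has_three_entries = len(re.findall(r"^## \d+\.", text, flags=re.M)) >= 3
    has_verdict = "## Verdict" in text
    has_github_links = "https://github.com/" in text
    return has_three_entries and has_verdict and has_github_links

# Usage: looks_complete(Path("research-output.md").read_text())
```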
## Why This Matters
This is the agent loop in action: plan, execute, observe, repeat. The same loop that runs a 300-step pipeline also runs this 5-step research task. The difference is only scale.
Claude Code v2.1.84 added adaptive thinking, which adjusts reasoning depth automatically. Complex sub-tasks get more thought; simple ones proceed fast.
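The loop described above is simple enough to sketch. This is illustrative pseudologic only; every name here is hypothetical, and Claude Code's internals are not exposed this way:

```python
# Illustrative sketch of the agent loop: plan, execute, observe, repeat
# until done (or a step budget runs out). plan_next_step, run_tool, and
# is_done are stand-ins for the model's decisions and tool executions.
def agent_loop(task, plan_next_step, run_tool, is_done, max_steps=50):
    history = []
    for _ in range(max_steps):
        step = plan_next_step(task, history)  # model picks the next tool call
        result = run_tool(step)               # execute (WebSearch, Write, ...)
        history.append((step, result))        # observe: result feeds the next plan
        if is_done(task, history):
            break
    return history
```

Whether the task takes 5 steps or 300, only `is_done` and the planner's choices change; the loop itself is constant.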
## Carry Forward

You just built `research-output.md`. Hold onto it.
- Example 02 organizes raw output like this into a structured project
- Example 06 sends research through a multi-agent review cycle
- Example 10 uses this exact capability as step 1 of a full pipeline
Every example builds on the same core loop: Claude plans, executes tools, observes results, and repeats until done. The task changes. The pattern does not.
## Now Try It Yourself
The demo researched AI frameworks. Replace the topic with something you actually need:
```
Research the top 3 [your topic] this [time period]. For each, include:
- [what you need to know]
- [specific data points: URLs, numbers, dates]
- Your verdict on [your evaluation criteria]

Write the summary to research-output.md with a ranked Verdict section at the end.
```
The pattern you just learned: research question + output structure + file destination. Claude handles everything between the question and the written answer.
Ideas worth trying:
- Competitors in your market and their latest announcements
- Tools for a workflow you want to improve
- Developments in a technology you are evaluating for work