# Example 03: Web Search and Fetch
**Capability:** Claude Code can search the web and fetch full page content as part of any task. Both tools are built-in; no MCP server required.

**OpenClaw equivalent:** Brave Search API + web_fetch with Firecrawl fallback.
> **Building on Examples 01-02.** You have raw research (01) organized into a report structure (02). This example adds the critical layer: verified, sourced information from the live web. In the Cumulative Path below, you enrich your existing research with fresh data.
## The Prompt
```
Search for the latest Claude Code release notes, summarize the 5 most impactful
features added in recent versions, and write the summary to changelog-summary.md.

For each feature include:

- Feature name and the version it shipped in
- One sentence on what it does
- One sentence on why it matters for agent workflows

At the top of the file add a "Researched on" line with today's date and the
source URLs you used.
```
## What Happens
Claude Code will:
- Use WebSearch to find Claude Code release notes and changelog pages
- Use WebFetch to retrieve the full changelog content from Anthropic's docs
- Parse and rank features by impact on agent workflows
- Use Write to create `changelog-summary.md` with source attribution
- Confirm the file was written
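The shape of the output file can be sketched as a small helper. This is a minimal illustration of the structure the prompt requests, not how Claude Code actually assembles the file; `build_summary` and its field names are hypothetical:

```python
from datetime import date

def build_summary(features, sources):
    """Assemble the changelog summary the prompt asks for.

    `features`: dicts with name, version, what, why, impact keys
    (an assumed schema for illustration only).
    """
    lines = [f"Researched on {date.today().isoformat()}"]
    lines += [f"Source: {url}" for url in sources]
    lines.append("")
    # Rank by impact and keep the 5 most impactful features
    top = sorted(features, key=lambda f: f["impact"], reverse=True)[:5]
    for f in top:
        lines.append(f"## {f['name']} ({f['version']})")
        lines.append(f"- What it does: {f['what']}")
        lines.append(f"- Why it matters: {f['why']}")
        lines.append("")
    return "\n".join(lines)
```

The point of the sketch: the prompt fully specifies this structure (date line, source attribution, per-feature bullets), which is why Claude can produce it deterministically.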
## Why This Matters
Web search makes Claude Code's knowledge current. The training cutoff becomes less relevant when Claude can verify facts before writing them down.
The combination of WebSearch (for discovery) and WebFetch (for depth) mirrors
what a human researcher does: scan headlines, then read the full article.
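That scan-then-read loop can be sketched generically. Here `search` and `fetch` are injected stand-ins for WebSearch and WebFetch (the real tools run inside Claude Code, not through code like this):

```python
def research(query, search, fetch, top_n=3):
    """Discovery then depth: scan result titles, read only the top hits.

    `search(query)` -> list of (title, url) pairs; `fetch(url)` -> page text.
    Both are caller-supplied stand-ins for WebSearch/WebFetch.
    """
    hits = search(query)
    return [
        {"title": title, "url": url, "content": fetch(url)}
        for title, url in hits[:top_n]  # spend depth only where discovery pointed
    ]
```

With stub functions plugged in, you can see the shape of the result (title, url, content per hit) before any real search backend is involved.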
This is the same pattern used in the PREP phase of the article production
pipeline that drives the fromaitochitta.com content workflow.
### Note on Sources
Claude Code cites sources when writing research outputs. If you need full traceability, add "include the exact URL next to each claim" to the prompt.
## Carry Forward
You now have verified, sourced research. This is the foundation for everything ahead:
- Example 05 persists this research state to memory for future sessions
- Example 06 runs multi-agent review on your accumulated findings
- Example 10 makes web search the first step of every pipeline run
The combination of WebSearch (for discovery) and WebFetch (for depth) is the most reusable pattern in this repo. You will use it in almost every real workflow.
## The Cumulative Path
If you ran Examples 01-02, you have organized research in `pipeline-output/research-report/`. This prompt verifies and enriches it.
```
Read pipeline-output/research-report/findings.md. For each item listed:

1. Search the web for the latest data (star counts, release dates, status)
2. Verify the original claims still hold
3. Add a "Verified: [today's date]" or "Changed: [what changed]" line

Update findings.md in place. Add any new source URLs to sources.md.
At the bottom of findings.md, add a "Last verified: [date]" timestamp.
```
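The verification pass boils down to tagging each claim. A toy sketch, assuming a line-per-claim format (not the actual `findings.md` layout) and a `still_holds` map standing in for the web checks Claude performs:

```python
from datetime import date

def annotate(findings, still_holds):
    """Append a Verified/Changed marker to each claim, plus a footer.

    `findings`: list of claim strings.
    `still_holds`: claim -> bool, a stand-in for live web verification.
    """
    today = date.today().isoformat()
    out = []
    for claim in findings:
        if still_holds.get(claim, False):
            marker = f"Verified: {today}"
        else:
            marker = "Changed: re-check this claim"
        out.append(f"{claim}  [{marker}]")
    out.append(f"Last verified: {today}")
    return out
```

The design choice worth noting: claims that cannot be confirmed default to "Changed" rather than "Verified", so silence from a source never counts as confirmation.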
After running this, your research report has been fact-checked and timestamped. It is now reliable enough to share.
## Now Try It Yourself
Replace the demo topic with a research task from your actual work:
```
Search for the latest information about [your topic]. Summarize the
[number] most important findings with:

- [data point 1]
- [data point 2]
- Source URL for each claim

Write to [your-file].md with a "Researched on [date]" line at the top
and source URLs listed at the bottom.
```
The pattern you just learned: search query + output structure + source attribution. Adding "include the exact URL next to each claim" to any prompt makes Claude cite its sources.
Ideas worth trying:
- Latest pricing and features of tools you are comparing
- Regulatory changes affecting your industry
- What competitors announced this quarter