feat: initial companion repo for OpenClaw vs Claude Code article
40 files demonstrating every major OpenClaw capability using Claude Code:

- 3 agents (researcher, writer, reviewer)
- 3 skills (daily-briefing, slack-message, web-research)
- 2 security hooks (pre-tool-use blocker, post-tool-use logger)
- 10 self-contained examples with copy-paste prompts
- Complete feature map (20 capabilities: 11 full match, 7 different, 2 gaps)
- Security docs including NemoClaw comparison
- Automation, messaging, browser, and memory documentation

Zero dependencies. Clone and run.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
commit 2491f5c732
40 changed files with 2037 additions and 0 deletions
49  examples/01-agent-runtime/prompt.md  Normal file
@@ -0,0 +1,49 @@
# Example 01: Agent Runtime

**Capability:** Claude Code executes tools autonomously, streams output as it works,
and loops until the task is complete. No user input needed between steps.

**OpenClaw equivalent:** Long-running daemon with tool execution and streaming output.

---

## The Prompt

```
Research the top 3 AI frameworks released this month, compare their GitHub stars,
and write a summary to research-output.md

For each framework include:
- Name and release date
- GitHub URL and current star count
- One sentence on what problem it solves
- Your verdict on whether it's worth watching

End the file with a "Verdict" section that ranks all three.
```

---

## What Happens

Claude Code will:

1. Use WebSearch to find AI framework releases from the current month
2. Use WebFetch to retrieve GitHub pages and extract star counts
3. Loop through each framework, gathering data tool call by tool call
4. Use Write to create `research-output.md` with the structured summary
5. Report completion with a summary of what was written

You will see each tool call streamed as it happens. Claude does not ask for
confirmation between steps unless it hits something ambiguous.

---
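The five steps above are one pass of a generic tool loop. Here is a minimal Python sketch of that loop, assuming a hypothetical `model_step` function that stands in for the model call; the tool names mirror Claude Code's WebSearch, WebFetch, and Write, but this is an illustration, not Claude Code's actual implementation:

```python
# Illustrative sketch of the tool loop described above.
# NOT Claude Code's real implementation: `model_step` and the hardcoded
# plan are hypothetical stand-ins for the model deciding its next action.

def model_step(history):
    """Stand-in for a model call: return the next tool call, or None when done."""
    plan = [
        ("WebSearch", {"query": "AI frameworks released this month"}),
        ("WebFetch",  {"url": "https://github.com/example/framework"}),
        ("Write",     {"path": "research-output.md", "content": "# Summary"}),
    ]
    return plan[len(history)] if len(history) < len(plan) else None

def run_agent():
    history = []                            # observations accumulate here
    while True:
        call = model_step(history)          # plan: decide the next tool call
        if call is None:                    # model signals the task is done
            break
        tool, args = call
        result = f"{tool} ok"               # execute: run the tool (stubbed)
        history.append((tool, args, result))  # observe: feed the result back
    return history

steps = run_agent()
```

The loop has no fixed step count; it runs until `model_step` returns nothing, which is why the same structure handles both a 5-step task and a 300-step pipeline.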

## Why This Matters

This is the agent loop in action: plan, execute, observe, repeat. The same
loop that runs a 300-step pipeline also runs this 5-step research task.
The difference is only scale.

Claude Code v2.1.84 added adaptive thinking, which adjusts reasoning depth
automatically. Complex sub-tasks get more thought; simple ones proceed fast.