
feat: make examples cumulative with carry-forward chain and capstone

Add three new sections and one callout across the 14 examples:
- "Carry Forward": what output feeds into later examples (01-10)
- "The Cumulative Path": alternative prompt building on previous output (02-10)
- "Now Try It Yourself": personalized template with transferable pattern (all)
- "Building On" callout connecting back to previous examples (02-10)

Add Example 14: Build Your Personal Agent - capstone that guides reader
through writing their own CLAUDE.md, creating a personal skill, connecting
a messaging channel, setting up automation, and testing end-to-end.

Update README with cumulative path diagram, two usage modes, and example 14.
Update GETTING-STARTED.md with cross-references to relevant examples.

17 files changed, 703+ lines added. The examples now form a coherent
learning path from "see what it can do" to "build your own agent."

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Kjell Tore Guttormsen 2026-03-26 21:14:35 +01:00
commit 0d0b83f98c
17 changed files with 979 additions and 11 deletions


@@ -24,7 +24,7 @@ ecosystem.
## Project structure
```
-examples/ 13 numbered examples, each with a prompt.md
+examples/ 14 numbered examples, each with a prompt.md
security/ Permission modes, Auto Mode, hooks, NemoClaw comparison
memory/ How cross-session memory works
automation/ Cron, launchd, /loop, /schedule
@@ -43,7 +43,9 @@ This file (CLAUDE.md) is for instructions. MEMORY.md is for state.
## Rules for this project
- This is a demo repo. Do not add dependencies or build steps.
- Keep examples self-contained. Each should work independently.
- Every example has a standalone demo prompt AND a cumulative path prompt.
  Both must work. The standalone prompt requires no prior examples.
  The cumulative prompt builds on output from previous examples.
- All output files go in `pipeline-output/` or `briefings/`.
- Do not modify files in `security/` or `hooks/` without reviewing
the security implications.


@@ -10,6 +10,18 @@ about an hour of setup and a week of use.
Here is exactly what to do.
### How this connects to the examples
The `examples/` directory shows what Claude Code can do. This guide
shows how to make it do that for *you*. Each step below references
the relevant examples so you can see the capability in action before
personalizing it.
If you want the full guided path, run examples 01-10 first
(follow the Cumulative Path in each one), then come here to build
your permanent setup. Or jump straight to [Example 14](examples/14-build-your-agent/prompt.md),
which condenses this guide into one hands-on session.
---
## Step 1: Make Claude Code know you (30 minutes)
@@ -59,6 +71,10 @@ your work changes.
| List what is off-limits | Assume Claude will guess |
| Update it weekly | Write it once and forget it |
> **See it in action:** [Example 05](examples/05-memory-system/prompt.md)
> shows how CLAUDE.md drives Claude's behavior. [Example 14](examples/14-build-your-agent/prompt.md)
> walks you through writing your own from scratch.
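To make Step 1 concrete, here is a hypothetical skeleton for a personal CLAUDE.md; the headings are illustrative suggestions, not a required schema:
```
# CLAUDE.md

## Who I am
- Role, current projects, and the tools I use daily

## How to work with me
- Preferred output style, level of detail, and language

## Off-limits
- Files, services, and actions Claude must never touch

## Maintenance
- Reviewed weekly; updated whenever my work changes
```
The do/don't table above maps directly onto these sections: the off-limits list and the weekly review are the two items people most often skip.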
---
## Step 2: Set up your phone channel (10 minutes)
@@ -92,6 +108,11 @@ claude --channels
Send a test message from your phone. If Claude responds,
you are connected.
> **See it in action:** [Example 07](examples/07-messaging/prompt.md)
> demonstrates messaging with Telegram and Slack.
> [Example 12](examples/12-remote-control/prompt.md) covers all three
> remote access methods (Channels, Dispatch, Remote Control).
---
## Step 3: Keep your session alive (5 minutes)
@@ -228,6 +249,11 @@ A skill is worth writing when you:
Start with 2-3 skills. Add more as you notice patterns.
> **See it in action:** [Example 06](examples/06-multi-agent/prompt.md)
> shows the researcher-writer-reviewer agent pattern that makes complex
> skills powerful. The "Now Try It Yourself" section in each example
> helps you adapt demo capabilities into personal skills.
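As a sketch of what Step 4 produces: a skill is a markdown file with instructions Claude follows when the skill is invoked. The file layout and frontmatter fields below are assumptions; check the skill documentation for your Claude Code version:
```
---
name: weekly-briefing
description: Research a given topic and deliver a short summary
---
When invoked with a topic:
1. Search the web for the latest developments on the topic.
2. Draft a 300-word summary with a source URL per claim.
3. Save it to briefings/ with today's date in the filename.
```
Notice the shape: trigger, steps, destination. It is the same pattern the examples teach.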
---
## Step 5: Add your tools via MCP (15 minutes per tool)
@@ -270,6 +296,10 @@ Each MCP server you add is a new capability. Claude
automatically discovers what tools are available and uses
them when relevant.
> **See it in action:** [Example 04](examples/04-browser-automation/prompt.md)
> demonstrates Playwright MCP for browser automation.
> [Example 07](examples/07-messaging/prompt.md) shows Slack MCP in action.
---
## Step 6: Let it learn (ongoing)
@@ -369,4 +399,6 @@ you work.
| Automate a routine task | Step 4 (write a skill) |
| Connect a new service | Step 5 (add MCP server) |
| Try the demo examples | See `examples/` directory |
| Follow the cumulative path | Start at `examples/01-agent-runtime/` |
| Build your personal agent | See `examples/14-build-your-agent/` |
| Understand the OpenClaw comparison | See `feature-map.md` |


@@ -16,7 +16,8 @@ those 22, with 13 full matches and 8 different approaches. One gap
remains: Canvas/A2UI.
This is not a theoretical comparison. Clone this repo, open Claude
-Code, and try each example yourself.
+Code, and try each example yourself. By Example 14, you will have
+built your own personal agent.
## Prerequisites
@@ -34,10 +35,10 @@ No npm install. No Docker. No build step.
**"Show me what it can do"** - Browse `examples/`, read `feature-map.md`,
try the demo prompts. You will understand what Claude Code is capable of.
-**"Help me actually use it"** - Read **[GETTING-STARTED.md](GETTING-STARTED.md)**.
-Six concrete steps that take you from demo to personal daily driver:
-personalize CLAUDE.md, set up phone access, write your own skills,
-and build the setup into how you actually work. About one hour total.
+**"Help me actually use it"** - Follow the **Cumulative Path** through
+examples 01-14. Each example builds on the previous one. By the end,
+you have a working personal agent. Or read **[GETTING-STARTED.md](GETTING-STARTED.md)**
+for the condensed version: six steps, about one hour total.
## Quickstart (demo mode)
@@ -97,14 +98,23 @@ Each example includes an "Expected Output" section so you know what to look for.
| 11 | Computer Use | Control desktop apps | macOS/iOS/Android apps |
| 12 | Remote Access | Channels + Dispatch + /rc (3 ways) | Telegram/WhatsApp control |
| 13 | Auto Mode | AI safety classifier | Autonomous daemon mode |
| 14 | **Build Your Agent** | All capabilities combined | Your personal setup |
Each example has a self-contained prompt you can paste directly
into Claude Code.
-**Recommended path:** Examples 01-09 are independent and work in any order.
-Example 10 combines all of them into a single pipeline. Examples 11-13 require
-additional setup (Desktop app, specific subscription plans) and are documented
-separately.
+**Two ways to use the examples:**
+1. **Independent mode.** Pick any example and run it. Every demo prompt works standalone.
+2. **Cumulative path (recommended).** Follow examples 01-14 in order. Each one
+has a "Cumulative Path" section with an alternative prompt that builds on the
+previous example's output. By example 10, you have a complete automated pipeline.
+By example 14, you have a personal agent configured for your actual work.
+Examples 11-13 require additional setup (Desktop app, specific subscription
+plans) and are documented separately. Example 14 works after any subset of
+01-10.
## The feature map
@@ -113,6 +123,30 @@ comparison table with verdicts and version requirements.
**Summary:** 13 full matches, 8 different approaches, 1 gap.
## The cumulative path
The fastest way to learn Claude Code is to build something real with it.
The examples are designed so each one adds one capability to an
accumulating pipeline:
```
01 Research --> raw data
02 Organize --> structured report
03 Verify --> sourced, fact-checked
04 Browser --> live visual data (optional)
05 Memory --> persistent across sessions
06 Multi-agent --> polished, reviewed output
07 Messaging --> delivered to your phone
08 Automation --> runs on schedule
09 Security --> protected by hooks
10 Full pipeline --> everything combined
14 Your agent --> personalized for your work
```
Each example has a "Carry Forward" section (what your output feeds into
next) and a "Now Try It Yourself" section (how to adapt the pattern for
your own needs). Start at 01 and follow the thread.
## The broader ecosystem
Claude Code is one part of Anthropic's answer to OpenClaw:


@@ -76,3 +76,37 @@ The difference is only scale.
Claude Code v2.1.84 added adaptive thinking, which adjusts reasoning depth
automatically. Complex sub-tasks get more thought; simple ones proceed fast.
---
## Carry Forward
You just built `research-output.md`. Hold onto it.
- **Example 02** organizes raw output like this into a structured project
- **Example 06** sends research through a multi-agent review cycle
- **Example 10** uses this exact capability as step 1 of a full pipeline
Every example builds on the same core loop: Claude plans, executes tools, observes results, and repeats until done. The task changes. The pattern does not.
---
## Now Try It Yourself
The demo researched AI frameworks. Replace the topic with something you actually need:
```
Research the top 3 [your topic] this [time period]. For each, include:
- [what you need to know]
- [specific data points: URLs, numbers, dates]
- Your verdict on [your evaluation criteria]
Write the summary to research-output.md with a ranked Verdict section at the end.
```
**The pattern you just learned:** research question + output structure + file destination. Claude handles everything between the question and the written answer.
Ideas worth trying:
- Competitors in your market and their latest announcements
- Tools for a workflow you want to improve
- Developments in a technology you are evaluating for work


@@ -5,6 +5,8 @@ same task. Shell output becomes input to the next step automatically.
**OpenClaw equivalent:** `exec` tool with PTY support + read/write/edit file tools.
> **Building on Example 01.** You produced `research-output.md` with raw research data. This example shows how Claude organizes files and runs shell commands. In the Cumulative Path below, you turn that raw research into a structured report directory.
---
## The Prompt
@@ -76,3 +78,57 @@ The permission system controls what shell commands are allowed. In the
`settings.json` in this repo, `rm -rf` and a handful of other destructive
patterns are blocked by the `pre-tool-use.sh` hook before execution reaches
the shell. See `examples/09-security-hooks/` for details.
---
## Carry Forward
You now know how to create directories, write files, verify structure, and read back content. Every example from here uses this foundation:
- **Example 03** writes enriched research files with source attribution
- **Example 05** writes memory files that persist across sessions
- **Example 10** writes pipeline output and execution logs
---
## The Cumulative Path
> If you ran Example 01, you have `research-output.md`. This prompt turns
> it into a structured report.
```
Read research-output.md. Create a directory called 'pipeline-output/research-report/'
and organize the research into three files:
1. README.md - topic overview, date researched, number of items found
2. findings.md - the research content reformatted with proper headings,
one section per item
3. sources.md - all URLs and references extracted into a clean list
Then run 'find pipeline-output/research-report/ -type f' to verify all
three files exist. Show me the contents of README.md.
```
After running this, your raw research is organized into a report that agents can work with in later examples.
---
## Now Try It Yourself
Replace the demo scaffold with a project structure you actually need:
```
Create a directory called '[your-project-name]/' with:
1. [file 1 with specific content]
2. [file 2 with specific content]
3. [file 3 with specific content]
Verify the structure and show me [the most important file].
```
**The pattern you just learned:** describe the directory structure + file contents + verification step. Claude creates it all in one pass and confirms.
Ideas worth trying:
- A blog post draft with frontmatter and image placeholders
- A report template with sections matching your company format
- A project kickoff folder with charter, timeline, and stakeholder list
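The scaffold-and-verify pattern the prompt describes can be sketched in plain shell; the directory and file names below are the demo's, and the placeholder contents are hypothetical:
```shell
# Create the structure, populate minimal placeholder files, then verify.
mkdir -p pipeline-output/research-report
printf '# Research Report\n\nTopic overview goes here.\n' \
  > pipeline-output/research-report/README.md
printf '# Findings\n' > pipeline-output/research-report/findings.md
printf '# Sources\n' > pipeline-output/research-report/sources.md

# Verification step, as in the prompt above.
find pipeline-output/research-report -type f | sort
```
Claude performs the equivalent through its file and shell tools. The verification step is the part worth copying: it catches a missing file immediately instead of three examples later.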


@@ -5,6 +5,8 @@ any task. Both tools are built-in; no MCP server required.
**OpenClaw equivalent:** Brave Search API + web_fetch with Firecrawl fallback.
> **Building on Examples 01-02.** You have raw research (01) organized into a report structure (02). This example adds the critical layer: verified, sourced information from the live web. In the Cumulative Path below, you enrich your existing research with fresh data.
---
## The Prompt
@@ -52,3 +54,58 @@ pipeline that drives the `fromaitochitta.com` content workflow.
Claude Code cites sources when writing research outputs. If you need full
traceability, add "include the exact URL next to each claim" to the prompt.
---
## Carry Forward
You now have verified, sourced research. This is the foundation for everything ahead:
- **Example 05** persists this research state to memory for future sessions
- **Example 06** runs multi-agent review on your accumulated findings
- **Example 10** makes web search the first step of every pipeline run
The combination of WebSearch (for discovery) and WebFetch (for depth) is the most reusable pattern in this repo. You will use it in almost every real workflow.
---
## The Cumulative Path
> If you ran Examples 01-02, you have organized research in
> `pipeline-output/research-report/`. This prompt verifies and enriches it.
```
Read pipeline-output/research-report/findings.md. For each item listed:
1. Search the web for the latest data (star counts, release dates, status)
2. Verify the original claims still hold
3. Add a "Verified: [today's date]" or "Changed: [what changed]" line
Update findings.md in place. Add any new source URLs to sources.md.
At the bottom of findings.md, add a "Last verified: [date]" timestamp.
```
After running this, your research report has been fact-checked and timestamped. It is now reliable enough to share.
---
## Now Try It Yourself
Replace the demo topic with a research task from your actual work:
```
Search for the latest information about [your topic]. Summarize the
[number] most important findings with:
- [data point 1]
- [data point 2]
- Source URL for each claim
Write to [your-file].md with a "Researched on [date]" line at the top
and source URLs listed at the bottom.
```
**The pattern you just learned:** search query + output structure + source attribution. Adding "include the exact URL next to each claim" to any prompt makes Claude cite its sources.
Ideas worth trying:
- Latest pricing and features of tools you are comparing
- Regulatory changes affecting your industry
- What competitors announced this quarter


@@ -5,6 +5,8 @@ screenshots, and extract structured data from live pages.
**OpenClaw equivalent:** CDP/Playwright browser automation with screenshot and act commands.
> **Building on Examples 01-03.** You have researched (01), organized (02), and verified (03) data using text-based tools. This example adds visual data capture from live web pages. In the Cumulative Path below, you capture a screenshot of a page referenced in your research.
---
## Prerequisites
@@ -61,3 +63,45 @@ JavaScript-rendered content, login-protected pages, and interactive workflows.
OpenClaw bundles Playwright natively. Claude Code uses the same underlying
engine via MCP. The setup takes two minutes; the capability is identical.
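The two-minute setup usually comes down to one MCP registration. The command form and package name below are the commonly documented ones and may change between versions, so treat them as assumptions:
```
claude mcp add playwright npx @playwright/mcp@latest
```
After registration, Claude discovers the browser tools automatically, the same way it discovers any other MCP server's tools.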
---
## Carry Forward
Browser automation is your fallback for when WebSearch and WebFetch cannot reach the data: JavaScript-rendered pages, login walls, interactive content. Examples 07 (messaging) and 10 (full pipeline) can use browser-captured data as input.
---
## The Cumulative Path
> If you ran Examples 01-03, you have research with source URLs. This prompt
> captures live visual evidence for the report.
```
Read pipeline-output/research-report/findings.md. Pick the top-ranked item
and navigate to its GitHub page (or primary URL). Take a screenshot of the
main page. Save the screenshot and add a line to findings.md:
"Screenshot captured [today's date] - see [filename]"
```
This is optional in the cumulative flow. Not every pipeline needs browser automation. But when it does, this is how you add live visual data to research that was gathered via API.
---
## Now Try It Yourself
Replace the demo target with a page relevant to your work:
```
Navigate to [URL you need data from], take a screenshot, and extract
[the specific data you need] into a structured list.
Save to [your-file].md with a timestamp at the bottom.
```
**The pattern you just learned:** URL + extraction target + output file. Use browser automation when the data lives behind JavaScript rendering, login screens, or interactive elements that WebFetch cannot handle.
Ideas worth trying:
- Your company dashboard or analytics page
- A competitor's product page with pricing
- A government portal with public data tables


@@ -5,6 +5,8 @@ hierarchy of markdown files. What is written in one session is available in the
**OpenClaw equivalent:** Daily markdown logs + MEMORY.md + vector search (SQLite-vec).
> **Building on Examples 01-03.** You have produced research (01), organized it (02), and verified it (03). This example shows how to save that state so your next session picks up where you left off. Without memory, every session starts from scratch.
---
## How the Hierarchy Works
@@ -85,3 +87,62 @@ The markdown approach is more inspectable: you can read, edit, and version control
every piece of memory Claude has about your project.
The `memory/MEMORY.md` file in this repo shows the pattern at scale.
---
## Carry Forward
Memory is what turns a collection of examples into a persistent workflow. From here:
- **Example 06** uses the project context (loaded from CLAUDE.md) to guide agents
- **Example 08** relies on memory to know what cron jobs have run and what to do next
- **Example 10** writes pipeline execution logs to memory after every run
Without this capability, each session would start blind. With it, Claude knows what you did yesterday and what needs attention today.
---
## The Cumulative Path
> If you ran Examples 01-03, you have a research report in
> `pipeline-output/research-report/`. This prompt saves its state to memory.
```
Read this project's CLAUDE.md, then read all files in
pipeline-output/research-report/.
Create memory/research-state.md with:
- Today's date
- What topic was researched
- What files exist and their status (raw, verified, etc.)
- What the next step should be (multi-agent review in Example 06)
Then explain how this file will be available in the next Claude Code
session without re-reading the research files.
```
After running this, your pipeline has persistent state. If you close the session and come back tomorrow, Claude knows exactly where you left off.
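A hypothetical `memory/research-state.md` produced by the prompt above might read (contents are illustrative, not prescribed):
```
# Research state

- Date: 2026-03-26
- Topic: AI frameworks (Examples 01-03)
- Files: pipeline-output/research-report/ - README.md, findings.md, sources.md (verified)
- Next step: multi-agent review (Example 06)
```
Because it is plain markdown referenced from the project's memory hierarchy, the next session can load this state without re-reading the report files.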
---
## Now Try It Yourself
Replace the demo with a memory entry for your actual work:
```
Read CLAUDE.md and summarize the current state of [your project].
Create memory/[your-topic]-state.md with:
- Today's date
- Current status and recent progress
- Open questions or blockers
- Suggested next steps
Explain how this persists across sessions.
```
**The pattern you just learned:** read context + synthesize state + write to memory. Any information you want Claude to remember across sessions goes into a file referenced from CLAUDE.md.
Ideas worth trying:
- Project status notes that update automatically at end of each session
- A running log of decisions made and their rationale
- A list of recurring tasks and when they were last completed


@@ -5,6 +5,8 @@ parallel or in sequence, and combine their outputs into a final result.
**OpenClaw equivalent:** Sub-agents, agent-to-agent messaging, mesh workflows.
> **Building on Examples 01-05.** You have raw research (01), organized structure (02), verified data (03), and persistent state (05). Now three specialized agents turn that raw material into a polished, reviewed document. This is where the pipeline produces something you can actually share.
---
## Prerequisites
@@ -55,3 +57,62 @@ its own working directory, preventing file conflicts in parallel runs.
The researcher-writer-reviewer pattern is the same loop that produces articles
for `fromaitochitta.com`. The agents here are minimal versions of that pipeline.
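The agents themselves are typically small markdown definitions. A hypothetical minimal reviewer, with file location and frontmatter fields assumed rather than confirmed:
```
---
name: reviewer
description: Checks drafts for accuracy, clarity, and completeness
---
You are a careful reviewer. Read the draft you are given, list concrete
issues with file and line references, and approve only when accuracy,
clarity, and completeness all pass.
```
The researcher and writer follow the same shape; only the instructions change.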
---
## Carry Forward
You now have a multi-agent review loop. This is the quality engine:
- **Example 07** delivers the reviewed output to your phone
- **Example 10** runs this exact agent sequence as steps 2-4 of the full pipeline
- **Example 14** shows you how to customize these agents for your own work
The researcher-writer-reviewer pattern is the single most reusable workflow in this repo. Any task that involves gathering information, drafting content, and checking quality follows this shape.
---
## The Cumulative Path
> If you ran Examples 01-05, you have a research report with verified data
> and persistent state. This prompt runs the full agent review cycle on it.
```
Read all files in pipeline-output/research-report/.
Use the researcher agent to verify any claims that have not been
web-verified yet and fill any gaps in the data.
Then use the writer agent to produce a polished 400-word summary
suitable for sharing with a colleague. Clear language, no jargon,
structured with headings.
Finally, use the reviewer agent to check for accuracy, clarity,
and completeness. If the reviewer finds issues, have the writer
fix them before showing me the final version.
Save the final version to pipeline-output/research-report/final-summary.md
```
After running this, your research has been through a professional review cycle. The `final-summary.md` is something you could email to your team.
---
## Now Try It Yourself
Replace the demo topic with something you need researched and reviewed:
```
Use the researcher agent to find information about [your topic].
Then use the writer agent to draft a [length]-word [format] for
[your audience]. Use the reviewer agent to check [what matters
most: accuracy, tone, completeness]. Loop until the reviewer
approves.
```
**The pattern you just learned:** specialized agents + sequential handoff + revision loop. Break any complex task into roles (research, draft, review) and let agents handle each part.
Ideas worth trying:
- Research a vendor and produce a one-page recommendation memo
- Gather competitive intelligence and write an executive briefing
- Compile technical documentation and review it for accuracy


@@ -6,6 +6,8 @@ Native Telegram support arrived in v2.1.80. Other channels use MCP servers.
**OpenClaw equivalent:** 15+ native channels (WhatsApp, Telegram, Discord, Slack,
Signal, iMessage, IRC, Matrix, Teams, and more).
> **Building on Example 06.** You have a polished, reviewed document from the multi-agent cycle. This example shows how to deliver that result to your phone. A pipeline that produces output nobody sees is a pipeline that does not matter.
---
## Architecture Difference
@@ -81,3 +83,56 @@ Both confirm delivery in the terminal output.
If you need 15 channels working out of the box, OpenClaw wins today. Claude Code
has Telegram natively and the rest via MCP. The gap is narrowing with each
release. For most personal automation needs, Telegram is sufficient.
---
## Carry Forward
You now have a delivery channel. Combined with what came before:
- **Example 08** schedules the pipeline so results arrive automatically
- **Example 10** produces pipeline output that you can deliver via messaging
- **Example 12** expands this into full remote access from your phone
Messaging turns Claude Code from "a tool at my desk" into "an assistant I can reach from anywhere."
---
## The Cumulative Path
> If you ran Example 06, you have `pipeline-output/research-report/final-summary.md`.
> This prompt delivers it to your phone.
**Telegram:**
```
Read pipeline-output/research-report/final-summary.md. Send the first
paragraph as a Telegram message with the note: "Full report saved to
pipeline-output/research-report/. Run /read-report to see the rest."
```
**Slack:**
```
Read pipeline-output/research-report/final-summary.md. Post the first
paragraph to #[your-channel] with a thread reply containing the full
summary. Use the /send-slack-message skill.
```
After running this, your pipeline has end-to-end delivery. Research goes from web to your phone in one flow.
---
## Now Try It Yourself
Set up the channel that fits your workflow:
```
Send a [Telegram/Slack/Discord] message to [destination] with:
"[summary of what your pipeline produced]"
```
**The pattern you just learned:** read output + format for channel + deliver. Any pipeline step can send a notification. Typical triggers: pipeline completed, error occurred, daily summary ready.
Ideas worth trying:
- Morning briefing delivered to your phone at 07:00
- Slack notification when a scheduled report finishes
- Error alerts when a cron job fails


@@ -6,6 +6,8 @@ system-level cron jobs.
**OpenClaw equivalent:** HEARTBEAT.md, cron scheduler, webhooks, auto-reply.
> **Building on Examples 01-07.** You have a complete pipeline: research (01-03), memory (05), agent review (06), and delivery (07). This example makes it run without you pressing enter. Automation is what separates a demo from a daily driver.
---
## Three Approaches
@@ -65,3 +67,67 @@ OpenClaw uses HEARTBEAT.md as a persistent loop marker. Claude Code achieves
the same outcome with more granularity: /loop for in-process polling, CronCreate
for system scheduling, /schedule for remote triggering. The `automation/` directory
in this repo provides macOS launchd examples as an alternative to cron.
---
## Carry Forward
With scheduling configured, your pipeline runs itself:
- **Example 09** adds safety guardrails to protect automated runs
- **Example 10** is the pipeline that your cron job triggers
- **Example 12** lets you monitor and control scheduled runs from your phone
Automation is the multiplier. Everything you built in examples 01-07 runs once when you trigger it. With scheduling, it runs every day, every week, or every time a condition is met.
---
## The Cumulative Path
> If you ran Examples 01-07, you have a complete pipeline with delivery.
> This prompt schedules it to run automatically.
```
Create a cron job that runs every Monday at 07:00 to execute:
1. Web research on a topic from memory/research-state.md
2. Multi-agent review via the researcher/writer/reviewer agents
3. Save output to pipeline-output/
4. Send a Telegram notification with the summary
Use automation/daily-briefing.sh as a template for the wrapper script.
Show me the cron entry before creating it so I can verify.
```
After running this, your Monday mornings start with a fresh research report waiting on your phone.
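The cron entry Claude shows for verification is standard five-field crontab syntax; the wrapper path below is a hypothetical placeholder:
```
# min hour day-of-month month day-of-week  command
0 7 * * 1  /path/to/repo/automation/daily-briefing.sh >> /tmp/briefing.log 2>&1
```
`0 7 * * 1` means 07:00 every Monday. Redirecting output to a log file is what lets you debug an unattended run after the fact.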
---
## Now Try It Yourself
Think about what you would automate if it cost zero effort:
**Option 1: /loop (for testing)**
```
/loop interval=300
Every 5 minutes, check if [your trigger condition]. If true,
[execute your pipeline]. Write results to [your output location].
```
**Option 2: CronCreate (for production)**
```
Create a cron job that runs [your task] every [schedule].
Use [wrapper script] and show me the cron entry first.
```
**Option 3: /schedule (for remote)**
```
/schedule "[your task description]" at [ISO datetime]
```
**The pattern you just learned:** trigger condition + pipeline steps + output destination. The three approaches cover different needs: /loop for development, CronCreate for system-level, /schedule for remote.
Ideas worth trying:
- Weekly competitor monitoring report every Monday
- Daily inbox summary from your email MCP at 08:00
- Hourly check for new entries in a data source you monitor


@@ -5,6 +5,8 @@ PreToolUse hooks can block dangerous operations. PostToolUse hooks create audit
**OpenClaw equivalent:** Docker sandbox, exec approvals, tool deny lists, allowlists.
> **Building on Example 08.** You have an automated pipeline that runs on a schedule. But automation without safety guardrails is a liability. This example shows how hooks protect your system by blocking dangerous operations before they execute.
---
## How the Hooks Work
@@ -90,3 +92,54 @@ hooks: PreToolUse intercepts at the call level, before any syscall happens.
For personal use, hooks are more flexible. You write exactly the rules you need.
For untrusted third-party agents, Docker isolation is stronger. See
`security/nemoclaw-comparison.md` for a full breakdown.
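The core of a blocking hook reduces to a pattern check plus a non-zero exit. This is a simplified standalone sketch of that idea, not the actual `hooks/pre-tool-use.sh` in this repo; the real hook's input format and exit-code contract may differ:
```shell
# Return 0 to allow a command, 2 to block it.
check_command() {
  case "$1" in
    *'rm -rf'*|*'mkfs'*)
      echo "Blocked destructive pattern: $1" >&2
      return 2 ;;
  esac
  return 0
}

check_command 'find pipeline-output/ -type f' && echo 'allowed'        # -> allowed
check_command 'rm -rf pipeline-output/' 2>/dev/null || echo 'blocked'  # -> blocked
```
The real hook applies the same decision to every tool call automatically, which is why the audit log fills up without any action on your part.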
---
## Carry Forward
Security hooks run silently on every tool call, including in automated pipelines:
- **Example 10** runs the full pipeline with hooks active on every step
- **Example 13** (Auto Mode) adds an AI classifier on top of hooks for layered safety
The hooks in this repo are starter examples. Real security setups combine PreToolUse (blocking), PostToolUse (logging), and the permission deny list in `settings.json` for defense in depth.
---
## The Cumulative Path
> If you ran Examples 01-08, you have an automated research pipeline.
> This prompt proves that your accumulated work is protected.
```
Try running: rm -rf pipeline-output/research-report/
Before running it, explain what the PreToolUse hook will do to protect
the research report I have been building through examples 01-08.
After the attempt, check hooks/audit.log and show me the last 5 entries.
Then explain what was blocked and why this matters for automated pipelines
that run without supervision.
```
This is a satisfying test. You see the system protect work you built across nine examples.
---
## Now Try It Yourself
Think about what commands your automated pipeline should never run:
```
Try running: [a command that should be blocked in your context]
Before running it, explain what you expect the PreToolUse hook to do.
After the attempt, check hooks/audit.log and explain what was blocked.
```
**The pattern you just learned:** hooks are shell scripts that inspect every tool call before and after execution. Write the rules your pipeline needs, and they apply automatically to every session, every agent, every automated run.
Ideas worth trying:
- Block commands that write to production directories
- Log every file write for audit compliance
- Block network requests to domains outside an allowlist


@@ -5,6 +5,8 @@ file I/O, memory, hooks, and logging in a single Claude Code run.
**OpenClaw equivalent:** End-to-end agent pipeline with messaging, skills, and hooks.
> **Building on Examples 01-09.** This is the culmination. Every capability you explored individually now runs as a single, automated pipeline. If you followed the Cumulative Path, you already built the pieces. This example connects them.
---
## What This Demonstrates
@@ -109,3 +111,64 @@ runs the article production pipeline at `fromaitochitta.com`.
The companion repo you are reading is the minimal version of that setup.
Clone it, open Claude Code, and run this prompt to see the full stack work.
---
## Carry Forward
You now have a working end-to-end pipeline. From here:
- **Examples 11-13** add advanced capabilities (desktop control, remote access, autonomous mode) that can extend this pipeline
- **Example 14** takes everything you learned and builds your own personalized agent
The pipeline pattern (research + draft + review + save + notify) works for any domain. Change the topic, swap the agents, adjust the output format. The architecture stays the same.
---
## The Cumulative Path
> If you followed the Cumulative Path through Examples 01-09, you already
> have most of the pipeline output. This prompt runs the complete flow from
> scratch on a new topic, proving the pipeline works end-to-end.
```
Run a full research-to-output pipeline on the topic: "What can Claude Code
do that I did not know about before running these examples?"
Pipeline steps:
1. Read memory/research-state.md to understand what has been explored
2. Use the researcher agent to search for capabilities not yet covered
3. Use the writer agent to draft a 400-word personal discovery summary
4. Use the reviewer agent to verify all claims
5. Save to pipeline-output/personal-discoveries.md
6. Append execution log to memory/pipeline-log.md
7. Show me the first 10 lines of the output file
```
This is a genuinely useful output: a personalized summary of what you learned, produced by the pipeline you just built.
---
## Now Try It Yourself
Replace the topic with something your pipeline should produce regularly:
```
Run a full research-to-output pipeline on: "[your recurring research topic]"
Pipeline steps:
1. [your research source: web, files, APIs]
2. Use the researcher agent to [gather what you need]
3. Use the writer agent to draft a [format] for [audience]
4. Use the reviewer agent to check [what matters most]
5. Save to pipeline-output/[your-output].md
6. Log to memory/pipeline-log.md
7. Show me the first 10 lines
```
**The pattern you just learned:** the full pipeline is a recipe with interchangeable ingredients. Swap the research topic, change the output format, adjust the audience. The agent orchestration, file I/O, memory, and logging stay the same.
Ideas worth trying:
- Weekly industry briefing for your team
- Automated due diligence report for vendors or partners
- Content pipeline that drafts, reviews, and delivers blog posts

View file

@ -52,3 +52,23 @@ works with any application, not just browsers.
For browser-only automation, Playwright MCP (example 04) is
faster and more reliable. Computer Use shines when you need to
interact with native desktop apps that have no API or CLI.
---
## Now Try It Yourself
Think about a desktop task you repeat that involves no API or CLI:
```
Open [application], navigate to [where], and [do what].
Take a screenshot before and after. Save the result as
[filename] on the Desktop.
```
**The pattern you just learned:** application + navigation + action + capture. Computer Use works with anything visible on screen. The golden rule: if you can do it by clicking, Claude can do it too.
Ideas worth trying:
- Open your email client, screenshot the inbox, and list unread subjects
- Fill out a form in a desktop app with data from a file
- Take a screenshot of a dashboard and summarize the metrics
- Open a PDF, extract key sections, and save them as markdown

View file

@ -113,3 +113,31 @@ running. Workarounds: tmux, Mac Mini, VPS.
For most "text my agent from my phone" use cases, Channels
or Dispatch gets the job done. The gap is the always-on
daemon, not the phone access itself.
---
## Now Try It Yourself
Pick the method that fits how you work:
**If you want "text my agent" (Channels):**
```
[From your phone via iMessage/Telegram/Discord]
Run /weekly-status and send me the summary.
```
**If you want "fire and forget" (Dispatch):**
Open Claude Desktop > Cowork > Dispatch. Send a task from your phone.
**If you want "take the wheel" (Remote Control):**
```
/rc
```
Scan the QR code from your phone. You now have full control.
**The pattern you just learned:** three access methods for three use cases. Channels for reactive messaging, Dispatch for delegated tasks, Remote Control for interactive steering. Most people start with one and add others as needed.
Ideas worth trying:
- Text "morning briefing" from bed and get a summary before coffee
- Dispatch "prepare the meeting notes for today" while commuting
- Use /rc to guide Claude through a debugging session from your couch

View file

@ -81,3 +81,29 @@ Different philosophy:
Both have trade-offs. Sandboxes catch unknown threats. Classifiers
prevent the action from happening at all but may miss novel attacks
(5.7% false negative rate).
---
## Now Try It Yourself
Auto Mode works best for well-defined, bounded tasks. Try it on something safe first:
```
[With Auto Mode enabled]
Read all markdown files in this project. Create a table of contents
in pipeline-output/index.md listing every file with a one-line
description. Verify the file count matches.
```
**The pattern you just learned:** Auto Mode removes the permission prompts. Use it when you trust the task scope and want Claude to work without interruption. Start with read-heavy, write-light tasks and expand as you build confidence.
When to use Auto Mode:
- Research tasks where you trust the search and write scope
- File organization within a known project
- Running a tested pipeline end-to-end without babysitting
- Batch processing files with a predictable pattern
When NOT to use Auto Mode:
- First time running an untested pipeline
- Tasks that touch production systems or external APIs
- Anything involving credentials, payments, or irreversible actions

View file

@ -0,0 +1,276 @@
# Example 14: Build Your Personal Agent
This is not a demo. This is the example where you build something real.
Everything you explored in examples 01-13 demonstrated capabilities. This
example puts them together into a personal agent that works for you
specifically. Not a copy of the demo. Not a tutorial exercise. A setup
you will actually use tomorrow morning.
**Time needed:** 45-60 minutes for the core setup. You will refine it
over the first week of use.
---
## What you will build
By the end of this example, you will have:
1. A `CLAUDE.md` written for your life and work (not the demo one)
2. A personal skill that automates something you do every week
3. A messaging channel connected to your phone
4. A scheduled automation that runs without you
5. A tested end-to-end flow: phone to agent to result to phone
This is the setup that makes people text their agent from bed and
find the answer waiting when they sit down with coffee.
---
## Step 1: Write your CLAUDE.md (15 minutes)
Close this repo's `CLAUDE.md` in your mind. Start fresh. Open a new
project directory (or use an existing one) and create `CLAUDE.md`.
Write it like a briefing for a brilliant new colleague on their
first day. Include:
```markdown
# [Your Project or Life Context]
## Who I am
[Your role, what you do day to day, what you care about]
## How I work
[Communication preferences, formats you like, what annoys you]
[Example: "Be direct. Skip caveats. Bullet points over paragraphs."]
## What I am working on right now
[Your top 3-5 priorities with deadlines if they exist]
## What Claude should never do
[Hard boundaries. Things that would break trust.]
[Example: "Never send anything externally without my explicit OK"]
## Tools and accounts
[What MCP servers are configured, what services you use]
[Example: "Slack MCP connected to workspace X. Telegram channel active."]
```
### How to know it is good enough
Read it back and ask: if a capable person read only this file, could
they handle my Monday morning? If the answer is "mostly, yes," it is
good enough. You will improve it every week.
**Pattern from Example 05:** This file is loaded at every session start.
Everything you write here shapes every interaction.
---
## Step 2: Write your first real skill (15 minutes)
Not the demo skill. Not a copy. A skill that solves a problem you
have this week.
Think about what you do repeatedly that follows a pattern:
- A report you write every Monday
- Research you do before every meeting
- A summary you prepare for your manager
- A check-in you run on a project
Create `.claude/skills/[your-skill-name].md`:
```markdown
---
name: [your-skill-name]
description: [one sentence that explains when to use this]
---
# [What This Does]
[Clear instructions for Claude. Be specific about:]
## Steps
1. [Where to get the input: files, web, memory]
2. [What to do with it: research, draft, analyze, compare]
3. [How to format the output: bullets, paragraphs, table]
4. [Where to save it: file path, message channel, both]
## Quality criteria
- [What makes this output good vs. mediocre]
- [What to never include or assume]
## Output format
[Exact structure you want every time]
```
### How to know the skill works
Run it: `/[your-skill-name]`
Does the output match what you would have produced manually? If yes,
you just automated a recurring task. If not, refine the instructions
and run it again. Most skills take 2-3 iterations to get right.
**Pattern from Example 06:** If the skill involves research and writing,
consider using the researcher/writer/reviewer agent pattern inside it.
Multi-agent review catches errors that a single pass misses.
---
## Step 3: Connect your phone (10 minutes)
Pick one channel. You can always add more later.
**Telegram** (works on any phone, recommended for first setup):
```bash
claude --channels
```
Follow the Telegram setup in `messaging/telegram-channels-setup.md`.
**iMessage** (if you live in Apple's ecosystem):
```bash
/install @anthropic-ai/claude-code-imessage
claude --channels
```
Follow `messaging/imessage-setup.md`.
### Test it
From your phone, send: "Run /[your-skill-name]"
If the result comes back to your phone, your agent is connected.
**Pattern from Example 07:** Channels turn a desktop tool into
a personal assistant you can reach from anywhere.
---
## Step 4: Automate it (10 minutes)
Your skill works. Your phone is connected. Now make it run without you.
**For a daily task:**
```
Create a cron job that runs /[your-skill-name] every [weekday] at [time].
Use automation/daily-briefing.sh as a template. Show me the cron entry
before creating it.
```
**For a weekly task:**
```
/schedule "Run /[your-skill-name] and send me the result via Telegram"
at [next Monday]T07:00:00
```
### Test it
Create a `/loop` test first to verify the flow works before committing
to a cron job:
```
/loop interval=120
Run /[your-skill-name]. Send the result to [your channel].
Then wait for the next interval.
```
If the output arrives on your phone every 2 minutes, the automation works.
Replace with a real schedule.
**Pattern from Example 08:** /loop for testing, CronCreate for daily
drivers, /schedule for remote triggers.
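Once the `/loop` test passes, the daily task above boils down to an ordinary crontab entry. A minimal sketch, assuming `claude -p` runs a single prompt non-interactively; the project path, skill name, and log file are placeholders to adapt:

```bash
# Hypothetical crontab line: run the skill every weekday at 07:00
# and append output to a log for later review.
0 7 * * 1-5 cd "$HOME/my-agent" && claude -p "Run /weekly-status" >> "$HOME/my-agent/logs/cron.log" 2>&1
```

The five leading fields are minute, hour, day of month, month, and day of week; `1-5` restricts the job to Monday through Friday.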
---
## Step 5: Test the full flow (5 minutes)
The real test. Put your phone down. Walk away from the computer.
From your phone, send:
```
Run /[your-skill-name] and tell me when it is done.
```
Wait.
When the result arrives, you have a working personal agent. Not a demo.
Not an exercise. A system that does real work on your behalf, triggered
from your phone, using your context, delivering to where you need it.
---
## What you have now
| Component | What it does | File |
|-----------|-------------|------|
| CLAUDE.md | Your context, always loaded | `CLAUDE.md` |
| Skill | Your recurring task, automated | `.claude/skills/[name].md` |
| Channel | Your phone connection | Telegram/iMessage/Discord |
| Schedule | Your automation trigger | cron or /schedule |
| Memory | Your persistent state | `memory/MEMORY.md` |
| Hooks | Your safety guardrails | `hooks/` |
This is the same architecture, at a smaller scale, that runs
production content pipelines. The pieces are the same. The
difference is what you point them at.
---
## What to do in your first week
**Day 1-2:** Use the skill manually a few times. Notice what the output
gets wrong or could improve. Edit the skill file.
**Day 3-4:** Add a second skill for something else you do often. Start
texting tasks from your phone as a habit.
**Day 5-7:** Check your CLAUDE.md. Is it still accurate? Add what you
learned this week. Remove what turned out to be irrelevant.
**After one week:** You will know whether this is a novelty or a genuine
tool. Most people who get this far keep going. Those who stop usually got
no further than the CLAUDE.md and never made it personal enough to be useful.
---
## Growing from here
Your agent gets more useful the more you invest in it:
**Add more skills.** Every task you do more than twice a week is a
candidate. A good personal setup has 3-5 skills after a month.
**Add more tools.** MCP servers connect Claude to your services.
Slack, Google Drive, calendar, databases. Each one extends what
your agent can do autonomously.
**Add more agents.** The `.claude/agents/` directory can hold
specialists for your domain. A "compliance checker," a "meeting prep"
agent, a "customer research" agent. Pattern: give each agent a role,
a scope, and clear instructions.
**Tune the security.** As you automate more, tighten the hooks. Add
patterns to the deny list. Review the audit log weekly. The more
autonomous your agent is, the more important the guardrails.
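The deny list mentioned above lives in the project settings. A minimal sketch, assuming Claude Code's `.claude/settings.json` permissions format; the rule patterns here are placeholders, and the exact wildcard syntax should be verified against the current Claude Code settings documentation:

```json
{
  "permissions": {
    "deny": [
      "Bash(rm -rf:*)",
      "Bash(curl:*)",
      "Edit(./prod/**)"
    ]
  }
}
```

Start strict and loosen rules as the audit log shows what your agent actually needs.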
---
## Honest assessment
This setup will not replace your judgment, your relationships, or
your taste. It replaces the scaffolding: the research, the formatting,
the status updates, the routine decisions, the cognitive overhead of
remembering where you left off.
The people who get the most from it are the ones who are specific about
what they need. A vague CLAUDE.md produces vague results. A precise one
produces surprisingly useful results from day one.
The time investment is real: one hour to set up, five minutes a week
to maintain. The return depends entirely on how well you describe your
work and how consistently you use it.
Start with one skill. Make it genuinely useful. Everything else follows.