settings.json: 16 scoped Bash grants (was 6 wildcards) and a 26-pattern deny list (was 5).
CVE mapping: all 9 OpenClaw CVEs mapped to specific defenses with layer documentation.
Scan results: posture scan Grade D (expected without the llm-security plugin); deep scan found 0 critical/high.
Hooks README: Option A — document llm-security hooks, recommend plugin installation.
README: evidence-based security section with scan data and verification instructions.
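The scoped grants and deny list above can be sketched as a settings.json fragment. The patterns here are illustrative stand-ins, not the repo's actual 16-grant/26-pattern lists; the `permissions.allow`/`permissions.deny` keys follow Claude Code's settings schema:

```json
{
  "permissions": {
    "allow": [
      "Bash(git status:*)",
      "Bash(git diff:*)",
      "Bash(npm test:*)"
    ],
    "deny": [
      "Bash(curl:*)",
      "Bash(rm -rf:*)",
      "Read(./.env)"
    ]
  }
}
```

Scoping each grant to a command prefix (rather than `Bash(*)` wildcards) is what turns 6 broad grants into 16 narrow ones.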
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Maps the gap between the security assessment article and the actual
repo configuration. Six tasks to make this repo demonstrable proof
that Claude Code handles OpenClaw's security challenges.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Data-driven comparison covering 9 CVEs, 10 security categories,
and attack surface analysis. Based on published research from
SecurityScorecard, DigitalOcean, Sangfor, and OpenClaw official docs.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Skills, agents, and hooks should be created via the plugin-dev plugin,
not by generic "tell Claude" prompts or manual file writing.
Example 14: /agent-development for agents, /skill-development for
pipeline skills, /hook-development for hooks, /skill-reviewer for
quality checks, /plugin-validator for setup validation. plugin-dev
added as a prerequisite.
GETTING-STARTED.md: /skill-development for skill creation,
/skill-reviewer for iteration, /mcp-integration for MCP servers.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Example 14 completely rewritten. Was: a repeat of GETTING-STARTED.md
(one skill + phone + cron). Now: a 7-phase system design that
produces a complete personal agent ecosystem (custom agents,
multi-agent pipeline, custom hooks, automation, phone delivery).
Requires accumulated knowledge from examples 01-13. Includes:
- Phase 1: Map your work (design before building)
- Phase 3: Custom agent team created via Claude (not manually)
- Phase 4: Pipeline skill chaining agents into complete workflow
- Phase 5: Custom security hooks for user's context
- Phase 7: Test on real work with evaluation rubric
- Three concrete persona examples (marketing, engineering, consulting)
GETTING-STARTED.md Step 4: replaced manual file creation with
"tell Claude to create the skill" workflow. Skills, agents, and
hooks should always be created by asking Claude, not by hand.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add three new sections and a callout across the 14 examples:
- "Carry Forward": what output feeds into later examples (01-10)
- "The Cumulative Path": alternative prompt building on previous output (02-10)
- "Now Try It Yourself": personalized template with transferable pattern (all)
- "Building On" callout connecting back to previous examples (02-10)
Add Example 14: Build Your Personal Agent - capstone that guides reader
through writing their own CLAUDE.md, creating a personal skill, connecting
a messaging channel, setting up automation, and testing end-to-end.
Update README with cumulative path diagram, two usage modes, and example 14.
Update GETTING-STARTED.md with cross-references to relevant examples.
17 files changed, 703+ lines added. The examples now form a coherent
learning path from "see what it can do" to "build your own agent."
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Address findings from a pedagogical review simulating a non-expert user:
- Add CLAUDE.md to project root (was referenced but missing)
- Fix README score from 12/9/1 to 13/8/1 (match feature-map.md)
- Add Expected Output sections to examples 01, 02, 05, 09, 10
- Create pipeline-output/ and briefings/ directories
- Add example ordering guidance in README
- Add plan requirements for examples 11/13 in prerequisites
- Add skill frontmatter explanation in GETTING-STARTED.md
- Explain Cowork/Dispatch with links in cowork-integration
- Expand .gitignore with node_modules and generated output files
- Add model override hints in agent frontmatter comments
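The skill-frontmatter and model-override items above can be sketched together. Names, descriptions, and the `model: opus` value are hypothetical; the field names follow Claude Code's skill and agent frontmatter conventions:

```yaml
# SKILL.md frontmatter (skill name and description are hypothetical)
---
name: weekly-briefing
description: Summarize the week's commits into a Monday briefing.
---

# .claude/agents/code-reviewer.md frontmatter with a model override hint
---
name: code-reviewer
description: Reviews diffs for style and security issues.
# model: opus   # uncomment to pin this agent to a specific model
---
```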
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actual table verdicts: 13 full match (59%), 8 different (36%), 1 gap (5%).
The summary table had been wrong since the initial commit.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Practical 6-step guide that takes users from "clone and explore"
to "personal AI assistant I text from my phone":
1. Personalize CLAUDE.md (30 min)
2. Set up a phone channel: iMessage, Telegram, or Discord (10 min)
3. Keep session alive with tmux (5 min)
4. Write your first personal skills, with 3 real examples (15 min)
5. Connect external tools via MCP (15 min per tool)
6. Let it learn over time (ongoing)
Includes: "what your day looks like after setup" scenarios,
honest expectations, and quick reference table.
README updated with two-path entry: demo mode vs daily driver.
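Step 3's tmux persistence can be sketched as follows. The session name and CLI invocation are assumptions, and the commands are guarded so the script no-ops where tmux is absent:

```shell
SESSION=claude-daily   # hypothetical session name

if command -v tmux >/dev/null 2>&1; then
  # Create the session detached if it does not already exist,
  # then launch the assistant CLI inside it.
  tmux new-session -d -s "$SESSION" 2>/dev/null || true
  tmux send-keys -t "$SESSION" 'claude' C-m 2>/dev/null || true
fi
# Reattach later from any terminal:  tmux attach -t claude-daily
```

Running the CLI inside a detached tmux session is what lets the phone channel keep reaching it after the local terminal closes.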
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- messaging/channels-setup.md: comprehensive guide covering all 3
channels (Telegram, Discord, iMessage), bun install, plugin setup,
session persistence workarounds, and enterprise admin settings
- messaging/imessage-setup.md: macOS-specific iMessage setup with
Full Disk Access, /imessage access allow, and known quirks
- messaging/README.md: rewritten with 3-way distinction table
(Channels=event-based, Dispatch=message-based, RC=direct control)
- examples/12: expanded from 2 options to 3 with clear trigger model
- feature-map.md: row 9 updated to include iMessage
- README.md: session persistence warning added as top-level section
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Major update based on Anthropic's March 24, 2026 releases:
- feature-map.md: expanded from 20 to 22 capabilities, gaps reduced
from 2 to 1 (only Canvas/A2UI remains)
- examples/11-computer-use: desktop control via screenshots and clicks
- examples/12-remote-control: /rc and Dispatch for phone control
- examples/13-auto-mode: AI safety classifier for autonomous execution
- cowork-integration/: how Code + Cowork + Dispatch together replicate
OpenClaw's full feature set
- security/auto-mode-explained.md: deep-dive on the new permission mode
- Updated README with broader ecosystem table and revised scores
Score: 12 full match (55%), 9 different approach (41%), 1 gap (4%)
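Example 13's auto-mode gate is an AI safety classifier; a minimal rule-based stand-in illustrates the decision contract it implements. The function name and patterns below are hypothetical, not the actual classifier, and a real hook would receive the tool call as JSON rather than a bare string:

```python
import re

# Hypothetical deny patterns; the real gate is model-based, not regex-based.
DENY_PATTERNS = [
    r"\brm\s+-rf\s+/",       # recursive delete near the filesystem root
    r"\bcurl\b.*\|\s*sh\b",  # pipe-to-shell installs
    r"\bchmod\s+777\b",      # world-writable permissions
]

def allow_autonomous(command: str) -> bool:
    """Return False when a Bash command matches a risky pattern."""
    return not any(re.search(p, command) for p in DENY_PATTERNS)

# Example decisions:
# allow_autonomous("git status")                      -> True
# allow_autonomous("curl http://x.example/a.sh | sh") -> False
```

The contract is the interesting part: auto mode only executes a command when the classifier returns an allow decision; everything else falls back to asking the user.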
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>