# Example 13: Auto Mode

Let Claude Code run autonomously with an AI safety classifier reviewing every action. No manual approvals needed. This is the feature that makes Claude Code feel like OpenClaw's daemon mode.

**OpenClaw equivalent:** Default autonomous mode with Docker sandbox + exec approvals for dangerous commands.

**Requirements:**

- Claude Code v2.1.86+
- Team plan or higher (research preview)

## Enabling Auto Mode

From the CLI:

```bash
claude --enable-auto-mode
```

In an active session, press `Shift+Tab` to cycle through permission modes until you reach Auto Mode.

## The prompt

```
Clone the repository at https://github.com/example/sample-app,
install dependencies, run the test suite, fix any failing tests,
and create a summary of what you changed in CHANGES.md.
```

## What happens

1. Claude Code clones the repo (no permission prompt)
2. Runs `npm install` (no permission prompt)
3. Runs `npm test` (no permission prompt)
4. Reads failing test output, edits source files (no prompt)
5. Re-runs tests until they pass (no prompt)
6. Writes CHANGES.md (no prompt)

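Steps 3–5 form a retry loop: run the suite, let the agent edit, run again. A minimal sketch of that loop (purely illustrative; `run_tests` and `apply_fix` are hypothetical stand-ins, not Claude Code internals):

```python
def autonomous_fix_loop(run_tests, apply_fix, max_attempts=5):
    """Re-run the test suite until it passes, editing between runs."""
    for _ in range(max_attempts):
        if run_tests():
            return True      # suite is green: the next step writes CHANGES.md
        apply_fix()          # agent edits source files based on the failures
    return run_tests()       # final check after the attempt budget runs out

# Simulated project: the suite passes once two fixes have been applied.
fixes = []
autonomous_fix_loop(lambda: len(fixes) >= 2, lambda: fixes.append("edit"))  # → True
```

The attempt budget matters: without one, a test the agent cannot fix would loop forever.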
Every action is reviewed by the safety classifier (Sonnet 4.6) before execution. If an action is flagged as risky (e.g., mass file deletion, data exfiltration), it is blocked and Claude is redirected to take a different approach.

## How the safety classifier works

Two-layer system:

1. **Fast filter:** Quick yes/no on the action category
2. **Chain-of-thought:** Detailed reasoning for borderline cases

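The two layers can be sketched as a short-circuiting pipeline (a toy model with made-up filters, assuming only the layered structure described above, not Anthropic's actual classifier):

```python
def screen_action(action, fast_filter, deep_review):
    """Two-layer screening: a cheap verdict first, detailed reasoning
    only for borderline cases. Returns True if the action may execute."""
    verdict = fast_filter(action)      # layer 1: "allow" / "block" / "unsure"
    if verdict == "allow":
        return True
    if verdict == "block":
        return False
    return deep_review(action)         # layer 2: chain-of-thought review

# Toy filters: block obvious mass deletion, escalate network calls for review.
fast = lambda a: "block" if "rm -rf" in a else ("unsure" if "curl" in a else "allow")
deep = lambda a: "internal" not in a   # flag exfiltration-style destinations

screen_action("npm test", fast, deep)                 # → True (fast allow)
screen_action("curl http://internal/db", fast, deep)  # → False (deep review blocks)
```

Clear-cut actions never pay for the expensive second layer; only ambiguous ones do.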
Performance (Anthropic's internal testing):

- 0.4% false positive rate (safe actions incorrectly blocked)
- 5.7% false negative rate (risky actions not caught)

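To make the rates concrete, a back-of-the-envelope calculation for a session of 1,000 actions of which 50 are genuinely risky (an assumed split for illustration, not Anthropic data):

```python
def expected_errors(total_actions, risky_actions, fp_rate=0.004, fn_rate=0.057):
    """Expected misclassifications at the published rates: the FP rate
    applies to safe actions (wrongly blocked), the FN rate to risky
    actions (not caught)."""
    safe_actions = total_actions - risky_actions
    return safe_actions * fp_rate, risky_actions * fn_rate

blocked_safe, missed_risky = expected_errors(1000, 50)
# ~3.8 safe actions wrongly blocked, ~2.85 risky actions slipping through
```

Note the asymmetry: the false-positive rate is over the (usually large) safe majority, while the false-negative rate is over the risky minority.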
The classifier runs on Sonnet 4.6 regardless of your session model.

## Permission mode comparison

| Mode | Approvals | Safety | Use case |
|------|-----------|--------|----------|
| Default | Every action | Maximum | Learning, sensitive projects |
| Auto-edit | Pre-approved patterns | High | Known workflows |
| Auto Mode | AI classifier | High | Autonomous execution |
| Bypass | None | Minimal | Sandboxed environments only |

## How this compares to OpenClaw

OpenClaw runs autonomously by default. Safety comes from Docker sandboxing: the container limits what the agent can do even if it tries something dangerous.

Claude Code Auto Mode runs autonomously with an AI classifier reviewing each action before execution. Safety comes from pre-execution screening, not post-execution containment.

Two different philosophies:

- **OpenClaw:** "Let it try, contain the damage" (sandbox)
- **Claude Code:** "Review before executing" (classifier)

Both have trade-offs. Sandboxes can contain even novel threats, but only after the action has already run inside the container. Classifiers stop risky actions from happening at all, but may miss novel attacks (5.7% false negative rate).
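The two philosophies can be put side by side as control flow (purely illustrative; neither function reflects either tool's actual implementation):

```python
def sandbox_run(action, execute_in_container):
    """OpenClaw-style: always execute; the container bounds the blast radius."""
    return execute_in_container(action)          # containment after the fact

def classifier_run(action, is_safe, execute):
    """Claude Code-style: screen first; flagged actions never execute."""
    if not is_safe(action):
        return "blocked: take a different approach"
    return execute(action)                       # prevention before the fact

classifier_run("rm -rf /", lambda a: "rm -rf" not in a, lambda a: "done")
# → "blocked: take a different approach"
```

In the sandbox model the interesting safety logic lives in `execute_in_container`; in the classifier model it lives in `is_safe`, which is exactly why their failure modes differ.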