# Example 13: Auto Mode

Let Claude Code run autonomously with an AI safety classifier reviewing every action. No manual approvals needed. This is the feature that makes Claude Code feel like OpenClaw's daemon mode.

**OpenClaw equivalent:** Default autonomous mode with Docker sandbox + exec approvals for dangerous commands.

**Requirements:**

- Claude Code v2.1.86+
- Team plan or higher (research preview)

## Enabling Auto Mode

From the CLI:

```bash
claude --enable-auto-mode
```

In an active session, press `Shift+Tab` to cycle through permission modes until you reach Auto Mode.

## The prompt

```
Clone the repository at https://github.com/example/sample-app, install dependencies, run the test suite, fix any failing tests, and create a summary of what you changed in CHANGES.md.
```

## What happens

1. Claude Code clones the repo (no permission prompt)
2. Runs `npm install` (no permission prompt)
3. Runs `npm test` (no permission prompt)
4. Reads failing test output and edits source files (no prompt)
5. Re-runs tests until they pass (no prompt)
6. Writes CHANGES.md (no prompt)

Every action is reviewed by the safety classifier (Sonnet 4.6) before execution. If an action is flagged as risky (e.g., mass file deletion or data exfiltration), it is blocked and Claude is redirected to take a different approach.

## How the safety classifier works

The classifier is a two-layer system:

1. **Fast filter:** a quick yes/no on the action category
2. **Chain-of-thought review:** detailed reasoning for borderline cases

Performance (Anthropic's internal testing):

- 0.4% false positive rate (safe actions incorrectly blocked)
- 5.7% false negative rate (risky actions not caught)

The classifier runs on Sonnet 4.6 regardless of your session model.
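The two-layer flow can be sketched roughly as follows. This is a minimal illustration, not Claude Code's actual implementation: the real layers are model calls, and the names `fast_filter`, `chain_of_thought_review`, and the keyword rules are all invented for the example.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    UNCERTAIN = "uncertain"

def fast_filter(action: str) -> Verdict:
    # Layer 1: a cheap yes/no on the action category.
    # Keyword matching here is a stand-in; the real filter is a model call.
    if action.startswith(("read ", "list ")):
        return Verdict.ALLOW
    if "rm -rf /" in action:
        return Verdict.BLOCK
    return Verdict.UNCERTAIN

def chain_of_thought_review(action: str) -> Verdict:
    # Layer 2: detailed reasoning, invoked only for borderline cases.
    # Stubbed here; in practice this is a longer, more expensive model call.
    return Verdict.BLOCK if "delete" in action else Verdict.ALLOW

def review(action: str) -> Verdict:
    verdict = fast_filter(action)
    if verdict is Verdict.UNCERTAIN:
        verdict = chain_of_thought_review(action)
    return verdict
```

The design point is the cost split: the fast filter handles the clear-cut majority of actions cheaply, and only ambiguous actions pay for the detailed second pass.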
## Permission mode comparison

| Mode      | Approvals             | Safety  | Use case                     |
|-----------|-----------------------|---------|------------------------------|
| Default   | Every action          | Maximum | Learning, sensitive projects |
| Auto-edit | Pre-approved patterns | High    | Known workflows              |
| Auto Mode | AI classifier         | High    | Autonomous execution         |
| Bypass    | None                  | Minimal | Sandboxed environments only  |

## How this compares to OpenClaw

OpenClaw runs autonomously by default. Safety comes from Docker sandboxing: the container limits what the agent can do even if it attempts something dangerous.

Claude Code Auto Mode runs autonomously with an AI classifier reviewing each action before execution. Safety comes from pre-execution screening, not post-execution containment.

The philosophies differ:

- **OpenClaw:** "Let it try, contain the damage" (sandbox)
- **Claude Code:** "Review before executing" (classifier)

Both have trade-offs. Sandboxes contain even novel, unrecognized threats, but only after the agent has attempted them. Classifiers prevent a flagged action from running at all, but may miss novel attacks (5.7% false negative rate).
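The two philosophies can be contrasted in a toy sketch. Nothing below reflects either tool's real implementation: `execute`, the `restricted` flag, and the risk check are all hypothetical stand-ins (a real sandbox uses container/OS-level isolation, not a boolean).

```python
def execute(action: str, restricted: bool = False) -> str:
    # Stand-in for actually running a tool call.
    scope = "sandbox" if restricted else "host"
    return f"ran {action!r} on {scope}"

def is_risky(action: str) -> bool:
    # Hypothetical risk check standing in for the AI classifier.
    return "rm -rf" in action

def classifier_gate(action: str) -> str:
    # "Review before executing": a flagged action never runs.
    if is_risky(action):
        return "blocked"
    return execute(action)

def sandbox_contain(action: str) -> str:
    # "Let it try, contain the damage": the action always runs,
    # but inside an environment that limits its blast radius.
    return execute(action, restricted=True)

print(classifier_gate("rm -rf /tmp/build"))   # blocked
print(sandbox_contain("rm -rf /tmp/build"))   # ran 'rm -rf /tmp/build' on sandbox
```

The gate's weakness is visible in the shape of the code: anything `is_risky` misclassifies as safe runs unprotected on the host, which is where the 5.7% false negative rate matters.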