---
name: security:clean
description: Scan and remediate security findings: auto-fixes deterministic issues, confirms semi-auto fixes with the user, reports manual findings
allowed-tools: Read, Glob, Grep, Bash, Write, Edit, Agent, AskUserQuestion
model: sonnet
---

# /security clean [path] [--dry-run]

Scan, classify findings by remediability, auto-fix deterministic issues, propose semi-auto fixes, and report manual findings. Goal: `/security scan` yields zero findings after a clean.

## Step 1: Setup

- Parse `$ARGUMENTS`: extract the path (default `.`) and the `--dry-run` flag. Resolve the path to absolute.
- Plugin root = parent of this `commands/` folder.
- Unless dry-run: create a backup via `node /scanners/lib/fs-utils.mjs backup ""`. Record the backup path.

## Step 2: Pre-Clean Scan

```bash
node /scanners/lib/fs-utils.mjs tmppath clean-findings.json
node /scanners/scan-orchestrator.mjs "" --output-file ""
```

Show a banner with the verdict, risk score, and finding counts. If there are 0 findings, stop.

## Step 3: Auto-Fix

```bash
node /scanners/auto-cleaner.mjs "" --findings "" [--dry-run]
```

Report Applied/Skipped/Failed counts plus a list of the fixes.

## Step 4: Semi-Auto Proposals

Collect `semi_auto` findings from the auto-cleaner output. If there are any, spawn `subagent_type: "llm-security:cleaner-agent"`, `model: "sonnet"`:

> Here are semi-auto findings: \. Target: \.
> Read: \/knowledge/secrets-patterns.md
> Return remediation proposals as JSON.

Present each proposal group via AskUserQuestion: "Apply all" / "Review individually" / "Skip". Apply approved fixes with the Edit tool. Skip this step if dry-run.

## Step 5: LLM Threat Scan

Spawn `subagent_type: "llm-security:skill-scanner-agent"`, `model: "sonnet"`:

> Scan target: \. Read: \/knowledge/skill-threat-patterns.md, \/knowledge/secrets-patterns.md
> Return findings with severity, category, file, line, and remediation.

Auto-fix deterministic LLM findings (injection comments, spoofed headers, exfiltration URLs). Present semi-auto findings via AskUserQuestion. Report manual findings.
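The argument handling described in Step 1 could be sketched as a small POSIX-shell function. This is a hypothetical illustration, not the command's actual implementation; `parse_args`, `TARGET`, and `DRY_RUN` are illustrative names, and resolution of the path to absolute is omitted for brevity:

```shell
# Hypothetical sketch of Step 1: pull the target path (default ".")
# and the --dry-run flag out of the command's arguments.
parse_args() {
  TARGET="."
  DRY_RUN=0
  for arg in "$@"; do
    case "$arg" in
      --dry-run) DRY_RUN=1 ;;          # flag may appear anywhere
      *)         TARGET="$arg" ;;       # last non-flag wins as the path
    esac
  done
}

parse_args --dry-run ./plugins
echo "target=$TARGET dry_run=$DRY_RUN"  # prints: target=./plugins dry_run=1
```

Because the loop treats any non-flag token as the path, `./plugins --dry-run` and `--dry-run ./plugins` behave identically.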
## Step 6: Validate + Re-Scan

Validate modified files (JSON parse, frontmatter, `node --check`). Restore from backup on failure. Re-run the orchestrator to measure improvement.

## Step 7: Report

Output: pre/post comparison, all fix summaries, remaining manual findings, rollback instructions.

- Dry-run: show "DRY-RUN" mode and list proposed changes without applying them.
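The per-file validation pass in Step 6 might dispatch on file type like the sketch below. This is an assumed shape, not the plugin's actual code; `validate_file` is a hypothetical helper, and the frontmatter check (first line must be `---`) is an assumption about how this plugin's markdown files are laid out:

```shell
# Hypothetical sketch of Step 6: check each modified file by type.
# A non-zero return would mark the batch for restore-from-backup.
validate_file() {
  f="$1"
  case "$f" in
    *.json)      node -e "JSON.parse(require('fs').readFileSync('$f','utf8'))" ;;
    *.mjs|*.js)  node --check "$f" ;;                # syntax-only parse
    *.md)        head -n 1 "$f" | grep -q '^---$' ;;  # frontmatter opener
    *)           return 0 ;;                          # other types: no check
  esac
}
```

Keeping validation syntax-only (parse, not execute) means a malicious file cannot run during the check itself.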