feat: initial open marketplace with llm-security, config-audit, ultraplan-local

commit f93d6abdae
380 changed files with 65935 additions and 0 deletions
29  .claude-plugin/marketplace.json  Normal file
@@ -0,0 +1,29 @@
{
  "$schema": "https://anthropic.com/claude-code/marketplace.schema.json",
  "name": "ktg-plugin-marketplace",
  "owner": {
    "name": "Kjell Tore Guttormsen",
    "email": "ktg@fromaitochitta.com"
  },
  "metadata": {
    "description": "Open-source Claude Code plugins for AI-assisted development, security, and planning",
    "version": "1.0.0"
  },
  "plugins": [
    {
      "name": "llm-security",
      "source": "./plugins/llm-security",
      "description": "Security scanning, auditing, and threat modeling for Claude Code projects. OWASP LLM Top 10 (2025) and Agentic AI Top 10."
    },
    {
      "name": "config-audit",
      "source": "./plugins/config-audit",
      "description": "Multi-agent workflow for analyzing, reporting, and optimizing Claude Code configuration across your entire machine"
    },
    {
      "name": "ultraplan-local",
      "source": "./plugins/ultraplan-local",
      "description": "Deep implementation planning with interview, specialized agent swarms, external research, adversarial review, session decomposition, and headless execution support"
    }
  ]
}
2  .gitleaksignore  Normal file
@@ -0,0 +1,2 @@
# False positive: intentionally fake credential in llm-security malicious-skill demo
plugins/llm-security/examples/malicious-skill-demo/evil-project-health/lib/telemetry.mjs:generic-api-key:18
54  README.md  Normal file
@@ -0,0 +1,54 @@
# ktg-plugin-marketplace

Open-source Claude Code plugins for AI-assisted development, security, and planning.

## Plugins

| Plugin | Description |
|--------|-------------|
| **llm-security** | Security scanning, auditing, and threat modeling aligned to OWASP LLM Top 10 (2025) |
| **config-audit** | Multi-agent workflow for analyzing and optimizing Claude Code configuration |
| **ultraplan-local** | Deep implementation planning with agent swarms, adversarial review, and headless execution |

## Installation

### Step 1: Add this marketplace

Add the following entry to your `~/.claude/plugins/known_marketplaces.json`:

```json
{
  "ktg-plugin-marketplace": {
    "source": {
      "source": "git",
      "url": "https://git.fromaitochitta.com/open/ktg-plugin-marketplace.git"
    },
    "installLocation": "<your-home>/.claude/plugins/marketplaces/ktg-plugin-marketplace",
    "autoUpdate": true
  }
}
```

Replace `<your-home>` with your actual home directory path.

### Step 2: Enable plugins

Add the plugins you want to `~/.claude/settings.json`:

```json
{
  "enabledPlugins": {
    "llm-security@ktg-plugin-marketplace": true,
    "config-audit@ktg-plugin-marketplace": true,
    "ultraplan-local@ktg-plugin-marketplace": true
  }
}
```

### Step 3: Verify

Open a new Claude Code session and run `/plugin` to see available plugins.

## License

MIT
8  plugins/config-audit/.claude-plugin/plugin.json  Normal file
@@ -0,0 +1,8 @@
{
  "name": "config-audit",
  "description": "Multi-agent workflow for analyzing, reporting, and optimizing Claude Code configuration across your entire machine",
  "version": "3.0.1",
  "author": {
    "name": "Kjell Tore Guttormsen"
  }
}
27  plugins/config-audit/.claude/rules/agent-development.md  Normal file
@@ -0,0 +1,27 @@
---
paths: agents/**/*.md
---

# Agent Development Rules

## Required Frontmatter

All agent files MUST include this frontmatter:

```yaml
---
name: descriptive-name
description: |
  Multi-line description of when to use this agent.
model: opus|sonnet|haiku
color: blue|green|yellow|purple|cyan|magenta
tools: ["Read", "Glob", "Write"]
---
```

## Conventions

- Agent names use kebab-case with `-agent` suffix
- Description must explain WHEN the agent should be used
- Model choice: opus for analysis, sonnet for implementation, haiku for scanning
- Color must be unique within the plugin
24  plugins/config-audit/.claude/rules/command-development.md  Normal file
@@ -0,0 +1,24 @@
---
paths: commands/**/*.md
---

# Command Development Rules

## Required Frontmatter

All command files MUST include:

```yaml
---
name: plugin:command
description: Short description of what this command does
allowed-tools: Read, Write, Bash, Task
model: sonnet
---
```

## Naming Convention

- Commands use `plugin-name:action` format (e.g., `config-audit:analyze`)
- Main router command uses just the plugin name (e.g., `config-audit`)
- Description should be one line and actionable
15  plugins/config-audit/.claude/rules/state-management.md  Normal file
@@ -0,0 +1,15 @@
# State Update Rule

After EVERY phase completes, you MUST update state.yaml using the Write tool (full file overwrite):

1. Read: `~/.claude/config-audit/sessions/{session-id}/state.yaml`
2. Update these fields:
   - `current_phase`: the phase that just completed
   - `completed_phases`: add the phase to the array
   - `next_phase`: the next phase in the workflow
   - `updated_at`: current timestamp
3. Write the full file back

**DO NOT output the phase summary until state.yaml is updated.**

This ensures the workflow can resume correctly if interrupted.
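A minimal state.yaml sketch matching the fields listed in the rule above (the phase names and timestamp are hypothetical, for illustration only):

```yaml
# Hypothetical session state after an "analyze" phase completes
current_phase: analyze
completed_phases:
  - discover
  - analyze
next_phase: plan
updated_at: 2026-04-04T12:00:00Z
```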
32  plugins/config-audit/.claude/rules/ux-rules.md  Normal file
@@ -0,0 +1,32 @@
# Config-Audit UX Rules

These rules apply to ALL config-audit commands. The goal is a professional, human-friendly experience.

## Output Rules

1. NEVER show raw JSON, stderr output, or scanner progress lines to the user
2. ALL scanner Bash commands MUST use `--output-file <path> 2>/dev/null`
3. Check the exit code via `; echo $?` — codes 0, 1, 2 are normal (PASS/WARNING/FAIL). Only 3 is a real error
4. Read output files with the Read tool, extract key metrics, and present formatted results
5. NEVER let the user see tool call output that looks like diagnostic logs or stack traces

## Narration Rules

1. Before each major step, tell the user what's happening in plain language
2. After scanners complete, briefly say what was found before showing details
3. When spawning agents, tell the user what the agent does and the approximate wait time
4. If something takes more than a few seconds, set expectations: "This takes about 30 seconds..."

## Formatting Rules

1. Use markdown tables for structured data (area breakdowns, finding lists)
2. Add one-sentence plain-language context for grades and scores — don't assume the user knows what "Level 4 Governed" means
3. Separate test-fixture/example findings from real findings when showing counts
4. End every command with context-sensitive next steps — explain what each command does, not just its name
5. Adapt tone to results: A/B grades get encouraging context; D/F grades get empathetic, actionable guidance

## Command Format

1. Always use the space-separated format in suggestions: `/config-audit plan` (NOT `/config-audit:plan`)
2. Never reference commands that don't exist
3. When suggesting next steps, explain WHY the user might want each option
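The exit-code convention in Output Rule 3 can be sketched as follows (`interpretExitCode` is a hypothetical helper for illustration, not part of the plugin):

```javascript
// Sketch: interpret scanner exit codes per the convention above.
// Codes 0-2 encode a verdict; only 3 signals a real failure.
function interpretExitCode(code) {
  const verdicts = { 0: 'PASS', 1: 'WARNING', 2: 'FAIL' };
  if (code in verdicts) return { ok: true, verdict: verdicts[code] };
  return { ok: false, verdict: 'ERROR' };
}
```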
16  plugins/config-audit/.config-audit-ignore  Normal file
@@ -0,0 +1,16 @@
# Config-Audit Self-Audit Suppressions
# These findings are expected/intentional when scanning this plugin's own root.

# Plugin health scanner: yaml-parser can't parse YAML block lists in agent tools field
CA-PLH-*

# Feature gap: plugin intentionally doesn't need all enterprise features
CA-GAP-*

# Rules with always-active scope (state-management.md) — intentional design
CA-RUL-003

# Duplicate hook definitions: expected when examples/ has its own hooks.json
CA-CNF-007
CA-CNF-008
CA-CNF-009
19  plugins/config-audit/.gitignore  vendored  Normal file
@@ -0,0 +1,19 @@
# Local configuration (contains machine-specific settings)
config-audit.local.md
*.local.md
.claude/settings.local.json

# Secrets
.env
*.key
*.pem
credentials.*

# Dependencies
node_modules/

# Development prompts
S*-PROMPT.md

# Plugin state (managed by plugin)
.config-audit/
262  plugins/config-audit/CHANGELOG.md  Normal file
@@ -0,0 +1,262 @@
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [3.0.1] - 2026-04-04

### Summary
Cross-platform fix — scanners, hooks, and lib now work correctly on Windows.

### Fixed
- `file-discovery.mjs`: depth calculation and agent/command/plugin path matching now use `path.sep`
- `scan-orchestrator.mjs`: fixture-path filtering now uses `path.sep`
- `post-edit-verify.mjs`: rules-dir regex handles both `/` and `\` separators
- `auto-backup-config.mjs`: rules-dir detection now uses `path.sep`
- `import-resolver.mjs`: circular import display uses `basename()`; `/tmp` fallback replaced with `os.tmpdir()`
- `string-utils.mjs`: `normalizePath` trailing-separator regex handles both `/` and `\`

### Added
- 4 cross-platform path tests (total 486 tests)
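The `path.sep` fixes above can be illustrated with a small sketch (assumed shape only; `relDepth` is a hypothetical name, not the actual `file-discovery.mjs` API):

```javascript
import path from 'node:path';

// Sketch of the cross-platform depth idea: count path segments with
// path.sep instead of hardcoding '/', so the same logic works on Windows.
function relDepth(root, file) {
  const rel = path.relative(root, file);
  if (rel === '') return 0;
  return rel.split(path.sep).length;
}
```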
## [3.0.0] - 2026-04-04

### Summary
Health redesign — configuration health is now quality-only. Feature utilization removed from grades entirely.

### Changed
- **Health = quality only.** 7 deterministic scanners (CML, SET, HKV, RUL, MCP, IMP, CNF) determine your grade. Feature Coverage is no longer a graded area.
- **Feature recommendations are opt-in.** Unused features are shown as "opportunities" via `/config-audit feature-gap`, grouped by impact (high/medium/explore) and backed by Anthropic docs. No more "Feature Coverage: F" for correct minimal setups.
- **Posture output redesigned.** Shows `Health: {grade} ({score}/100)` with 7 quality areas. Removed utilization %, maturity level, and segment label.
- **Feature-gap is interactive.** Users select recommendations to implement directly — no manual file editing required. A backup is created automatically.
- **avgScore bug fixed.** Grade letter and displayed score are now computed from the same population (quality areas only).

### Added
- `generateHealthScorecard()` in scoring.mjs — quality-only scorecard
- `opportunitySummary()` in feature-gap-scanner.mjs — groups findings by impact tier
- `opportunityCount` field in posture JSON output
- "Official Configuration Guidance" section in knowledge base (Anthropic docs, proven impacts)
- 21 new tests (total 482 across 27 test files)

### Removed
- `S2-PROMPT.md` and `V2-ANNOUNCEMENT.md` — v2 development artifacts
- Utilization %, maturity level, and segment label from posture terminal output and reports
- Feature Coverage row from area breakdown tables
- "Top Actions" sourced from GAP findings (replaced by opportunities pointer)

### Backward Compatibility
- JSON output preserves all legacy fields (utilization, maturity, segment) for programmatic consumers
- Drift baselines unaffected — GAP findings still present in envelopes
- All existing exports maintained (calculateUtilization, determineMaturityLevel, etc.)
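The avgScore fix can be sketched as follows (grade thresholds are assumed for illustration; `healthGrade` is a hypothetical helper, not the actual `scoring.mjs` export):

```javascript
// Sketch: derive the letter grade and the displayed score from the SAME
// array of area scores, so they can never disagree (the avgScore bug).
function healthGrade(areaScores) {
  const avg = Math.round(
    areaScores.reduce((sum, s) => sum + s, 0) / areaScores.length
  );
  // Assumed A-F thresholds, for illustration only
  const letter = avg >= 90 ? 'A' : avg >= 80 ? 'B' : avg >= 70 ? 'C'
               : avg >= 60 ? 'D' : 'F';
  return { letter, score: avg };
}
```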
## [2.2.0] - 2026-04-04

### Summary
UX quality fix — fixture filtering, session path migration, output polish.

### Added
- Automatic test-fixture filtering in scan-orchestrator: findings from `tests/`, `examples/`, `__tests__/` excluded from grades, stored in `env.fixture_findings`
- `--include-fixtures` CLI flag for scan-orchestrator and posture to override filtering
- `scan-orchestrator.test.mjs` — 20 new tests for fixture filtering and `isFixturePath`
- Legacy session path detection in cleanup command

### Changed
- Session storage moved from `~/.config-audit/` to `~/.claude/config-audit/` (pathguard compatible)
- Self-audit grade: F → A (98) after fixture filtering
- Combined scanner + posture into a single Bash call in the default audit command
- Removed "F grade is misleading" disclaimer — grades are now accurate
- All CLI banners and envelope metadata updated to v2.2.0
- 461 tests (up from 441), 27 test files (up from 26)

### Removed
- Manual fixture counting instruction in `config-audit.md` (orchestrator handles it)
- Redundant `isFixtureOrExample` filter in `self-audit.mjs` (promoted to orchestrator)
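The fixture filtering above can be sketched roughly like this (assumed semantics; the real `isFixturePath` in scan-orchestrator may differ):

```javascript
// Sketch: a finding whose file path contains a tests/, examples/, or
// __tests__/ directory segment is treated as a fixture and excluded
// from grading (splitting on both separators for cross-platform paths).
const FIXTURE_DIRS = new Set(['tests', 'examples', '__tests__']);

function isFixturePath(filePath) {
  return filePath
    .split(/[\\/]/)
    .some((segment) => FIXTURE_DIRS.has(segment));
}
```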
## [2.1.0] - 2026-04-03

### Summary
UX redesign — auto-scope detection, zero questions, simplified command surface.

### Changed
- `/config-audit` now runs a full audit automatically (auto-detects scope from git context)
- Removed mode selection prompts — scope override via `/config-audit full|repo|home|current`
- Simplified from 17 to 15 commands (removed quick, report, watch; added help)
- All CLI banners and envelope metadata updated to v2.1.0

### Added
- `/config-audit help` command with categorized command reference
- Auto-scope detection from git context (repo vs home vs full-machine)

### Removed
- `/config-audit:quick` (merged into default `/config-audit`)
- `/config-audit:report` (merged into analyze output)
- `/config-audit:watch` (use `/config-audit drift` instead)
## [2.0.0] - 2026-04-03 (v2.0 Complete)

### Summary
Complete rewrite from LLM-only prototype to deterministic scanner-backed configuration intelligence.
7 development sessions (S1-S7), ~15,000 lines of code, 408+ tests.

### Highlights
- 8 deterministic scanners (CML, SET, HKV, RUL, MCP, IMP, CNF, GAP) + PLH standalone
- Feature gap analysis with 25 dimensions across 4 tiers
- Auto-fix engine with 9 fix types + backup/rollback
- Drift detection with baseline comparison
- Suppression engine (.config-audit-ignore)
- Self-audit CLI
- 17 commands, 6 agents, 4 hooks
- 408+ tests (zero external dependencies)

### Added (S7)
- Example projects: `examples/minimal-setup/` and `examples/optimal-setup/`
- Demo script: `examples/run-demo.sh`
- `.config-audit-ignore` for self-audit suppressions
- `V2-ANNOUNCEMENT.md`
- `DEPRECATED.md` for capability-auditor skill

### Fixed (S7)
- `hooks.json`: SessionStart and Stop timeout 5ms → 5000ms
- `self-audit.mjs`: Suppression now enabled (was hardcoded to `suppress: false`)

### Changed (S7)
- README.md: Complete rewrite for public release
- CLAUDE.md: Added Suppressions section
- `.gitignore`: Added `node_modules/` and `S*-PROMPT.md`
## [1.6.0] - 2026-04-03 (v2.0 S6: Unified Reports + Self-Audit + Suppressions)

### Added
- **Report generator** `scanners/lib/report-generator.mjs` — unified markdown reports: generatePostureReport(), generateDriftReport(), generatePluginHealthReport(), generateFullReport()
- **Suppression engine** `scanners/lib/suppression.mjs` — `.config-audit-ignore` file support with exact IDs and glob patterns (CA-SET-*), audit trail via `suppressed_findings` in envelope
- **Self-audit CLI** `scanners/self-audit.mjs` — runs all scanners + plugin health on this plugin: `node self-audit.mjs [--json] [--fix]`, exit codes 0/1/2
- **PostToolUse hook** `post-edit-verify.mjs` — verifies config files after Edit/Write, blocks if new critical/high findings are introduced
- **New command**: `/config-audit:report` — generate unified report (posture + optional drift/plugin-health)
- **Test fixture** `.config-audit-ignore` in fixable-project
- 54 new tests (total 408 across 25 test files)

### Changed
- `scan-orchestrator.mjs`: suppression integration — applies .config-audit-ignore after all scanners run; `--no-suppress` flag to disable
- `hooks.json`: added PostToolUse event with post-edit-verify
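Glob-pattern suppression such as `CA-SET-*` can be sketched as follows (assumed semantics; `matchesSuppression` is a hypothetical name, not the actual `suppression.mjs` API):

```javascript
// Sketch: match a finding ID against an .config-audit-ignore entry,
// where '*' is a wildcard (e.g. CA-SET-*) and anything else is literal.
function matchesSuppression(findingId, pattern) {
  const re = new RegExp(
    '^' +
      pattern
        .split('*')
        .map((part) => part.replace(/[.+?^${}()|[\]\\]/g, '\\$&'))
        .join('.*') +
      '$'
  );
  return re.test(findingId);
}
```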
## [1.5.0] - 2026-04-03 (v2.0 S5: Drift + Watch + Plugin Health)

### Added
- **Diff engine** `scanners/lib/diff-engine.mjs` — diffEnvelopes() comparing baseline vs current, formatDiffReport() for terminal output
- **Baseline manager** `scanners/lib/baseline.mjs` — save/load/list/delete named baselines in ~/.claude/config-audit/baselines/
- **Drift CLI** `scanners/drift-cli.mjs` — standalone: `node drift-cli.mjs <path> [--save] [--baseline name] [--json] [--list]`
- **Plugin health scanner** `scanners/plugin-health-scanner.mjs` (PLH) — validates plugin structure, frontmatter, cross-plugin conflicts (runs independently, not in scan-orchestrator)
- **3 new commands**:
  - `/config-audit:drift` — compare current config against saved baseline
  - `/config-audit:watch` — on-demand drift check with baseline monitoring
  - `/config-audit:plugin-health` — audit plugin structure and cross-plugin coherence
- **Test fixtures** `test-plugin/` (valid) and `broken-plugin/` (invalid) for plugin health tests
- 48 new tests (total 354 across 21 test files)
## [1.4.0] - 2026-04-03 (v2.0 S4: Fix + Rollback Action Pillar)

### Added
- **Fix engine** `scanners/fix-engine.mjs` — deterministic auto-fix for 9 fix types:
  - `json-key-add` (missing $schema), `json-key-remove` (deprecated keys), `json-key-type-fix` (type mismatches, invalid effortLevel), `json-restructure` (hooks array→object, matcher object→string), `frontmatter-rename` (globs→paths), `file-rename` (non-.md→.md)
- **Rollback engine** `scanners/rollback-engine.mjs` — listBackups(), restoreBackup(), deleteBackup() with checksum verification
- **Fix CLI** `scanners/fix-cli.mjs` — standalone: `node fix-cli.mjs <path> [--apply] [--json] [--global]`, dry-run by default
- **Backup lib** `scanners/lib/backup.mjs` — shared backup module with checksums and manifests
- **2 new commands**:
  - `/config-audit:fix` — scan, plan, backup, apply, verify in one flow
  - `/config-audit:rollback` — list or restore from backups
- **PreToolUse hook** `auto-backup-config.mjs` — auto-backup config files before Edit/Write
- **Test fixture** `fixable-project/` — fixture with all 9 fixable issue types
- 38 new tests (total 306 across 17 test files)

### Changed
- `file-discovery.mjs`: walkRulesDir now discovers all files (not just .md) for non-.md validation
- `backup-before-change.mjs`: refactored to use shared `lib/backup.mjs` (no logic duplication)
- hooks.json: added PreToolUse event with auto-backup
## [1.3.0] - 2026-04-03 (v2.0 S3: Posture + Feature Gap Commands)

### Added
- **Scoring module** `scanners/lib/scoring.mjs` — utilization, maturity (5 levels), segments, area scoring, scorecard generation
- **Posture CLI** `scanners/posture.mjs` — standalone Node.js tool: `node posture.mjs <path> [--json] [--global]`
- **2 new commands**:
  - `/config-audit:posture` — quick scorecard with A-F grades, utilization %, maturity level
  - `/config-audit:feature-gap` — deep gap analysis with prioritized next-best-actions
- **feature-gap-agent** — Opus agent for deep analysis, report generation (max 200 lines)
- **Knowledge file** `gap-closure-templates.md` — 11 templates with effort/gain estimates
- **HTML report template** `templates/feature-gap-report.html` — visual report with progress bars, grade badges
- 64 new tests (total 268 across 14 test files)

### Changed
- Tier weighting: T1 gaps count 3x, T2 count 2x, T3/T4 count 1x in the utilization score
- Maturity is threshold-based: highest level where ALL requirements are met
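The tier weighting above can be sketched like this (assumed data shape, for illustration only; not the actual `scoring.mjs` code):

```javascript
// Sketch: weighted utilization where Tier 1 features count 3x,
// Tier 2 count 2x, and Tier 3/4 count 1x toward the score.
const TIER_WEIGHT = { 1: 3, 2: 2, 3: 1, 4: 1 };

function utilization(features) {
  let earned = 0;
  let possible = 0;
  for (const { tier, present } of features) {
    const w = TIER_WEIGHT[tier] ?? 1;
    possible += w;
    if (present) earned += w;
  }
  return possible === 0 ? 0 : Math.round((earned / possible) * 100);
}
```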
## [1.2.0] - 2026-04-03 (v2.0 S2: Advanced Scanners + Knowledge Base)

### Added
- **4 advanced scanners** (zero external deps):
  - `mcp-config-validator.mjs` (MCP) — server types, trust levels, env vars, unknown fields
  - `import-resolver.mjs` (IMP) — broken @imports, circular refs, deep chains, tilde paths
  - `conflict-detector.mjs` (CNF) — settings conflicts, permission contradictions, hook duplicates
  - `feature-gap-scanner.mjs` (GAP) — 25 feature gaps across 4 tiers (Foundation/Depth/Advanced/Enterprise)
- **Knowledge base** — 5 reference documents: capabilities, best practices, anti-patterns, hook events, feature evolution
- **New test fixtures** — `.mcp.json` files, @import chains, `conflict-project/` fixture
- 75 new tests (total 204 across 12 test files)

### Changed
- Scan orchestrator runs 8 scanners (was 4)
- Analyzer agent cross-references scanner findings with the knowledge base
## [1.1.0] - 2026-04-03 (v2.0 S1: Scanner Foundation)

### Added
- **Deterministic scanner infrastructure** — 4 Node.js scanners (zero external deps):
  - `claude-md-linter.mjs` (CML) — CLAUDE.md structure, length, sections, @imports, duplicates
  - `settings-validator.mjs` (SET) — settings.json schema, unknown/deprecated keys, type checks
  - `hook-validator.mjs` (HKV) — hooks.json format, script existence, event validity, timeouts
  - `rules-validator.mjs` (RUL) — .claude/rules/ glob matching, orphan detection, deprecated fields
- **Scanner lib** — 5 shared modules: severity, output, file-discovery, yaml-parser, string-utils
- **Scan orchestrator** — `scan-orchestrator.mjs` runs all scanners, outputs a JSON envelope
- **Test infrastructure** — 129 tests across 8 test files using node:test (zero deps)
- **Test fixtures** — 4 fixture projects (healthy, broken, empty, minimal)
- Finding ID format: `CA-{SCANNER}-{NNN}` (e.g. `CA-CML-001`)

### Fixed
- Agent model mismatches: scanner→haiku, analyzer→sonnet, planner→opus, implementer→sonnet, verifier→haiku

### Changed
- CLAUDE.md rewritten in English for public release readiness
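The `CA-{SCANNER}-{NNN}` finding ID format can be illustrated with a tiny hypothetical helper (`findingId` is not an actual export of the scanner lib):

```javascript
// Sketch: build a finding ID in the CA-{SCANNER}-{NNN} format,
// zero-padding the sequence number to three digits (e.g. CA-CML-001).
function findingId(scanner, n) {
  return `CA-${scanner.toUpperCase()}-${String(n).padStart(3, '0')}`;
}
```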
## [1.0.0] - 2026-02-11

### Added
- Cross-platform support (macOS, Linux, Windows)

### Fixed
- `stop-session-reminder.mjs`: use `path.basename`/`path.dirname` instead of hardcoded `/` split
- `backup-before-change.mjs`: handle both `/` and `\` path separators in safe filename generation

### Removed
- "Windows: hooks are 100% bash" from known gaps (was incorrect — all hooks are Node.js)
## [0.7.0] - 2026-02-07

### Note
Version reset from 1.2.0 to reflect actual maturity. The previous version number was inflated — this plugin has never been externally tested.

### What exists today
- 6 specialized agents (scanner, analyzer, interviewer, planner, implementer, verifier)
- Full machine-wide Claude Code configuration discovery
- Scope selection (current project, repo, home, full machine)
- Inheritance hierarchy mapping and conflict detection
- Mandatory backups before any changes
- Rollback support
- Syntax validation for all configuration files
- Quick audit-only mode
- Full optimization workflow with HITL checkpoints

### Known gaps
- Testing: no automated tests
- Onboarding: never verified that a new user can install and use from scratch
- External verification: nobody else has ever used this
160
plugins/config-audit/CLAUDE.md
Normal file
160
plugins/config-audit/CLAUDE.md
Normal file
|
|
@ -0,0 +1,160 @@
|
||||||
|
# Config-Audit Plugin
|
||||||
|
|
||||||
|
Claude Code Configuration Intelligence — know if your configuration is correct, find what could improve it, fix it automatically.
|
||||||
|
|
||||||
|
## What this plugin does
|
||||||
|
|
||||||
|
Analyzes and optimizes Claude Code configuration across three pillars:
|
||||||
|
- **Health** — Deterministic scanners verify correctness, consistency, and completeness
|
||||||
|
- **Opportunities** — Context-aware recommendations for features that could benefit your project
|
||||||
|
- **Action** — Auto-fix with backup/rollback
|
||||||
|
|
||||||
|
## Commands
|
||||||
|
|
||||||
|
### Core (just run `/config-audit` to get started)
|
||||||
|
|
||||||
|
| Command | Description |
|
||||||
|
|---------|-------------|
|
||||||
|
| `/config-audit` | Full audit with auto-scope detection (no setup needed) |
|
||||||
|
| `/config-audit posture` | Quick health scorecard (A-F grades, 7 quality areas) |
|
||||||
|
| `/config-audit feature-gap` | Context-aware feature recommendations grouped by impact |
|
||||||
|
| `/config-audit fix` | Auto-fix deterministic issues with backup + verification |
|
||||||
|
| `/config-audit rollback` | Restore configuration from backup |
|
||||||
|
| `/config-audit plan` | Create action plan from audit findings |
|
||||||
|
| `/config-audit implement` | Execute plan with backups + auto-verify |
|
||||||
|
| `/config-audit help` | Show all commands |
|
||||||
|
|
||||||
|
### Additional
|
||||||
|
|
||||||
|
| Command | Description |
|
||||||
|
|---------|-------------|
|
||||||
|
| `/config-audit drift` | Compare current config against saved baseline |
|
||||||
|
| `/config-audit plugin-health` | Audit plugin structure, frontmatter, cross-plugin coherence |
|
||||||
|
| `/config-audit discover` | Run discovery phase only |
|
||||||
|
| `/config-audit analyze` | Run analysis phase only |
|
||||||
|
| `/config-audit interview` | Gather user preferences (opt-in) |
|
||||||
|
| `/config-audit status` | Show current session state |
|
||||||
|
| `/config-audit cleanup` | Clean up old sessions |
|
||||||
|
|
||||||
|
## Agents
|
||||||
|
|
||||||
|
| Agent | Role | Model | Color | Tools |
|
||||||
|
|-------|------|-------|-------|-------|
|
||||||
|
| scanner-agent | Find config files | haiku | cyan | Read, Glob, Grep, Write |
|
||||||
|
| analyzer-agent | Generate report | sonnet | blue | Read, Glob, Grep, Write |
|
||||||
|
| planner-agent | Create action plan | opus | yellow | Read, Glob, Write |
|
||||||
|
| implementer-agent | Execute changes | sonnet | magenta | Read, Write, Edit, Bash, Glob |
|
||||||
|
| verifier-agent | Verify results | haiku | purple | Read, Glob, Grep |
|
||||||
|
| feature-gap-agent | Context-aware feature recommendations | opus | green | Read, Glob, Grep, Write |
|
||||||
|
|
||||||
|
## Deterministic Scanners
|
||||||
|
|
||||||
|
Node.js scanners (zero external dependencies), run via `node scanners/scan-orchestrator.mjs <path>`.
|
||||||
|
Posture CLI: `node scanners/posture.mjs <path> [--json] [--global] [--full-machine] [--output-file path]`.
|
||||||
|
Scanner CLI: `node scanners/scan-orchestrator.mjs <path> [--global] [--full-machine] [--no-suppress]`.
|
||||||
|
|
||||||
|
| Scanner | Prefix | Detects |
|
||||||
|
|---------|--------|---------|
|
||||||
|
| `claude-md-linter.mjs` | CML | Structure, length, sections, @imports, duplicates, TODOs |
|
||||||
|
| `settings-validator.mjs` | SET | Schema, unknown/deprecated keys, type mismatches, permissions |
|
||||||
|
| `hook-validator.mjs` | HKV | Format, script existence, event validity, timeouts |
|
||||||
|
| `rules-validator.mjs` | RUL | Glob matching, orphan rules, deprecated fields, unscoped rules |
|
||||||
|
| `mcp-config-validator.mjs` | MCP | Server types, trust levels, env vars, unknown fields |
|
||||||
|
| `import-resolver.mjs` | IMP | Broken @imports, circular refs, deep chains, tilde paths |
|
||||||
|
| `conflict-detector.mjs` | CNF | Settings conflicts, permission contradictions, hook duplicates |
|
||||||
|
| `feature-gap-scanner.mjs` | GAP | 25 feature checks across 4 tiers — shown as opportunities, not grades |
|
||||||
|
|
||||||
|
### Scanner Lib (`scanners/lib/`)

| Module | Purpose |
|--------|---------|
| `severity.mjs` | Severity constants, risk scoring, verdict logic |
| `output.mjs` | Finding objects (CA-XXX-NNN format), scanner results, envelope |
| `file-discovery.mjs` | Config file discovery: single-path, multi-path (`discoverConfigFilesMulti`), full-machine (`discoverFullMachinePaths`) |
| `yaml-parser.mjs` | Frontmatter parsing, JSON parsing, @import/section extraction |
| `string-utils.mjs` | Line counting, truncation, similarity, key extraction |
| `scoring.mjs` | Area scoring, health scorecard, legacy utilization/maturity |
| `backup.mjs` | Backup creation, manifest parsing, checksum verification |
| `diff-engine.mjs` | Drift diffing: diffEnvelopes(), formatDiffReport() |
| `baseline.mjs` | Baseline save/load/list/delete for drift detection |
| `report-generator.mjs` | Unified markdown reports: posture, drift, plugin health |
| `suppression.mjs` | .config-audit-ignore parsing, finding suppression, audit trail |
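As a rough illustration of the kind of logic `severity.mjs` centralizes, here is a hedged sketch of severity-weighted risk scoring. The weights and verdict thresholds below are invented for illustration; they are not the module's actual values.

```javascript
// Illustrative only: these weights and thresholds are assumptions,
// not the real constants in scanners/lib/severity.mjs.
const WEIGHTS = { critical: 25, high: 10, medium: 3, low: 1 };

// Sum severity weights across a list of findings.
function riskScore(findings) {
  return findings.reduce((sum, f) => sum + (WEIGHTS[f.severity] ?? 0), 0);
}

// Map an aggregate score to a coarse verdict.
function verdict(score) {
  if (score === 0) return 'pass';
  return score >= 25 ? 'fail' : 'warn';
}

const score = riskScore([{ severity: 'high' }, { severity: 'low' }]);
// score === 11, verdict(score) === 'warn'
```

The point is the shape, not the numbers: one module owns the mapping from findings to a score and from a score to a verdict, so every scanner reports consistently.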
### Action Engines (`scanners/`)

| Module | Purpose |
|--------|---------|
| `fix-engine.mjs` | planFixes(), applyFixes(), verifyFixes() — 9 fix types |
| `rollback-engine.mjs` | listBackups(), restoreBackup(), deleteBackup() |
| `fix-cli.mjs` | CLI: `node fix-cli.mjs <path> [--apply] [--json] [--global]` |
| `drift-cli.mjs` | CLI: `node drift-cli.mjs <path> [--save] [--baseline name] [--json]` |
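The fix engine's three functions form a plan, apply, verify pipeline. A conceptual sketch of that flow follows; the engine object and its signatures here are illustrative stand-ins, not the real API exported by `scanners/fix-engine.mjs` (which may well be asynchronous).

```javascript
// Conceptual sketch of the plan -> apply -> verify flow.
// The engine shape is a stub; real signatures live in scanners/fix-engine.mjs.
function runFixes(engine, projectPath) {
  const plan = engine.planFixes(projectPath);   // dry-run: what would change
  if (plan.fixes.length === 0) return { applied: 0, ok: true };
  const result = engine.applyFixes(plan);       // backups are taken before writes
  const ok = engine.verifyFixes(result);        // re-scan and confirm no regressions
  return { applied: result.fixes.length, ok };
}

// Stub engine for illustration only:
const stubEngine = {
  planFixes: () => ({ fixes: ['CA-SET-003'] }),
  applyFixes: (plan) => ({ fixes: plan.fixes }),
  verifyFixes: () => true,
};

runFixes(stubEngine, '.'); // { applied: 1, ok: true }
```

Splitting plan from apply is what makes the dry-run default of `fix-cli.mjs` possible: without `--apply`, only the planning step runs.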
### Standalone Scanner

| Module | Prefix | Purpose |
|--------|--------|---------|
| `plugin-health-scanner.mjs` | PLH | Plugin structure, frontmatter, cross-plugin conflicts (runs independently) |
| `self-audit.mjs` | — | Runs all scanners + plugin health on this plugin itself |
## Knowledge Base (`knowledge/`)

| File | Content |
|------|---------|
| `claude-code-capabilities.md` | Feature register: 18 config surfaces, Anthropic guidance, relevance table |
| `configuration-best-practices.md` | Per-layer best practices |
| `anti-patterns.md` | Common mistakes mapped to scanner IDs |
| `hook-events-reference.md` | All 26 hook events with details |
| `feature-evolution.md` | Feature timeline for staleness detection |
| `gap-closure-templates.md` | Config-specific templates for closing gaps |
## Hooks

| Event | Script | Purpose |
|-------|--------|---------|
| PreToolUse | `auto-backup-config.mjs` | Auto-backup config files before Edit/Write |
| PostToolUse | `post-edit-verify.mjs` | Verify config files after Edit/Write, block on new critical/high |
| SessionStart | `session-start.mjs` | Checks for active (unfinished) sessions |
| Stop | `stop-session-reminder.mjs` | Reminds about current session phase |
## Suppressions

Create `.config-audit-ignore` at the project root to suppress known findings:

```
CA-SET-003    # Exact ID
CA-GAP-*      # Glob pattern (all GAP findings)
```

Suppressed findings are tracked in the envelope's `suppressed_findings` for an audit trail. Disable suppression with `--no-suppress`.
## Architecture

### Workflow

```
/config-audit → discover + analyze (auto) → plan → implement → verify
```

Default: auto-detects scope from git context. Override with `/config-audit full|repo|home|current`. Delta mode: `--delta` (incremental).

### Session Directory

```
~/.claude/config-audit/sessions/{session-id}/
├── scope.yaml, discovery.json, state.yaml
├── findings/, analysis-report.md, action-plan.md
├── backups/, implementation-log.md
└── interview.md (if interview run)
```

### Finding ID Format

`CA-{SCANNER}-{NNN}` — e.g. `CA-CML-001`, `CA-SET-003`, `CA-HKV-002`, `CA-RUL-005`
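The ID format above is mechanical enough to sketch in a few lines. The format itself comes from the docs; the helper names below are illustrative, not part of the plugin's API.

```javascript
// CA-{SCANNER}-{NNN}, e.g. CA-CML-001. Helper names are illustrative.
function makeFindingId(scanner, n) {
  return `CA-${scanner}-${String(n).padStart(3, '0')}`;
}

// Returns { scanner, n } for a well-formed ID, or null otherwise.
function parseFindingId(id) {
  const m = /^CA-([A-Z]+)-(\d{3})$/.exec(id);
  return m ? { scanner: m[1], n: Number(m[2]) } : null;
}

makeFindingId('CML', 1);      // 'CA-CML-001'
parseFindingId('CA-SET-003'); // { scanner: 'SET', n: 3 }
```

Because IDs are stable and parseable, downstream tools (drift diffing, suppressions) can operate on them as plain strings.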
## Testing

```bash
node --test 'tests/**/*.test.mjs'
```

486 tests across 27 test files (10 lib + 16 scanner + 1 hook). Test fixtures in `tests/fixtures/`.
## Gotchas

- Session directories accumulate — use `/config-audit cleanup` to manage them
- Scanners require Node.js >= 18 (they use node:test and node:fs/promises)
- Plugin CLAUDE.md files in node_modules should be excluded via scope
21
plugins/config-audit/LICENSE
Normal file

@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2025-2026 Kjell Tore Guttormsen

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
465
plugins/config-audit/README.md
Normal file

@@ -0,0 +1,465 @@
# Config-Audit Plugin for Claude Code

> Know if your configuration is correct. Find what could improve it. Fix it automatically.

*Built for my own Claude Code workflow and shared openly for anyone who finds it useful. This is a solo project — bug reports and feature requests are welcome, but pull requests are not accepted.*










A Claude Code plugin that checks configuration health, suggests context-aware improvements, and auto-fixes issues — `CLAUDE.md`, `settings.json`, hooks, rules, MCP servers, `@imports`, and plugins. It ships 7 quality scanners for correctness, context-aware feature recommendations, and auto-fix with backup/rollback. Zero external dependencies.

---
## Table of Contents

- [What Is This?](#what-is-this)
- [The Configuration Problem](#the-configuration-problem)
- [Quick Start](#quick-start)
- [The Feature Gap — Your Biggest Blind Spot](#the-feature-gap--your-biggest-blind-spot)
- [Workflow Examples](#workflow-examples)
- [Commands](#commands)
- [Deterministic Scanners](#deterministic-scanners)
- [Agent Architecture](#agent-architecture)
- [Hooks & Safety](#hooks--safety)
- [Suppressions](#suppressions)
- [Examples & Self-Audit](#examples--self-audit)
- [Data Storage & Safety Guarantees](#data-storage--safety-guarantees)
- [What This Plugin Does Not Cover](#what-this-plugin-does-not-cover)
- [Version History](#version-history)
- [License](#license)

---
## What Is This?

Claude Code reads instructions from at least 7 different file types across multiple scopes: `CLAUDE.md`, `settings.json`, `.claude/rules/`, `hooks.json`, `.mcp.json`, `.claudeignore`, and `settings.local.json`. Each can exist at project level, user level, or both. Plugins add more. The system is powerful — but nobody tells you what you're using wrong, what you're missing, or what's silently conflicting.

This plugin provides three layers of configuration intelligence:

- **Health** — 7 deterministic scanners verify correctness across every configuration file, catching broken imports, deprecated settings, conflicting rules, format errors, and permission contradictions
- **Opportunities** — context-aware recommendations for Claude Code features that could benefit your specific project, backed by Anthropic's official guidance
- **Action** — auto-fix with mandatory backups, syntax validation, rollback support, and a human-in-the-loop workflow for anything non-trivial

> [!TIP]
> Start with `/config-audit posture` for a 30-second scorecard, then `/config-audit` for the full picture.

---
## The Configuration Problem

You've been using Claude Code for weeks — maybe months. It works fine. But there's a gap between "works fine" and "configured well," and it's invisible until someone shows you.

**These are not hypotheticals.** They come from running the posture scanner on real setups:

- Your global `CLAUDE.md` says "never use mocks" but a project rule says "prefer mocks" — Claude gets confused and you don't know why
- You've written dozens of projects but have never set up hooks, rules, or keybindings because you didn't know they existed
- Three plugins define hooks for the same event with conflicting behavior
- Your `settings.json` has a deprecated key that silently does nothing
- An `@import` in your CLAUDE.md points to a file you deleted last week
- You're using maybe 30% of what Claude Code can do — and you don't know what the other 70% is

The plugin ships with two example projects. Run them yourself:

### `examples/minimal-setup/` — just a CLAUDE.md, nothing else

```
> node scanners/posture.mjs examples/minimal-setup/

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  Config-Audit Health Score
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

  Health: A (99/100)          7 areas scanned

  Area Scores
  ───────────
  CLAUDE.md ............ A (90)
  Settings ............. A (100)   Hooks ............... A (100)
  Rules ................ A (100)   MCP ................. A (100)
  Imports .............. A (100)   Conflicts ........... A (100)

  22 opportunities available — run /config-audit feature-gap for recommendations

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

**Grade A** — nothing is broken. The health grade only reflects real issues, and this setup has none. The 22 opportunities are not failures — they're features you *could* use. Run `/config-audit feature-gap` to see which ones are relevant to your project.

### `examples/optimal-setup/` — full configuration across all 4 tiers

```
> node scanners/posture.mjs examples/optimal-setup/

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  Config-Audit Health Score
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

  Health: A (93/100)          7 areas scanned

  Area Scores
  ───────────
  CLAUDE.md ............ A (100)   Settings ............ A (90)
  Hooks ................ A (100)   Rules ............... B (80)
  MCP .................. A (90)    Imports ............. A (100)
  Conflicts ............ A (90)

  3 opportunities available — run /config-audit feature-gap for recommendations

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

Also **Grade A** — with only 3 opportunities remaining. This project has CLAUDE.md split via `@imports`, permissions scoped to specific tools, path-scoped rules (different rules for `src/` vs. `tests/`), hooks covering multiple events, and MCP servers. Both setups are healthy — the difference is how much of Claude Code's surface area you're choosing to use.

---
## Quick Start

### Prerequisites

- [Claude Code](https://docs.anthropic.com/en/docs/claude-code) installed
- Node.js 18+ (for standalone CLI tools)

### Installation

Clone from the public repository:

```bash
git clone https://git.fromaitochitta.com/open/claude-code-config-audit.git
```

Or add as a Claude Code plugin:

```json
{
  "enabledPlugins": {
    "config-audit@plugin-marketplace": true
  }
}
```

### First Scan

```bash
# Full audit with auto-scope detection (inside Claude Code)
/config-audit

# 30-second posture check (standalone, no LLM needed)
node scanners/posture.mjs /path/to/project

# Auto-fix issues with backup
node scanners/fix-cli.mjs /path/to/project --apply
```

The CLI tools work standalone — no Claude Code session needed, just Node.js 18+.

---
## Feature Opportunities — Context-Aware Recommendations

Most configuration tools stop at "is it valid?" Config-audit goes further: **what could improve your setup, and is it relevant to your project?**

The feature opportunity scanner checks 25 dimensions and groups recommendations by impact:

| Impact Level | Focus | Examples |
|--------------|-------|---------|
| **High** | Correctness & security | `permissions.deny` for sensitive files, basic hooks for safety automation |
| **Worth Considering** | Workflow efficiency | Path-scoped rules, modular `@imports`, custom agents |
| **Explore** | Nice-to-have | Keybindings, status line, output styles, agent teams |

Each recommendation is **context-aware** — it considers what your project actually contains. A solo TypeScript project gets different suggestions than a team Python monorepo. Recommendations include *why* (backed by Anthropic's official guidance) and *how* (concrete steps).

Run `/config-audit feature-gap` to see what's relevant to your project.

---
## Workflow Examples

### 1. First Time — Just Curious

You heard about this plugin and want to know where you stand:

```
/config-audit            # Auto-detects scope, runs full audit
                         # → See your grade, top issues, and gaps
/config-audit posture    # Even faster: 30-second scorecard only
```

### 2. Monthly Configuration Checkup

A quick health check — are things still clean?

```
/config-audit posture    # Quick health check (A-F grade, 7 areas)
/config-audit            # Full audit if grade dropped
/config-audit fix        # Auto-fix deterministic issues
/config-audit posture    # Verify improvement
```

### 3. Deep Optimization

You want to go from C to A. The full pipeline:

```
/config-audit                # Audit — understand what you have
/config-audit feature-gap    # Opportunities — context-aware recommendations
/config-audit plan           # Plan — prioritized actions with risk assessment
/config-audit implement      # Execute — changes with backup + verification
```

### 4. Plugin Author

You maintain Claude Code plugins and want to ensure quality:

```
/config-audit plugin-health    # Audit plugin structure, frontmatter, cross-plugin conflicts
                               # → Checks naming, frontmatter completeness, tool grants, duplicates
```

### 5. Track Configuration Drift

Your team configuration changes over time. Track it:

```
/config-audit drift                      # First run creates baseline, subsequent runs show delta
                                         # → New findings, resolved findings, unchanged, moved
/config-audit drift --save my-baseline   # Save a named baseline for comparison
```

---
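Under the hood, drift detection boils down to comparing the finding IDs of a baseline scan against the current scan. The real logic lives in `scanners/lib/diff-engine.mjs`; the sketch below is a simplified illustration using plain ID sets.

```javascript
// Illustrative drift classification by finding ID; the real diff engine
// (diff-engine.mjs) also tracks moved findings and richer metadata.
function diffFindings(baselineIds, currentIds) {
  const base = new Set(baselineIds);
  const cur = new Set(currentIds);
  return {
    newFindings: [...cur].filter((id) => !base.has(id)),
    resolved: [...base].filter((id) => !cur.has(id)),
    unchanged: [...cur].filter((id) => base.has(id)),
  };
}

diffFindings(['CA-SET-003', 'CA-IMP-001'], ['CA-SET-003', 'CA-HKV-002']);
// → { newFindings: ['CA-HKV-002'], resolved: ['CA-IMP-001'], unchanged: ['CA-SET-003'] }
```

This is why stable finding IDs matter: if IDs shifted between runs, every drift report would be noise.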
## Commands

### Core (just run `/config-audit` to get started)

| Command | Description |
|---------|-------------|
| `/config-audit` | Full audit with auto-scope detection (no setup needed) |
| `/config-audit posture` | Quick health scorecard: A-F grades across 7 quality areas |
| `/config-audit feature-gap` | Context-aware feature recommendations grouped by impact |
| `/config-audit fix` | Auto-fix deterministic issues with backup + verification |
| `/config-audit rollback` | Restore configuration from a previous backup |
| `/config-audit plan` | Generate prioritized action plan from audit findings |
| `/config-audit implement` | Execute plan with automatic backup + verification |
| `/config-audit help` | Show all commands with usage examples |

### Additional

| Command | Description |
|---------|-------------|
| `/config-audit drift` | Compare current config against a saved baseline |
| `/config-audit plugin-health` | Audit plugin structure, frontmatter, cross-plugin coherence |
| `/config-audit discover` | Run discovery phase only |
| `/config-audit analyze` | Run analysis phase only |
| `/config-audit interview` | Set preferences for action plan _(optional)_ |
| `/config-audit status` | Show current session state and available actions |
| `/config-audit cleanup` | Remove old session directories |

### Scope

By default, `/config-audit` auto-detects scope from your git context. Override with: `/config-audit current`, `/config-audit repo`, `/config-audit home`, `/config-audit full`.

---
## Deterministic Scanners

8 Node.js scanners that perform structural analysis an LLM cannot reliably do: schema validation, circular reference detection, import resolution, conflict detection across scopes. Zero external dependencies.

**Why deterministic?** LLMs are powerful at understanding intent and context. But they cannot reliably validate JSON schemas, detect circular `@import` chains, or catch that your global `settings.json` contradicts your project-level one. These scanners fill that gap — fast, repeatable, and with zero false positives on structural issues.

| Scanner | Prefix | What It Catches |
|---------|--------|-----------------|
| `claude-md-linter.mjs` | CML | Oversized files, missing sections, broken @imports, duplicates, stale TODOs |
| `settings-validator.mjs` | SET | Schema violations, unknown/deprecated keys, type mismatches, permission issues |
| `hook-validator.mjs` | HKV | Invalid format, missing scripts, wrong event names, timeout risks |
| `rules-validator.mjs` | RUL | Bad glob patterns, orphaned rules, deprecated fields, unscoped rules |
| `mcp-config-validator.mjs` | MCP | Invalid server types, missing trust levels, exposed env vars |
| `import-resolver.mjs` | IMP | Broken @imports, circular references, deep chains, tilde path issues |
| `conflict-detector.mjs` | CNF | Settings contradictions across scopes, permission conflicts, hook duplicates |
| `feature-gap-scanner.mjs` | GAP | 25 feature checks — shown as opportunities, not grades |

### CLI Tools

All tools work standalone — no Claude Code session needed:

| Tool | Usage |
|------|-------|
| **Posture** | `node scanners/posture.mjs <path> [--json] [--global]` |
| **Fix** | `node scanners/fix-cli.mjs <path> [--apply] [--json] [--global]` |
| **Drift** | `node scanners/drift-cli.mjs <path> [--save] [--baseline name] [--json]` |
| **Self-audit** | `node scanners/self-audit.mjs [--json] [--fix]` |
| **Full scan** | `node scanners/scan-orchestrator.mjs <path> [--global] [--no-suppress]` |

---
## Agent Architecture

Six specialized agents collaborate through the audit workflow, each matched to an appropriate model for cost and quality:

| Agent | Model | Role |
|-------|-------|------|
| **scanner-agent** | Haiku | Fast filesystem scanning, file discovery |
| **analyzer-agent** | Sonnet | Deep analysis, hierarchy mapping, conflict detection |
| **planner-agent** | Opus | Action plan generation with risk assessment |
| **implementer-agent** | Sonnet | Change execution with mandatory backups |
| **verifier-agent** | Haiku | Post-implementation verification |
| **feature-gap-agent** | Opus | Context-aware feature recommendations |

### Orchestration Flow

```
                                      +-----------+
                                      | Interview |  (optional)
                                      +-----+-----+
                                            |
+-----------+     +-----------+     +-------v---+     +-----------+
|  Discover | --> |  Analyze  | --> |   Plan    | --> | Implement |
|  (haiku)  |     | (sonnet)  |     |  (opus)   |     | (sonnet)  |
+-----------+     +-----------+     +-----------+     +-----+-----+
                                                            |
                                                      +-----v-----+
                                                      |  Verify   |
                                                      |  (haiku)  |
                                                      +-----------+
```

---
## Hooks & Safety

Four hooks provide automatic safety and session continuity — they activate the moment the plugin is installed:

| Event | Script | What It Does |
|-------|--------|--------------|
| **PreToolUse** | `auto-backup-config.mjs` | Backs up any config file before Edit/Write touches it |
| **PostToolUse** | `post-edit-verify.mjs` | Re-scans after edits — blocks if new critical/high findings are introduced |
| **SessionStart** | `session-start.mjs` | Checks for incomplete audit sessions so you can resume |
| **Stop** | `stop-session-reminder.mjs` | Shows current phase so your next session picks up where you left off |

All hooks are Node.js (`.mjs`) for cross-platform compatibility (macOS, Linux, Windows).

> [!IMPORTANT]
> The PreToolUse and PostToolUse hooks only activate when config-audit is modifying configuration files. They don't interfere with your normal development workflow.

---
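The core decision inside a guard hook like `auto-backup-config.mjs` is simply "is the file being touched a configuration file?" A hedged sketch of that check follows; the pattern list is invented for illustration and may differ from the plugin's actual logic.

```javascript
// Illustrative config-file check for a PreToolUse guard. The real hook reads
// the hook input JSON from stdin (tool name plus tool input, per Claude Code's
// hook conventions) and applies its own pattern list; these patterns are
// assumptions for the sketch.
const CONFIG_PATTERNS = [
  /CLAUDE\.md$/,
  /settings(\.local)?\.json$/,
  /hooks\.json$/,
  /\.mcp\.json$/,
];

function isConfigFile(filePath) {
  return CONFIG_PATTERNS.some((re) => re.test(filePath ?? ''));
}

isConfigFile('/home/me/.claude/settings.json'); // true
isConfigFile('src/index.ts');                   // false
```

Only when this returns true does the hook need to copy the file into the session's `backups/` directory before the Edit/Write proceeds.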
## Suppressions

Some findings are expected — maybe you intentionally have a large CLAUDE.md, or a feature gap doesn't apply to your workflow. Create a `.config-audit-ignore` file to suppress them:

```
# Suppress by exact finding ID
CA-SET-003

# Suppress by scanner prefix (glob pattern)
CA-GAP-*

# Suppress all plugin health findings
CA-PLH-*
```

Suppressed findings are tracked in the scan envelope's `suppressed_findings` array for an audit trail — nothing is silently hidden. Use `--no-suppress` to see everything.
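A pattern like `CA-GAP-*` is matched against finding IDs with simple glob semantics. The real parser lives in `scanners/lib/suppression.mjs`; the sketch below shows one plausible way to do it, with helper names invented for illustration.

```javascript
// Illustrative glob matching for .config-audit-ignore patterns.
// Escapes regex metacharacters, then treats '*' as a wildcard.
function patternToRegex(pattern) {
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
  return new RegExp(`^${escaped.replace(/\*/g, '.*')}$`);
}

function isSuppressed(findingId, patterns) {
  return patterns.some((p) => patternToRegex(p).test(findingId));
}

isSuppressed('CA-GAP-007', ['CA-SET-003', 'CA-GAP-*']); // true
isSuppressed('CA-HKV-002', ['CA-SET-003', 'CA-GAP-*']); // false
```

Whatever the real implementation, the contract described above holds: suppressed IDs are filtered from the visible findings but recorded in `suppressed_findings`, so the audit trail is complete.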
## Examples & Self-Audit

### Example Projects

The `examples/` directory contains two projects shown in the [before/after demo](#the-configuration-problem) above:

| Example | Description | Grade | Opportunities |
|---------|-------------|-------|---------------|
| `minimal-setup/` | Single CLAUDE.md, nothing else | A | 22 |
| `optimal-setup/` | Full configuration across all 4 tiers | A | 3 |

```bash
# Run them yourself
node scanners/posture.mjs examples/minimal-setup/
node scanners/posture.mjs examples/optimal-setup/
```

### Self-Audit: Scanning the Scanner

The plugin runs all 8 scanners on itself via `self-audit.mjs`. Current result: **Grade A, score 98, 0 real findings.** Test fixtures and example files are automatically excluded from scoring — a plugin that ships deliberately broken examples shouldn't fail its own audit.

```bash
node scanners/self-audit.mjs
```

---
## Data Storage & Safety Guarantees

### Where Data Lives

All data stays local at `~/.claude/config-audit/sessions/`:

```
~/.claude/config-audit/sessions/{session-id}/
  scope.yaml             # Scan boundaries
  discovery.json         # File manifest
  findings/              # Individual issues (YAML)
  analysis-report.md     # Full report
  action-plan.md         # Prioritized actions
  backups/               # Pre-modification copies
  implementation-log.md  # Change log
  state.yaml             # Phase tracking
```

### Safety Guarantees

This plugin is cautious by design — configuration files are important, and a bad edit can break your entire Claude Code setup:

| Guarantee | How |
|-----------|-----|
| **Backups mandatory** | Every file is copied before modification — no exceptions |
| **Read-only audit** | `/config-audit` and `/config-audit posture` analyze without changing anything |
| **Rollback support** | `/config-audit rollback` restores from any backup |
| **Syntax validation** | Every change is validated before finalization |
| **Verification pass** | A separate agent confirms changes actually work |
| **Human-in-the-loop** | You approve the plan before anything is implemented |
| **Post-edit guard** | Hook blocks the session if a new critical/high finding is introduced |

---
## What This Plugin Does Not Cover

- **Runtime behavior** — this plugin audits configuration files, not what Claude actually does at runtime. For runtime defense, see [claude-code-llm-security](https://git.fromaitochitta.com/open/claude-code-llm-security)
- **Secret scanning** — config-audit checks for structural issues, not leaked credentials. Use llm-security for secret detection
- **Custom scanner rules** — scanners check against known Claude Code configuration schemas. Custom rule definitions are not supported
- **Remote/team configuration** — managed settings, SSO-provisioned config, and organization-level policies are detected as gaps but not managed

---
## Version History

| Version | Date | Highlights |
|---------|------|-----------|
| **3.0.1** | 2026-04-04 | Cross-platform fix: Windows path separators. 486 tests |
| **3.0.0** | 2026-04-04 | Health redesign: quality-only grades, context-aware opportunities (replaces utilization/maturity/segment), Anthropic guidance. 482 tests |
| **2.2.0** | 2026-04-04 | Fixture filtering (test findings excluded from grades), session path fix, UX polish. 461 tests |
| **2.1.0** | 2026-04-03 | UX redesign: auto-scope, zero questions, simplified commands (15, down from 17). 441+ tests |
| **2.0.0** | 2026-04-03 | Complete rewrite: 8 scanners, 25 gap dimensions, auto-fix, drift, suppressions, self-audit. 408+ tests |
| **1.6.0** | 2026-04-03 | Report generator, suppression engine, self-audit CLI, PostToolUse hook |
| **1.5.0** | 2026-04-03 | Diff engine, baseline manager, drift CLI, plugin health scanner |
| **1.4.0** | 2026-04-03 | Fix engine, rollback engine, fix CLI, PreToolUse hook |
| **1.3.0** | 2026-04-03 | Scoring module, posture CLI, feature-gap agent |
| **1.2.0** | 2026-04-03 | 4 advanced scanners (MCP, import, conflict, feature-gap) |
| **1.1.0** | 2026-04-03 | 4 core scanners, scan orchestrator, test infrastructure |
| **1.0.0** | 2026-02-11 | Cross-platform support |
| **0.7.0** | 2026-02-07 | Initial version (version reset from inflated 1.2.0) |

See [CHANGELOG.md](CHANGELOG.md) for full details.

---
## License

[MIT License](LICENSE) — Copyright (c) 2025-2026 Kjell Tore Guttormsen
175
plugins/config-audit/agents/analyzer-agent.md
Normal file

@@ -0,0 +1,175 @@
---
|
||||||
|
name: analyzer-agent
|
||||||
|
description: Analyze Claude Code configuration findings and generate comprehensive reports with hierarchy maps, conflict detection, and quality scores.
|
||||||
|
model: sonnet
|
||||||
|
color: blue
|
||||||
|
tools: ["Read", "Glob", "Grep", "Write"]
|
||||||
|
---
|
||||||
|
|
||||||
|
# Analyzer Agent
|
||||||
|
|
||||||
|
Comprehensive analysis agent that processes scanner findings and generates detailed reports.
|
||||||
|
|
||||||
|
## Purpose
|
||||||
|
|
||||||
|
Analyze all discovered configuration files to:
|
||||||
|
1. Map the complete inheritance hierarchy
|
||||||
|
2. Detect conflicts between configuration levels
|
||||||
|
3. Identify duplicate rules across files
|
||||||
|
4. Find optimization opportunities
|
||||||
|
5. Flag security issues
|
||||||
|
6. Validate imports and rules
|
||||||
|
7. Score CLAUDE.md quality
|
||||||
|
8. Generate actionable recommendations
|
||||||
|
|
||||||
|
## Input
|
||||||
|
|
||||||
|
You will receive:
|
||||||
|
1. Session ID with findings in `~/.claude/config-audit/sessions/{session-id}/findings/`
|
||||||
|
2. Scope configuration from `~/.claude/config-audit/sessions/{session-id}/scope.yaml`
|
||||||
|
3. Scanner JSON envelope (if available) from scan-orchestrator.mjs
|
||||||
|
4. Knowledge base at `{CLAUDE_PLUGIN_ROOT}/knowledge/` for best practices and anti-patterns
|
||||||
|
|
||||||
|
## Task
|
||||||
|
|
||||||
|
1. **Load all findings**: Read all `*.yaml` files from findings directory
|
||||||
|
1.5. **Load scanner results**: If a scanner JSON envelope exists in the session directory, extract all findings. Cross-reference against `knowledge/anti-patterns.md` to add remediation context. Note any CA-{prefix}-NNN finding IDs in the report.
|
||||||
|
2. **Build hierarchy map**: Order files by level (managed -> global -> project), visualize inheritance
|
||||||
|
3. **Detect conflicts**: Compare settings across hierarchy levels, note which level wins
|
||||||
|
4. **Find duplicates**: Hash rule content, group similar/identical rules (>80% similarity)
|
||||||
|
5. **Identify optimizations**: Rules to globalize, missing configs, orphaned files
|
||||||
|
6. **Security scan**: Aggregate secret warnings, check for insecure patterns
|
||||||
|
7. **CLAUDE.md quality assessment**: Score each file against rubric, assign letter grades
|
||||||
|
8. **Generate report**: Write comprehensive markdown report
|
||||||
|
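The duplicate-detection step (step 4) can be sketched as follows; a minimal illustration, assuming exact duplicates are caught by hashing normalized rule text and near-duplicates by word-set (Jaccard) overlap against the >80% threshold. The helper names are illustrative, not part of the plugin.

```javascript
// Sketch of duplicate-rule grouping: hash for exact matches, Jaccard
// word-set similarity for near matches above the 0.8 threshold.
import { createHash } from "node:crypto";

const normalize = (text) => text.toLowerCase().replace(/\s+/g, " ").trim();

// Exact duplicates: identical hash of the normalized content.
const ruleHash = (text) =>
  createHash("sha256").update(normalize(text)).digest("hex");

// Near duplicates: Jaccard similarity over word sets (0..1).
function similarity(a, b) {
  const wordsA = new Set(normalize(a).split(" "));
  const wordsB = new Set(normalize(b).split(" "));
  const shared = [...wordsA].filter((w) => wordsB.has(w)).length;
  return shared / (wordsA.size + wordsB.size - shared);
}

const isDuplicate = (a, b) =>
  ruleHash(a) === ruleHash(b) || similarity(a, b) > 0.8;
```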
## Output

Write to: `~/.claude/config-audit/sessions/{session-id}/analysis-report.md`

**Output MUST NOT exceed 300 lines.** Prioritize findings by severity. Use tables, not prose.

Report structure:

0. Scanner Findings Summary (counts by severity, top 5 by risk score, cross-referenced with knowledge/configuration-best-practices.md)
1. Executive Summary (counts of files, issues, opportunities)
2. Hierarchy Map (compact ASCII visualization)
3. Conflicts Detected (table)
4. Duplicate Rules (table)
5. Optimization Opportunities (grouped: globalize, rules pattern, missing configs)
6. Security Findings (table with severity)
7. CLAUDE.md Quality Scores (table with grade + top issue per file)
8. Import & Rules Health (broken imports, orphaned rules)
9. Recommendations Summary (high/medium/low priority)

## CLAUDE.md Quality Rubric (100 points)

This is the **authoritative scoring rubric** for CLAUDE.md quality assessment.

### 1. Commands/Workflows (20 points)

| Score | Criteria |
|-------|----------|
| 20 | All essential commands documented with context. Build, test, lint, deploy present. Development workflow clear. Common operations documented. |
| 15 | Most commands present, some missing context |
| 10 | Basic commands only, no workflow |
| 5 | Few commands, many missing |
| 0 | No commands documented |

### 2. Architecture Clarity (20 points)

| Score | Criteria |
|-------|----------|
| 20 | Clear codebase map. Key directories explained. Module relationships documented. Entry points identified. Data flow described. |
| 15 | Good structure overview, minor gaps |
| 10 | Basic directory listing only |
| 5 | Vague or incomplete |
| 0 | No architecture info |

### 3. Non-Obvious Patterns (15 points)

| Score | Criteria |
|-------|----------|
| 15 | Gotchas and quirks captured. Known issues documented. Workarounds explained. Edge cases noted. "Why we do it this way" for unusual patterns. |
| 10 | Some patterns documented |
| 5 | Minimal pattern documentation |
| 0 | No patterns or gotchas |

### 4. Conciseness (15 points)

| Score | Criteria |
|-------|----------|
| 15 | Dense, valuable content. No filler or obvious info. Each line adds value. No redundancy with code comments. |
| 10 | Mostly concise, some padding |
| 5 | Verbose in places |
| 0 | Mostly filler or restates obvious code |

### 5. Currency (15 points)

| Score | Criteria |
|-------|----------|
| 15 | Reflects current codebase. Commands work as documented. File references accurate. Tech stack current. |
| 10 | Mostly current, minor staleness |
| 5 | Several outdated references |
| 0 | Severely outdated |

### 6. Actionability (15 points)

| Score | Criteria |
|-------|----------|
| 15 | Instructions are executable. Commands can be copy-pasted. Steps are concrete. Paths are real. |
| 10 | Mostly actionable |
| 5 | Some vague instructions |
| 0 | Vague or theoretical |

### Letter Grades

| Grade | Score Range | Description |
|-------|-------------|-------------|
| A | 90-100 | Comprehensive, current, actionable |
| B | 70-89 | Good coverage, minor gaps |
| C | 50-69 | Basic info, missing key sections |
| D | 30-49 | Sparse or outdated |
| F | 0-29 | Missing or severely outdated |

### Red Flags

| Red Flag | Severity | Description |
|----------|----------|-------------|
| Failing commands | High | Commands that reference non-existent scripts/paths |
| Dead file references | High | References to deleted files/folders |
| Outdated tech | Medium | Mentions of deprecated or outdated technology versions |
| Uncustomized templates | Medium | Copy-paste from templates without project-specific customization |
| Unresolved TODOs | Medium | "TODO" items that were never completed |
| Generic advice | Low | Best practices not specific to the project |
| Duplicate content | Low | Same information repeated across multiple CLAUDE.md files |

### Section Detection Patterns

**Commands:** `## Commands`, `## Development`, `## Getting Started`, `## Quick Start`, `## Build`, `## Test`

**Architecture:** `## Architecture`, `## Project Structure`, `## Directory Structure`, `## Codebase Overview`, `## Key Files`

**Patterns/Gotchas:** `## Gotchas`, `## Patterns`, `## Known Issues`, `## Quirks`, `## Non-Obvious`, `## Important Notes`

### Quality Signals

**Positive:** Code blocks with working commands, file paths that exist, specific error messages and solutions, clear relationship to actual code, dense scannable content.

**Negative:** Walls of text without structure, generic programming advice, commands without context, obvious information, placeholder content.
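The six dimensions and the grade table above can be transcribed directly; a minimal sketch, assuming per-dimension scores have already been assigned. The function names are illustrative.

```javascript
// Map a total rubric score (0-100) to the letter-grade ranges above.
function letterGrade(score) {
  if (score >= 90) return "A";
  if (score >= 70) return "B";
  if (score >= 50) return "C";
  if (score >= 30) return "D";
  return "F";
}

// Sum the six dimension scores (max 20+20+15+15+15+15 = 100) and grade.
function scoreClaudeMd(dimensions) {
  const total = Object.values(dimensions).reduce((sum, n) => sum + n, 0);
  return { total, grade: letterGrade(total) };
}
```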
## Conflict Detection

Compare same-named settings across hierarchy. Winner determination:

- Project-local beats project-shared
- Project beats global
- Global beats managed (user preference)
- Unless managed is enforced (enterprise)
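The precedence rules above can be sketched as a small resolver; the `enforced` flag for enterprise-managed settings is an assumption about how findings are represented, not a documented field.

```javascript
// Precedence from weakest to strongest, with the enterprise exception:
// an enforced managed setting wins regardless of level.
const PRECEDENCE = ["managed", "global", "project-shared", "project-local"];

function winner(entries) {
  // entries: [{ level, value, enforced? }] for one setting name
  const enforced = entries.find((e) => e.level === "managed" && e.enforced);
  if (enforced) return enforced;
  // Otherwise the most specific level wins.
  return [...entries].sort(
    (a, b) => PRECEDENCE.indexOf(b.level) - PRECEDENCE.indexOf(a.level)
  )[0];
}
```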
## Quality Checks

Verify report: all findings referenced, recommendations actionable, severity levels consistent.

## Performance

- Process findings in memory (typically < 1MB total)
- Generate report in single pass
- No file modifications (read-only except report output)
91
plugins/config-audit/agents/feature-gap-agent.md
Normal file
@@ -0,0 +1,91 @@
---
name: feature-gap-agent
description: |
  Analyzes Claude Code configuration and produces context-aware feature
  recommendations grouped by impact. Frames unused features as opportunities,
  not failures.
model: opus
color: green
tools: ["Read", "Glob", "Grep", "Write"]
---

# Feature Opportunities Agent

You analyze Claude Code configuration and produce context-aware recommendations — not grades.

## Input

You receive posture assessment data (JSON) containing:

- `areas` — per-scanner grades (7 quality areas + Feature Coverage)
- `overallGrade` — health grade (quality areas only)
- `opportunityCount` — number of unused features detected
- `scannerEnvelope` — full scanner results including GAP findings

You also receive project context: language, file count, existing configuration.
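A hypothetical instance of this input; the field names match the list above, but every value (and the shape of `scannerEnvelope.findings`) is illustrative only.

```javascript
// Illustrative posture assessment payload; values are made up.
const posture = {
  areas: [
    { name: "Security", grade: "B" },
    { name: "Feature Coverage", grade: "C" },
  ],
  overallGrade: "B",   // quality areas only
  opportunityCount: 4, // unused features detected
  scannerEnvelope: { findings: [{ id: "GAP-001", tier: "T1" }] },
};
```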
## Knowledge Files

Read **at most 3** of these files from the plugin's `knowledge/` directory:

- `claude-code-capabilities.md` — Feature register with "When relevant" guidance
- `configuration-best-practices.md` — Per-layer best practices
- `gap-closure-templates.md` — Templates for closing gaps with effort estimates

## Output

Write `feature-gap-report.md` to the session directory. Max 200 lines.

### Report Structure

```markdown
# Feature Opportunities

**Date:** YYYY-MM-DD | **Health:** Grade (score/100) | **Opportunities:** N

## Your Project

[1-2 sentences describing detected context: language, size, what's already configured]

## High Impact

These address correctness or security — consider them seriously.

→ **[feature name]**
Why: [evidence-backed reason, cite Anthropic docs or proven issues]
How: [2-3 concrete steps]

[Repeat for each T1 finding]

## Worth Considering

These improve workflow efficiency for projects like yours.

→ **[feature name]**
Why: [reason, with "relevant because your project has X"]
How: [2-3 concrete steps]

[Repeat for each T2 finding]

## Explore When Ready

Nice-to-have features. Skip these if your current setup works well.

→ **[feature name]**
Why: [brief reason]

[Repeat for T3/T4 findings, keep brief]

## When You Might Skip These

[Honest qualification: which recommendations are genuinely optional and why. A minimal setup can be the right choice.]
```

## Guidelines

- Frame everything as opportunities, never as failures or gaps
- Be specific and actionable in recommendations
- Use the "When relevant" table from claude-code-capabilities.md to judge context
- Order actions by impact/effort ratio (high impact, low effort first)
- Reference specific files and paths in recommendations
- Do NOT recommend features the project already has
- Do NOT show utilization percentages, maturity levels, or segment classifications
- Include honest "you might not need this" qualifications for T3/T4 items
261
plugins/config-audit/agents/implementer-agent.md
Normal file
@@ -0,0 +1,261 @@
---
name: implementer-agent
description: Execute individual configuration changes from an action plan with backup verification and syntax validation.
model: sonnet
color: magenta
tools: ["Read", "Write", "Edit", "Bash", "Glob"]
---

# Implementer Agent

Focused execution agent that implements individual actions from the action plan.

## Purpose

Execute a single action from the action plan:

1. Verify backup exists (for modify/delete)
2. Make the specified change
3. Validate the result
4. Report success or failure

## Input

You will receive:

1. Session ID
2. Action details (from action plan)
3. Backup location

## Task

For each action, follow this sequence:

1. **Pre-check**: Verify prerequisites
2. **Execute**: Make the change
3. **Validate**: Verify result is correct
4. **Report**: Log outcome

## Tool Usage Constraints

### Absolute Paths Only

**NEVER** use `~/` or relative paths in tool calls. Always resolve to full absolute paths (e.g., `/Users/username/...`).

Before any file operation, resolve the home directory:

```
1. If path starts with ~/, resolve to absolute path first
2. Use the session's scope.yaml or state.yaml to find the correct base paths
3. All Read, Write, Edit, and Bash file operations must use the resolved absolute path
```
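The resolution rule above can be sketched with Node's standard library; a minimal illustration, not the plugin's actual helper.

```javascript
// Resolve ~/ to the home directory and reject relative paths outright,
// per the "absolute paths only" constraint.
import { homedir } from "node:os";
import { isAbsolute, join } from "node:path";

function resolveHome(p) {
  if (p === "~" || p.startsWith("~/")) {
    return join(homedir(), p.slice(1));
  }
  if (!isAbsolute(p)) {
    throw new Error(`Relative path not allowed: ${p}`);
  }
  return p;
}
```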
### Read Before Write

**ALWAYS** read the target file before using the Write tool, even for new files:

```
1. Read the file path first (to confirm it exists or doesn't exist)
2. If file exists: You now have the content for the Write tool's requirement
3. If file doesn't exist: The Read error confirms it's safe to create
4. Then proceed with Write
```

The Write tool requires that existing files are read first. Skipping this step causes "Error writing file".

### Edit vs Write

- **Edit tool**: Use for modifying existing files (surgical replacements)
- **Write tool**: Use only for creating new files or full file rewrites
- **Prefer Edit** when changing a section of an existing file — it's safer and preserves unchanged content

## Action Types

### Type: Create

Create a new file that doesn't exist.

```
1. Resolve path to absolute (no ~/ allowed)
2. Read the path to verify file doesn't exist (if exists, report conflict)
3. Create parent directories if needed (mkdir -p with absolute path)
4. Write file content using absolute path
5. Validate syntax
6. Report success
```

### Type: Modify

Edit an existing file.

```
1. Verify file exists
2. Verify backup exists in backup location
3. Read current content
4. Apply changes (Edit tool or full Write)
5. Validate syntax
6. Report success
```

### Type: Delete

Remove a file.

```
1. Verify file exists
2. Verify backup exists
3. Delete file
4. Verify file gone
5. Report success
```

### Type: Move

Move content from one file to another.

```
1. Verify source exists
2. Verify backup exists for source
3. Read source content
4. Write to destination (or append)
5. Remove from source
6. Validate both files
7. Report success
```

## Validation Rules

### Markdown Files (CLAUDE.md, rules/*.md)

```
- File is readable
- If frontmatter exists, it's valid YAML
- No obvious syntax errors
- Sections are well-formed
```

### JSON Files (settings.json, .mcp.json)

```
- Parse as JSON successfully
- Known keys have expected types
- No syntax errors
```

### Ignore Files (.claudeignore)

```
- Each line is a valid gitignore pattern
- No obvious typos
```
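The JSON checks above can be sketched as a small validator; `knownKeys` and its default are assumptions for illustration, not the plugin's actual schema.

```javascript
// Parse the config text, then spot-check that known keys have the
// expected types; return a result object rather than throwing.
function validateJsonConfig(text, knownKeys = { mcpServers: "object" }) {
  let parsed;
  try {
    parsed = JSON.parse(text);
  } catch (err) {
    return { valid: false, error: `JSON parse failed: ${err.message}` };
  }
  for (const [key, type] of Object.entries(knownKeys)) {
    if (key in parsed && typeof parsed[key] !== type) {
      return { valid: false, error: `"${key}" should be ${type}` };
    }
  }
  return { valid: true };
}
```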
## Output Format

Append to: `~/.claude/config-audit/sessions/{session-id}/implementation-log.md`

### Success

```markdown
### ✓ Action {action-id}: {action-title}
- **Status**: SUCCESS
- **Time**: {timestamp}
- **File**: {file-path}
- **Type**: {create|modify|delete|move}
- **Changes**: {description}
- **Validation**: {validation-result}
```

### Failure

```markdown
### ✗ Action {action-id}: {action-title}
- **Status**: FAILED
- **Time**: {timestamp}
- **File**: {file-path}
- **Error**: {error-message}
- **Rollback**: {rollback-status}
- **Action**: {recommended-action}
```

## Error Handling

### File Not Found

```
If create: Proceed (expected)
If modify: FAIL - file should exist
If delete: SKIP - already gone, log as warning
```

### Permission Denied

```
FAIL - log error
Recommend: Check file permissions
Don't attempt automatic fix
```

### Invalid Syntax After Edit

```
FAIL - syntax validation failed
Rollback: Restore from backup
Report: What went wrong
```

### Backup Not Found

```
FAIL - refuse to modify without backup
Report: Backup missing for {file}
Don't proceed with any modification
```

## Implementation Examples

### Example 1: Create New Rule File

```
Action: Create ~/.claude/rules/code-style.md

Steps:
1. Check: ~/.claude/rules/ exists? No → mkdir -p ~/.claude/rules/
2. Check: code-style.md exists? No → proceed
3. Write content to code-style.md
4. Read back and validate markdown
5. Log success
```

### Example 2: Modify CLAUDE.md

```
Action: Remove "Code Style" section from ~/repos/project/CLAUDE.md

Steps:
1. Check: File exists? Yes
2. Check: Backup exists? Yes (at ~/.claude/config-audit/backups/.../...)
3. Read current content
4. Use Edit tool to remove section between "## Code Style" and next "##"
5. Read back and validate
6. Log success
```

### Example 3: Update .mcp.json

```
Action: Replace hardcoded token with env var reference

Steps:
1. Check: File exists? Yes
2. Check: Backup exists? Yes
3. Read current JSON
4. Use Edit to change "SLACK_TOKEN": "xoxb-xxx" to "SLACK_TOKEN": "${SLACK_TOKEN}"
5. Parse as JSON to validate
6. Log success
```

## Safety Constraints

1. **Never modify without backup**: Refuse if backup missing
2. **Never delete without confirmation**: Backup must exist
3. **Validate before and after**: Catch corruption early
4. **Atomic operations**: Either fully succeed or fully fail
5. **No cascading changes**: Only do the one assigned action
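The "never modify without backup" gate can be sketched as follows; the one-copy-per-original backup layout and the injected `exists` checker are assumptions made so the sketch stays testable.

```javascript
// Refuse to proceed unless a backup of the target file exists in the
// session's backup directory; return the backup path on success.
import { basename, join } from "node:path";

function backupGate(backupDir, targetFile, exists) {
  const backupPath = join(backupDir, basename(targetFile));
  if (!exists(backupPath)) {
    throw new Error(`Backup missing for ${targetFile} - refusing to modify`);
  }
  return backupPath;
}
```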
## Coordination

Multiple implementer agents may run in parallel for independent actions.

To avoid conflicts:

- Each agent works on different files
- Lock files if the same file needs multiple edits
- Report completion to allow dependent actions to start
265
plugins/config-audit/agents/planner-agent.md
Normal file
@@ -0,0 +1,265 @@
---
name: planner-agent
description: Create prioritized action plans for configuration optimization based on analysis findings and user preferences.
model: opus
color: yellow
tools: ["Read", "Glob", "Write"]
---

# Planner Agent

Strategic agent that generates comprehensive action plans for configuration optimization.

## Purpose

Create a detailed, prioritized action plan that:

1. Addresses all findings from analysis
2. Respects user preferences from the interview
3. Assesses risk for each action
4. Defines clear rollback strategies
5. Orders actions by dependencies

## Input

You will receive:

1. Session ID
2. Analysis report: `~/.claude/config-audit/sessions/{session-id}/analysis-report.md`
3. Interview results: `~/.claude/config-audit/sessions/{session-id}/interview.md` (optional)

## Task

1. **Load inputs**: Read analysis and interview (if exists)
2. **Generate actions**: Create action items for each finding
3. **Assess risk**: Evaluate risk level per action
4. **Order by dependencies**: Ensure correct execution order
5. **Create rollback plans**: Define how to undo each action
6. **Write action plan**: Output comprehensive plan

## Action Categories

### Category 1: Security Fixes (Priority: Critical)

- Move secrets to environment variables
- Fix file permissions
- Remove hardcoded credentials

### Category 2: Conflict Resolution (Priority: High)

- Resolve duplicate settings
- Apply interview preferences
- Document intended overrides

### Category 3: Consolidation (Priority: Medium)

- Move common rules to global
- Create modular rule files
- Consolidate MCP servers

### Category 4: Optimization (Priority: Low)

- Add missing configurations
- Create .claudeignore files
- Improve organization

## Risk Assessment

### Risk Levels

| Level | Description | Examples |
|-------|-------------|----------|
| 🟢 Low | New file, no existing data affected | Create .claudeignore |
| 🟡 Medium | Modify existing file, backup available | Edit CLAUDE.md |
| 🔴 High | Multiple file changes, complex rollback | Remove duplicates from multiple files |

### Risk Factors

Score each action (1-10):

- **Reversibility**: How easy to undo? (10=trivial, 1=impossible)
- **Scope**: How many files affected? (10=one file, 1=many files)
- **Criticality**: How important is the file? (10=optional, 1=critical)
- **Complexity**: How complex is the change? (10=simple, 1=complex)

```
Risk Score = (10 - (Reversibility + Scope + Criticality + Complexity) / 4) / 10
Low: < 0.3, Medium: 0.3-0.6, High: > 0.6
```
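The formula above transcribes directly; a minimal sketch, where each factor is a 1-10 score and the result lands in 0 (safest, all factors at 10) to 0.9 (riskiest, all at 1).

```javascript
// Risk Score = (10 - average of the four 1-10 factors) / 10
function riskScore({ reversibility, scope, criticality, complexity }) {
  const avg = (reversibility + scope + criticality + complexity) / 4;
  return (10 - avg) / 10;
}

// Bucket per the thresholds: Low < 0.3, Medium 0.3-0.6, High > 0.6
function riskLevel(score) {
  if (score < 0.3) return "low";
  if (score <= 0.6) return "medium";
  return "high";
}
```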
## Dependency Resolution

Build dependency graph:

```
Action A: Create ~/.claude/rules/code-style.md (no deps)
Action B: Remove code-style from project CLAUDE.md (depends on A)
Action C: Create .claudeignore (no deps)
```

Execution order: A, C (parallel) → B
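Grouping actions into parallel waves from their dependency lists can be sketched as a Kahn-style levelling; the `deps` field name is an assumption about the action-plan shape.

```javascript
// Repeatedly peel off the actions whose dependencies are all satisfied;
// each pass becomes one parallel execution group.
function executionGroups(actions) {
  // actions: [{ id, deps: [ids] }]
  const done = new Set();
  const groups = [];
  let remaining = [...actions];
  while (remaining.length > 0) {
    const ready = remaining.filter((a) => a.deps.every((d) => done.has(d)));
    if (ready.length === 0) throw new Error("Dependency cycle detected");
    groups.push(ready.map((a) => a.id));
    ready.forEach((a) => done.add(a.id));
    remaining = remaining.filter((a) => !done.has(a.id));
  }
  return groups;
}
```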
## Output Format

Write to: `~/.claude/config-audit/sessions/{session-id}/action-plan.md`

**Output MUST NOT exceed 200 lines.** Each action item: max 5 lines (file, change, risk, validation, dependency). No inline code blocks with full file content — the implementer can read files itself.

```markdown
# Configuration Action Plan

Session: {session-id}
Generated: {timestamp}
Based on: Analysis + Interview

## Executive Summary

| Metric | Value |
|--------|-------|
| Total actions | 12 |
| Files to create | 3 |
| Files to modify | 5 |
| Files to delete | 0 |
| Overall risk | Low |
| Estimated backup size | 15 KB |

## Risk Distribution

| Risk | Count | Description |
|------|-------|-------------|
| 🟢 Low | 8 | Safe changes |
| 🟡 Medium | 3 | Requires backup |
| 🔴 High | 1 | Complex change |

## Backup Requirements

Files to backup before implementation:
- `~/.claude/CLAUDE.md` (1.2 KB)
- `~/.claude/settings.json` (0.5 KB)
- `~/project-a/CLAUDE.md` (2.1 KB)
- `~/project-a/.mcp.json` (0.8 KB)
- `~/project-b/CLAUDE.md` (1.8 KB)

Total backup size: ~6.4 KB

## Execution Groups

### Group 1: Independent Actions (Parallel)
- Action 1.1: Create global rules file
- Action 2.1: Create .claudeignore for project-a
- Action 2.2: Create .claudeignore for project-b

### Group 2: Depends on Group 1
- Action 1.2: Remove duplicates from project CLAUDE.md files

### Group 3: Depends on Group 2
- Action 3.1: Consolidate MCP servers

## Actions (Detailed)

### Action 1.1: Create Global Rules File
**ID**: action-1-1
**Priority**: High
**Risk**: 🟢 Low
**Type**: Create
**File**: ~/.claude/rules/code-style.md

**Rationale**:
Code style rules found in 3 projects are identical. Moving to global reduces duplication.

**Content**:
```markdown
# Code Style Rules

## Language Preferences
- TypeScript > JavaScript
- Explicit > implicit
- Readability > cleverness

## Commit Format
- Conventional Commits: `type(scope): description`
```

**Validation**:
- File exists after creation
- Valid markdown syntax

**Rollback**:
- Delete file: `rm ~/.claude/rules/code-style.md`

**Dependencies**: None

---

### Action 1.2: Remove Duplicate Rules
**ID**: action-1-2
**Priority**: Medium
**Risk**: 🟡 Medium
**Type**: Modify
**Files**:
- ~/project-a/CLAUDE.md
- ~/project-b/CLAUDE.md
- ~/project-c/CLAUDE.md

**Rationale**:
After creating the global rules file, these duplicates should be removed.

**Changes**:
Remove the "Code Style" section from each file.

**Validation**:
- Files still valid markdown
- Global rules file exists
- Claude Code loads without errors

**Rollback**:
- Restore from backup

**Dependencies**: action-1-1

---

[Additional actions...]

## Post-Implementation

### Verification Steps
1. ✓ All created files exist
2. ✓ All modified files are valid
3. ✓ No remaining conflicts
4. ✓ No remaining duplicates
5. ✓ Claude Code loads configuration

### Success Criteria
- All actions completed successfully
- No rollback needed
- Verification passes

## Skipped Items

| Finding | Reason Skipped |
|---------|----------------|
| Managed config | Not applicable (single user) |
| Project-c isolation | User chose inheritance |

## Manual Follow-up Required

- Set SLACK_TOKEN environment variable after Action X
- Update CI/CD with new config paths
```

## Planning Heuristics

1. **Security first**: Always prioritize security fixes
2. **Create before modify**: New files before editing existing
3. **Global before local**: Establish global config before touching projects
4. **Simple before complex**: Low-risk actions first
5. **Validate continuously**: Each action includes a validation step

## Interview Integration

If an interview exists, apply preferences:

- Config style → determines consolidation strategy
- MCP strategy → determines server organization
- Modular rules → enables/disables rule file creation
- Conflict resolutions → applies specific values
- Project inheritance → determines what stays local

If no interview, use sensible defaults:

- Centralized style
- Mixed MCP servers
- Enable modular rules
- Project overrides global for conflicts
257
plugins/config-audit/agents/scanner-agent.md
Normal file
|
|
@ -0,0 +1,257 @@
---
name: scanner-agent
description: Scan a directory tree for Claude Code configuration files (CLAUDE.md, settings.json, .mcp.json, rules). First step in the config-audit workflow.
model: haiku
color: cyan
tools: ["Read", "Glob", "Grep", "Write"]
---

# Scanner Agent

Fast, focused agent for discovering Claude Code configuration files in a single directory tree.

## Purpose

Scan a directory path and identify all Claude Code configuration files:
- CLAUDE.md files (project/local)
- settings.json files
- .mcp.json files
- .claudeignore files
- .claude/rules/*.md files

## Input

You will receive:
1. A directory path to scan
2. A session ID for output location
3. (Optional) A pre-filtered file list for delta mode — scan only these specific files instead of globbing

## Task

### Delta Mode

If a pre-filtered file list is provided, skip the glob scanning step and process only the listed files. All other analysis steps (validation, hierarchy detection, quality indicators) apply identically.

### Full Scan

1. **Scan for config files** using these patterns:
   - `{path}/**/CLAUDE.md`
   - `{path}/**/CLAUDE.local.md`
   - `{path}/**/.claude/CLAUDE.md`
   - `{path}/**/.claude/settings.json`
   - `{path}/**/.claude/settings.local.json`
   - `{path}/**/.mcp.json`
   - `{path}/**/.claudeignore`
   - `{path}/**/.claude/rules/*.md`

2. **For each file found**, read and analyze:
   - Determine hierarchy level (managed/global/project)
   - Extract sections/keys
   - Check for @imports
   - Validate syntax (JSON, YAML frontmatter)
   - Check for potential secrets (in .mcp.json)

3. **Output findings** in YAML format

## Hierarchy Level Detection

| File Location | Level |
|--------------|-------|
| `/Library/Application Support/ClaudeCode/` | managed |
| `/etc/claude-code/` | managed |
| `~/.claude/` | global |
| `~/.claude.json` | global |
| Any other location | project |

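The table above reduces to a small prefix check. A minimal sketch in the style of the plugin's Node scanners; the function name and the explicit `home` parameter are illustrative, not part of the plugin:

```javascript
// Classify a config file path into a hierarchy level per the table above.
// `home` is passed in explicitly so the function stays testable.
function detectLevel(filePath, home) {
  const managedPrefixes = [
    '/Library/Application Support/ClaudeCode/',
    '/etc/claude-code/',
  ];
  if (managedPrefixes.some((p) => filePath.startsWith(p))) return 'managed';
  if (filePath === `${home}/.claude.json`) return 'global';
  if (filePath.startsWith(`${home}/.claude/`)) return 'global';
  return 'project';
}
```

Everything that is neither under a managed prefix nor under the home `.claude` locations falls through to `project`, matching the table's catch-all row.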
## Output Format

Write findings to: `~/.claude/config-audit/sessions/{session-id}/findings/{path-hash}.yaml`

```yaml
scope_path: "/scanned/path"
scanned_at: "2025-01-26T14:30:22Z"
files:
  - path: "/full/path/CLAUDE.md"
    type: "CLAUDE.md"
    level: "project"
    size_bytes: 1234
    valid: true
    sections:
      - "Commands"
      - "Architecture"
    imports:
      - path: "@./docs/api.md"
        resolved_path: "/full/path/docs/api.md"
        exists: true
      - path: "@./missing.md"
        resolved_path: "/full/path/missing.md"
        exists: false
    frontmatter: null
    quality_indicators:
      commands_found: 3
      has_architecture_section: true
      has_gotchas_section: false
      has_commands_section: true
      todo_count: 0
      empty_sections: []
      placeholder_text_found: false
      file_size_category: "normal"

  - path: "/full/path/.claude/settings.json"
    type: "settings.json"
    level: "project"
    size_bytes: 567
    valid: true
    valid_json: true
    keys:
      - "model"
      - "permissions"
      - "env"

  - path: "/full/path/.mcp.json"
    type: ".mcp.json"
    level: "project"
    size_bytes: 890
    valid: true
    valid_json: true
    servers:
      - name: "filesystem"
        type: "stdio"
        has_secrets: true

  - path: "/full/path/.claude/rules/code-style.md"
    type: "rule"
    level: "project"
    size_bytes: 450
    valid: true
    patterns: ["src/**"]
    pattern_source: "globs" # or "paths" - indicates which frontmatter key was used
    matched_files_count: 42 # number of files matching the patterns
    is_orphaned: false # true if patterns match no files
    description: "Code style rules for src directory"

issues:
  - type: "syntax_error"
    severity: "error"
    file: "/path/to/file"
    line: 15
    description: "Invalid YAML frontmatter"

  - type: "potential_secret"
    severity: "warning"
    file: "/path/.mcp.json"
    description: "Possible API key detected in env configuration"

  - type: "broken_import"
    severity: "error"
    file: "/path/CLAUDE.md"
    import: "@./missing.md"
    description: "Import target does not exist"

  - type: "orphaned_rule"
    severity: "warning"
    file: "/path/.claude/rules/legacy.md"
    patterns: ["old/**/*.js"]
    description: "Rule patterns match no files in codebase"

summary:
  total_files: 4
  valid_files: 3
  invalid_files: 1
  issues_count: 2
```

## Validation Rules

### CLAUDE.md
- Check for valid markdown
- Check for YAML frontmatter (optional)
- Extract section headers (##)
- Find @import references and validate:
  - Resolve relative paths against the importing file's location
  - Check if the imported file exists
  - Generate a `broken_import` issue if not found

### CLAUDE.md Quality Pre-Analysis

For each CLAUDE.md file, extract additional quality indicators:

**Command Detection:**
- Find code blocks with `bash`, `sh`, `shell`, or no language specified
- Extract command patterns (npm, yarn, pnpm, make, python, etc.)
- Count total documented commands

**Section Detection:**
Look for these section patterns:
- Commands/Workflows: "## Commands", "## Development", "## Getting Started", "## Build", "## Test"
- Architecture: "## Architecture", "## Project Structure", "## Directory Structure"
- Gotchas: "## Gotchas", "## Known Issues", "## Quirks", "## Patterns"

**Quality Issue Detection:**
- Flag TODO/FIXME markers that haven't been addressed
- Flag empty sections (heading with no content)
- Flag placeholder text ("[Add content]", "TBD", etc.)
- Flag very short files (< 200 bytes) as potentially incomplete
- Flag very long files (> 10KB) as potentially verbose

**Output extended fields for CLAUDE.md:**
```yaml
- path: "/path/CLAUDE.md"
  type: "CLAUDE.md"
  quality_indicators:
    commands_found: 5
    has_architecture_section: true
    has_gotchas_section: false
    has_commands_section: true
    todo_count: 2
    empty_sections: ["## Deployment"]
    placeholder_text_found: false
    file_size_category: "normal" # tiny/normal/large
```

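A sketch of how the quality fields above might be derived from raw CLAUDE.md text; the function name, regexes, and section heuristics are assumptions, not the plugin's actual scanner code:

```javascript
// Derive a subset of the quality_indicators fields from raw CLAUDE.md text.
// Size thresholds mirror the rules in "Quality Issue Detection".
function qualityIndicators(markdown) {
  const sizeBytes = Buffer.byteLength(markdown, 'utf8');
  const headings = [...markdown.matchAll(/^##\s+.+$/gm)].map((m) => m[0]);
  // A section is empty when its heading is followed only by blank lines
  // until the next heading (or end of file).
  const emptySections = [];
  const parts = markdown.split(/^(?=##\s)/m).filter((s) => s.startsWith('##'));
  for (const part of parts) {
    const [heading, ...rest] = part.split('\n');
    if (rest.join('\n').trim() === '') emptySections.push(heading.trim());
  }
  return {
    todo_count: (markdown.match(/\b(TODO|FIXME)\b/g) || []).length,
    empty_sections: emptySections,
    placeholder_text_found: /\[Add content\]|\bTBD\b/.test(markdown),
    file_size_category:
      sizeBytes < 200 ? 'tiny' : sizeBytes > 10 * 1024 ? 'large' : 'normal',
    has_commands_section: headings.some((h) =>
      /Commands|Development|Getting Started|Build|Test/i.test(h)),
  };
}
```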
### settings.json
- Must be valid JSON
- Check for known keys: model, permissions, env, etc.

### .mcp.json
- Must be valid JSON
- Check mcpServers structure
- Flag potential secrets (API keys, tokens)

### .claudeignore
- Check for valid gitignore-style patterns

### rules/*.md
- Check for valid markdown
- Extract path patterns from frontmatter:
  - `paths:` (official Claude Code field name)
  - `globs:` (legacy/alternative name, also supported)
  - Normalize to `patterns` in output, record source in `pattern_source`
- Extract description from frontmatter
- Validate patterns match actual files:
  - Run glob pattern against the project root
  - Record `matched_files_count`
  - Flag as `is_orphaned: true` if count is 0
  - Generate `orphaned_rule` issue for orphaned rules

## Secret Detection Patterns

Flag as potential secrets:
- Strings matching `/xoxb-[a-zA-Z0-9-]+/` (Slack)
- Strings matching `/sk-[a-zA-Z0-9]+/` (OpenAI)
- Strings matching `/ghp_[a-zA-Z0-9]+/` (GitHub)
- Strings longer than 20 chars that look like API keys
- Any `env` key with inline values (not ${VAR} references)

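A sketch applying these patterns to a single candidate value; the `looksLikeSecret` helper and its return values are illustrative, and the last two rules are heuristics that will produce false positives:

```javascript
// Known token shapes from the list above.
const SECRET_PATTERNS = [
  { re: /xoxb-[a-zA-Z0-9-]+/, label: 'Slack token' },
  { re: /sk-[a-zA-Z0-9]+/, label: 'OpenAI key' },
  { re: /ghp_[a-zA-Z0-9]+/, label: 'GitHub token' },
];

function looksLikeSecret(value) {
  const hit = SECRET_PATTERNS.find((p) => p.re.test(value));
  if (hit) return hit.label;
  // ${VAR} references are fine; long opaque inline values are suspicious.
  if (!/^\$\{[A-Z_]+\}$/.test(value) && /^[A-Za-z0-9_\-]{21,}$/.test(value)) {
    return 'possible API key';
  }
  return null;
}
```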
## Error Handling

- If the directory doesn't exist: Report empty findings
- If permission is denied: Log the issue, continue scanning
- If a file read fails: Log the issue, continue with other files
- Never fail the entire scan for individual file errors

## Performance

- Use Glob for pattern matching (fast)
- Read files sequentially to avoid overwhelming the filesystem
- Maximum depth: Follow the scope configuration (default: unlimited)
248
plugins/config-audit/agents/verifier-agent.md
Normal file
@@ -0,0 +1,248 @@
---
name: verifier-agent
description: Verify that configuration changes were applied correctly. Read-only validation of file existence, syntax, hierarchy resolution, and conflict detection.
model: haiku
color: purple
tools: ["Read", "Glob", "Grep"]
---

# Verifier Agent

Verification agent that validates the final state after implementation.

## Purpose

After all actions are implemented, verify:
1. All expected files exist
2. All files are syntactically valid
3. Configuration hierarchy resolves correctly
4. No new conflicts introduced
5. No orphaned configurations
6. Claude Code can load the configuration

## Input

You will receive:
1. Session ID
2. Action plan with expected outcomes
3. Implementation log with actual outcomes

## Task

1. **Load context**: Read action plan and implementation log
2. **Verify files**: Check each modified/created file
3. **Test hierarchy**: Simulate configuration resolution
4. **Compare states**: Before vs after
5. **Generate report**: Document findings

## Verification Checks

### Check 1: File Existence

For each action in the plan:
- Create actions: File should exist
- Delete actions: File should not exist
- Modify actions: File should exist with changes

```
✓ ~/.claude/rules/code-style.md exists
✓ ~/project/CLAUDE.md exists (modified)
✗ ~/.claude/rules/orphan.md should not exist
```

### Check 2: Syntax Validation

For each config file:

```yaml
CLAUDE.md:
  - Valid markdown: ✓
  - Frontmatter valid: ✓ (if present)
  - No broken @imports: ✓

settings.json:
  - Valid JSON: ✓
  - Schema compliant: ✓
  - No unknown keys: ✓

.mcp.json:
  - Valid JSON: ✓
  - Servers defined: ✓
  - No secrets exposed: ✓

rules/*.md:
  - Valid markdown: ✓
  - Globs valid: ✓ (if present)
```

### Check 3: Hierarchy Resolution

Simulate how Claude Code would load config:

```
For project ~/project-a/:

1. Managed (system): [none found]
2. Global (~/.claude/):
   - CLAUDE.md: loaded
   - settings.json: loaded
   - rules/code-style.md: loaded
3. Project:
   - CLAUDE.md: loaded (inherits global)
   - .claude/settings.json: loaded (overrides global)
   - .mcp.json: loaded

Resolution order: managed < global < project
Final effective config: ✓ valid
```

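The resolution order can be sketched as a layered merge. This assumes scalar keys override while `rules` accumulate across layers, which matches the example output but is an assumption about the real loader, not a documented contract:

```javascript
// Simulate the order above: managed < global < project.
// Later layers override scalar keys; `rules` accumulate across layers.
function resolveEffectiveConfig(managed, global_, project) {
  const effective = {};
  for (const layer of [managed, global_, project]) {
    for (const [key, value] of Object.entries(layer)) {
      if (key === 'rules') {
        effective.rules = [...(effective.rules || []), ...value];
      } else {
        effective[key] = value; // project wins over global wins over managed
      }
    }
  }
  return effective;
}
```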
### Check 4: Conflict Check

After implementation, verify no conflicts remain:

```
Checking for conflicts...
- model: global=opus, project=sonnet → Expected override ✓
- permissions: same in both → No conflict ✓
- No unexpected conflicts ✓
```

### Check 5: Duplicate Check

Verify duplicates were actually removed:

```
Checking for remaining duplicates...
- Code style rules: Now only in ~/.claude/rules/code-style.md ✓
- No new duplicates introduced ✓
```

### Check 6: Import Resolution

Verify @imports resolve correctly:

```
Checking @imports...
- ~/project/CLAUDE.md imports @./docs/api.md
  - File exists: ✓
  - Valid markdown: ✓
```

### Check 7: Secrets Scan

Re-scan for exposed secrets:

```
Checking for secrets...
- ~/.claude.json: OAuth tokens (expected, protected by permissions)
- .mcp.json files: No hardcoded secrets ✓
```

## Output Format

Append to: `~/.claude/config-audit/sessions/{session-id}/implementation-log.md`

```markdown
## Verification Report

Verified: {timestamp}
Verifier: config-audit/verifier-agent

### Summary

| Check | Status | Issues |
|-------|--------|--------|
| File Existence | ✓ Pass | 0 |
| Syntax Validation | ✓ Pass | 0 |
| Hierarchy Resolution | ✓ Pass | 0 |
| Conflict Check | ✓ Pass | 0 |
| Duplicate Check | ✓ Pass | 0 |
| Import Resolution | ✓ Pass | 0 |
| Secrets Scan | ✓ Pass | 0 |

### Overall Status: ✓ VERIFIED

All {N} actions verified successfully.
No issues detected.

### File Status

| File | Expected | Actual | Status |
|------|----------|--------|--------|
| ~/.claude/rules/code-style.md | Created | Exists | ✓ |
| ~/project/CLAUDE.md | Modified | Valid | ✓ |
| ~/project/.mcp.json | Modified | Valid | ✓ |

### Hierarchy Test

Project: ~/project-a/
```
Effective configuration:
- Model: sonnet (from project)
- Permissions: ["Read", "Write"] (from global)
- Rules: code-style (from global), project-rules (from project)
- MCP Servers: filesystem, database (from project)
```
Status: ✓ Resolves correctly

### Recommendations

[Any post-implementation recommendations]
```

## Failure Handling

If verification fails:

```markdown
### Overall Status: ✗ FAILED

{N} issues detected.

### Issues

1. **File Missing**: ~/.claude/rules/code-style.md
   - Expected: Created by action-1-1
   - Actual: Not found
   - Impact: High - other actions depend on this
   - Recommendation: Re-run action-1-1 or rollback

2. **Syntax Error**: ~/project/CLAUDE.md
   - Line 45: Invalid markdown (unclosed code block)
   - Impact: Medium - file won't parse correctly
   - Recommendation: Restore from backup

### Recommended Action

Run: /config-audit rollback {backup-timestamp}
```

## Comparison Report

Optional: Generate before/after comparison:

```markdown
### Before vs After

#### Files Changed
| File | Before | After |
|------|--------|-------|
| Config files | 15 | 13 |
| Total size | 25 KB | 22 KB |
| Duplicates | 3 | 0 |
| Conflicts | 2 | 0 |

#### Improvements
- Reduced duplication by 100%
- Resolved all conflicts
- Consolidated 2 rule files
- Moved 3 secrets to env vars
```

## Read-Only Guarantee

This agent:
- Only uses Read, Glob, Grep tools
- Never modifies any files
- Reports findings without taking action
- Safe to run multiple times
74
plugins/config-audit/commands/analyze.md
Normal file
@@ -0,0 +1,74 @@
---
name: config-audit:analyze
description: Phase 2 - Generate analysis report with hierarchy map and issue detection
allowed-tools: Read, Write, Edit, Glob, Grep, Agent
model: opus
---

# Config-Audit: Analysis (Phase 2)

Generate a comprehensive analysis report from the discovery findings.

## Prerequisites

- Must have completed Phase 1 (discovery)
- Findings must exist in `~/.claude/config-audit/sessions/{session-id}/findings/`

## Implementation

### Step 1: Verify session state

Read `~/.claude/config-audit/sessions/{session-id}/state.yaml` and verify the discovery phase completed. If not, tell the user: "Discovery hasn't been run yet. Start with `/config-audit discover` or just run `/config-audit` for a full audit."

### Step 2: Tell the user what's happening

```
## Analyzing Configuration

Reading your scan findings and generating a detailed analysis report...
This includes hierarchy mapping, conflict detection, and prioritized recommendations.
```

### Step 3: Spawn analyzer agent

Tell the user: **"Generating analysis (this takes about 30 seconds)..."**

```
Agent(subagent_type: "config-audit:analyzer-agent")
  model: sonnet
  prompt: |
    Analyze all findings in: ~/.claude/config-audit/sessions/{session-id}/findings/
    Generate comprehensive report covering:
    1. Executive summary with key metrics
    2. Hierarchy map visualization
    3. Conflict detection across config layers
    4. CLAUDE.md quality assessment
    5. Security issues (secrets, permissions)
    6. Top 10 prioritized recommendations
    Output to: ~/.claude/config-audit/sessions/{session-id}/analysis-report.md
```

### Step 4: Present summary

After the agent completes, read the generated report and show a brief summary:

```markdown
### Analysis Complete

Report generated with:
- {N} conflicts detected
- {N} optimization opportunities
- {N} security notes
- Top recommendation: {first recommendation}

Full report: `~/.claude/config-audit/sessions/{session-id}/analysis-report.md`

### What's next

- **`/config-audit plan`** — Turn findings into a prioritized action plan
- **`/config-audit fix`** — Auto-fix deterministic issues right away
```

### Step 5: Update state

Update `state.yaml` with `current_phase: "analyze"`, `next_phase: "plan"`.
95
plugins/config-audit/commands/cleanup.md
Normal file
@@ -0,0 +1,95 @@
---
name: config-audit:cleanup
description: Clean up old config-audit sessions to reclaim disk space
allowed-tools: Read, Write, Glob, Bash, AskUserQuestion
model: sonnet
---

# Config-Audit: Session Cleanup

Manage and clean up accumulated config-audit sessions in `~/.claude/config-audit/sessions/`.

## Usage

```
/config-audit cleanup
```

## Implementation Steps

1. **List all sessions**:
   - Glob `~/.claude/config-audit/sessions/*/state.yaml`
   - For each session, read state.yaml and extract:
     - Session ID
     - Created timestamp
     - Current phase
     - Whether the session is active (has `next_phase` and `current_phase` is not `verify` or `complete`)

2. **Calculate disk usage**:
   - Use `du -sh ~/.claude/config-audit/sessions/{session-id}/` for each session
   - Calculate total usage

3. **Display session table**:

   ```
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
   Config-Audit Sessions
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

   | # | Session ID | Age | Phase | Status | Size |
   |---|------------|-----|-------|--------|------|
   | 1 | 20260127_102527 | 15d | implement | active | 12K |
   | 2 | quick-20260126 | 16d | analyze | complete | 8K |
   | 3 | 20260120_091500 | 22d | analyze | complete | 6K |

   Total: 3 sessions, 26K disk usage
   ```

4. **Ask cleanup action**:

   ```
   AskUserQuestion:
     question: "Which sessions should I clean up?"
     header: "Cleanup"
     options:
       - label: "Completed sessions only (Recommended)"
         description: "Delete sessions where phase is verify/complete. Keeps active sessions safe."
       - label: "Older than 14 days"
         description: "Delete all sessions older than 14 days, regardless of status."
       - label: "All except current"
         description: "Delete everything except the most recent active session."
       - label: "Cancel"
         description: "Don't delete anything."
   ```

5. **Safety guards**:
   - NEVER delete sessions where `current_phase` is not `verify` or `complete` AND `next_phase` exists, unless the user explicitly chose age-based or all-except-current
   - Warn before deleting active sessions: "Session {id} is still active (phase: {phase}). Delete anyway?"

6. **Execute cleanup**:
   - For each session to delete: `rm -rf ~/.claude/config-audit/sessions/{session-id}/`
   - Track deleted count and freed space

7. **Output summary**:

   ```
   ✓ Cleanup complete

   Deleted: 2 sessions
   Freed: 14K disk space
   Remaining: 1 session (active)
   ```

## Session Status Detection

A session is considered **active** if ALL of these are true:
- `current_phase` is not `verify` and not `complete`
- `next_phase` exists and is not empty

A session is considered **complete** if ANY of these are true:
- `current_phase` is `verify` or `complete`
- `next_phase` is empty or null

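The two rules above, written as predicates over a parsed state.yaml object (function names are illustrative):

```javascript
const DONE_PHASES = new Set(['verify', 'complete']);

// Active: not in a done phase AND a non-empty next_phase remains.
function isActive(state) {
  return !DONE_PHASES.has(state.current_phase) && Boolean(state.next_phase);
}

// Complete: in a done phase OR nothing left to do.
function isComplete(state) {
  return DONE_PHASES.has(state.current_phase) || !state.next_phase;
}
```

Note the two predicates are exact complements, so a session is never both active and complete.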
## Error Handling

- **Legacy path:** Also check `~/.config-audit/sessions/` for sessions created before v2.2.0. If found, include them in the session list and note: "Found {n} session(s) at legacy path (~/.config-audit/). These will be cleaned up normally."
- If `~/.claude/config-audit/sessions/` doesn't exist (and no legacy sessions): "No sessions found. Nothing to clean up."
- If no sessions match criteria: "No sessions match the selected criteria."
- If deletion fails: Log error, continue with other sessions
202
plugins/config-audit/commands/config-audit.md
Normal file
@@ -0,0 +1,202 @@
---
name: config-audit
description: Claude Code Configuration Intelligence - audit, analyze, and optimize your configuration
argument-hint: "[posture|feature-gap|fix|rollback|plan|implement|help|discover|analyze|interview|drift|plugin-health|status|cleanup]"
allowed-tools: Read, Write, Glob, Grep, Bash, Agent, AskUserQuestion
model: opus
---

# Config-Audit: Claude Code Configuration Intelligence

Analyze, report on, and optimize your Claude Code configuration.

## Router Logic

If a subcommand is provided, route to it:
- `posture` → `/config-audit:posture`
- `feature-gap` → `/config-audit:feature-gap`
- `fix` → `/config-audit:fix`
- `rollback` → `/config-audit:rollback`
- `plan` → `/config-audit:plan`
- `implement` → `/config-audit:implement`
- `help` → `/config-audit:help`
- `discover` → `/config-audit:discover`
- `analyze` → `/config-audit:analyze`
- `interview` → `/config-audit:interview`
- `drift` → `/config-audit:drift`
- `plugin-health` → `/config-audit:plugin-health`
- `status` → `/config-audit:status`
- `cleanup` → `/config-audit:cleanup`

If a scope override is provided (`current`, `repo`, `home`, `full`), use it as the scope type (see Scope Resolution below).

If no subcommand and no scope override: **run the default audit** (see below).

## UX Rules (MANDATORY — apply to every step)

1. **Narrate before acting.** Before each step, tell the user what you're about to do and why, in plain language.
2. **Never show raw output.** All scanner Bash commands MUST use `--output-file <path>` AND `2>/dev/null`. The user should NEVER see JSON, stderr progress lines, or exit codes.
3. **Handle exit codes silently.** Append `; echo $?` to scanner commands. Exit codes 0/1/2 are all expected (PASS/WARNING/FAIL). Only exit code 3 is a real error — tell user: "Scanner encountered an unexpected error. Try `/config-audit posture` for a quick check instead."
4. **Explain, don't dump.** When presenting findings, add plain-language context. "Grade B" alone means nothing — say "Grade B — your CLAUDE.md files are well-structured with minor improvements possible."
5. **Separate signal from noise.** If findings exist in `tests/fixtures/` or `examples/` directories, count them separately and exclude from the main count: "Found 37 findings (66 additional in test fixtures, excluded)."
6. **Context-sensitive next steps.** Don't just list commands — explain what each does and why the user might want it based on their specific results.

## Default Audit (no arguments)

### Step 1: Auto-detect scope and greet the user

If the user provided a scope override (`/config-audit full`, `/config-audit repo`, etc.), use that.

Otherwise, auto-detect:
1. Run `git rev-parse --show-toplevel 2>/dev/null` via Bash
2. If it succeeds and pwd is inside the repo → **repo** scope (use the git root path)
3. If pwd is `$HOME` → **home** scope
4. Otherwise → **current** directory scope

Show the user what's happening:

```
## Config-Audit

Analyzing your Claude Code configuration...

**Scope:** {Repository|Home directory|Current directory} — `{path}`
**What this checks:** CLAUDE.md quality, settings validation, hook safety, rules correctness, MCP server config, import chains, conflicts, and feature coverage.
```

### Step 2: Initialize session

1. Generate session ID: `YYYYMMDD_HHmmss` format
2. Create session directory and findings subdirectory:

```bash
mkdir -p ~/.claude/config-audit/sessions/{session-id}/findings
```

This is a silent infrastructure step — do NOT show output to the user.

### Step 3: Run scanners and posture assessment

Tell the user: **"Running 8 configuration scanners..."**

Run both scanners and posture in a single Bash command:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/scan-orchestrator.mjs <target-path> --output-file ~/.claude/config-audit/sessions/{session-id}/findings/scan-results.json [--full-machine] [--global] 2>/dev/null; node ${CLAUDE_PLUGIN_ROOT}/scanners/posture.mjs <target-path> --json --output-file ~/.claude/config-audit/sessions/{session-id}/posture.json [--full-machine] [--global] 2>/dev/null; echo $?
```

Use `--full-machine` for `full` scope, `--global` for `home` scope. For `repo` and `current`, pass the resolved path directly.

Check the echoed exit code:
- `0`, `1`, or `2` → continue normally
- `3` → tell user: "Scanner encountered an unexpected error. Try `/config-audit posture` for a quick check instead." and stop.

### Step 4: Analyze results

Tell the user: **"Scanners complete. Preparing your results..."**

Read BOTH output files using the Read tool:
- `~/.claude/config-audit/sessions/{session-id}/findings/scan-results.json`
- `~/.claude/config-audit/sessions/{session-id}/posture.json`

Extract these metrics from the JSON:

**From posture.json:**
- `overallGrade` — the health grade (A-F)
- `opportunityCount` — number of unused features detected
- `areas[]` — per-area grades and finding counts (use only quality areas, exclude Feature Coverage)

**From scan-results.json:**
- `aggregate.total_findings` — total findings (test fixture findings are already excluded automatically)
- `fixture_findings` array (if present) — count of findings excluded from test/example directories
- Count findings by severity from `aggregate.counts` (critical, high, medium, low, info)
- Count findings where `autoFixable: true`
- Note total `files_scanned` across scanners

### Step 5: Update state

Write session state (silent — no user output):

```yaml
session_id: "{session-id}"
current_phase: "analyze"
completed_phases: ["discover", "analyze"]
next_phase: "plan"
updated_at: "{ISO timestamp}"
scope_type: "{repo|home|current|full}"
target_path: "{resolved path}"
```

Write to: `~/.claude/config-audit/sessions/{session-id}/state.yaml`

### Step 6: Display results

Present results using this template. Replace all placeholders with actual values. **Adapt the summary sentence based on grade.**

```markdown
### Results

**Health: {overallGrade}** | {qualityAreaCount} areas scanned

{grade-based summary — pick ONE:}
- Grade A: "Excellent — your configuration is correct and well-maintained."
- Grade B: "Strong — your configuration is solid with minor improvements available."
- Grade C: "Decent — your configuration works but has some issues worth addressing."
- Grade D: "Needs work — several configuration issues could affect your Claude Code experience."
- Grade F: "Significant issues found — addressing these will meaningfully improve your workflow."

Scanned {files_scanned} files | {real_finding_count} findings ({severity_breakdown})
{If test_fixture_count > 0: "({test_fixture_count} additional findings in test fixtures were excluded.)"}
{If fixable_count > 0: "{fixable_count} of these can be auto-fixed."}

### Area Breakdown

| Area | Grade | Findings | Status |
|------|-------|----------|--------|
| CLAUDE.md | {grade} | {count} | {one-phrase status} |
| Settings | {grade} | {count} | {status} |
| Hooks | {grade} | {count} | {status} |
| Rules | {grade} | {count} | {status} |
| MCP Servers | {grade} | {count} | {status} |
| Imports | {grade} | {count} | {status} |
| Conflicts | {grade} | {count} | {status} |

{For the status column, use plain language like: "Well structured", "2 minor issues", "Missing trust levels", "No issues", etc.}

{If opportunityCount > 0:}
{opportunityCount} feature opportunities available — run `/config-audit feature-gap` for context-aware recommendations.

### What you can do next

{Include only relevant options based on findings. Explain each one:}

{If fixable_count > 0:}
- **`/config-audit fix`** — Automatically fix {fixable_count} issues. Creates a backup first so you can roll back with one command.

{If real findings > fixable_count:}
- **`/config-audit plan`** — Get a prioritized action plan for the {remaining} issues that need manual attention.

{If grade is C or better:}
- **`/config-audit feature-gap`** — See which features could help your project, and implement the ones you want on the spot.

{If grade is D or F:}
- **`/config-audit fix`** should be your first step — it handles the most impactful issues automatically.

Session saved to: `~/.claude/config-audit/sessions/{session-id}/`
|
||||||
|
```
|
||||||
|
|
||||||
|
## Scope Resolution
|
||||||
|
|
||||||
|
| Scope | What gets scanned |
|
||||||
|
|-------|-------------------|
|
||||||
|
| `current` | Current directory + parent CLAUDE.md files up to root + `~/.claude/` |
|
||||||
|
| `repo` | Git repository root + `~/.claude/` |
|
||||||
|
| `home` | `~/.claude/` global configuration only |
|
||||||
|
| `full` | Everything: `~/.claude/`, managed paths, all dev dirs under $HOME |
|
||||||
|
|
||||||
|
## Error Handling
|
||||||
|
|
||||||
|
- If scanner fails (exit 3), tell the user in plain language and suggest `/config-audit posture` as fallback
|
||||||
|
- If path doesn't exist, tell the user: "That path doesn't exist. Run `/config-audit` without arguments to auto-detect."
|
||||||
|
- If git command fails for auto-detect, silently fall back to `current` scope
|
||||||
|
- If no CLAUDE.md found anywhere, explain: "No CLAUDE.md found. This is the main configuration file for Claude Code — creating one is the single highest-impact thing you can do. Run `/config-audit feature-gap` to see what's recommended."
|
||||||

141 plugins/config-audit/commands/discover.md (new file)
@@ -0,0 +1,141 @@

---
name: config-audit:discover
description: Phase 1 - Initialize session, auto-detect scope, and discover config files
argument-hint: "[current|repo|home|full] [--delta]"
allowed-tools: Read, Write, Edit, Glob, Grep, Agent, AskUserQuestion, Bash
model: opus
---

# Config-Audit: Discover (Phase 1)

Initialize a new audit session and discover all Claude Code configuration files.

## Usage

```
/config-audit discover            # Auto-detect scope
/config-audit discover current    # Force current directory scope
/config-audit discover repo       # Force git repository scope
/config-audit discover home       # Force home/global scope
/config-audit discover full       # Force full machine scope
/config-audit discover --delta    # Incremental re-scan (changed files only)
```

## Implementation

### Step 1: Initialize session and greet

Generate a session ID (`YYYYMMDD_HHmmss`) and create the session directories:

```bash
mkdir -p ~/.claude/config-audit/sessions/{session-id}/findings 2>/dev/null
```

### Step 2: Determine scope

If the user provided a scope argument, use it. Otherwise, auto-detect:

1. Run `git rev-parse --show-toplevel 2>/dev/null`
2. If inside a git repo → **repo** scope
3. If pwd is `$HOME` → **home** scope
4. Otherwise → **current** directory scope
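The auto-detect order above can be sketched in Node. A minimal sketch, assuming the plugin's `.mjs` conventions; `detectScope` and `gitToplevel` are hypothetical helper names, not part of the shipped scanners.

```javascript
// Sketch of the auto-detect order above. The git probe uses execSync;
// the decision itself is pure so it can be tested without a repo.
import { execSync } from 'node:child_process';
import os from 'node:os';

function gitToplevel(cwd) {
  try {
    return execSync('git rev-parse --show-toplevel', { cwd, stdio: ['ignore', 'pipe', 'ignore'] })
      .toString().trim();
  } catch {
    return null; // not a git repo → fall through to home/current
  }
}

function detectScope(cwd, toplevel) {
  if (toplevel) return { scope: 'repo', target: toplevel };
  if (cwd === os.homedir()) return { scope: 'home', target: os.homedir() };
  return { scope: 'current', target: cwd };
}

console.log(detectScope('/tmp/some-project', null).scope); // current
```

In practice the caller would pass `gitToplevel(process.cwd())` as the second argument.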

Tell the user:

```
## Configuration Discovery

**Scope:** {Repository|Home|Current directory|Full machine} — `{path}`
Finding all Claude Code configuration files (CLAUDE.md, settings, hooks, rules, MCP servers)...
```

### Step 3: Resolve paths

| Scope | What gets scanned |
|-------|-------------------|
| `current` | Current directory + parent CLAUDE.md files up to root + `~/.claude/` |
| `repo` | Git repo root + `~/.claude/` |
| `home` | `~/.claude/` only |
| `full` | `~/.claude/` (depth 10), managed paths, all dev dirs under $HOME |

### Step 4: Delta mode (if --delta)

If the `--delta` flag is present:

1. Find the previous baseline from `~/.claude/config-audit/sessions/*/discovery.json`
2. If no previous baseline exists: "No previous scan found. Running full discovery instead."
3. Compare file mtimes/sizes to classify files as changed/new/deleted/unchanged
4. Only scan changed + new files
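The classification in steps 3-4 can be sketched as a pure function over two path maps. A minimal sketch: the `{path → {mtimeMs, size}}` shapes are assumptions about what discovery.json records, and `classifyDelta` is a hypothetical name.

```javascript
// Sketch: classify files by comparing baseline stat data against a fresh
// stat pass. Both maps use {path → {mtimeMs, size}} (an assumed shape).
function classifyDelta(baseline, current) {
  const delta = { changed: [], new: [], deleted: [], unchanged: [] };
  for (const [path, cur] of Object.entries(current)) {
    const prev = baseline[path];
    if (!prev) delta.new.push(path);
    else if (prev.mtimeMs !== cur.mtimeMs || prev.size !== cur.size) delta.changed.push(path);
    else delta.unchanged.push(path);
  }
  for (const path of Object.keys(baseline)) {
    if (!(path in current)) delta.deleted.push(path);
  }
  return delta;
}

const d = classifyDelta(
  { 'CLAUDE.md': { mtimeMs: 1, size: 10 }, 'old.md': { mtimeMs: 1, size: 5 } },
  { 'CLAUDE.md': { mtimeMs: 2, size: 12 }, 'new.md': { mtimeMs: 3, size: 7 } },
);
console.log(d); // changed: CLAUDE.md, new: new.md, deleted: old.md
```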

### Step 5: Run discovery

Run the scan orchestrator silently to discover and scan files:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/scan-orchestrator.mjs <target-path> --output-file ~/.claude/config-audit/sessions/{session-id}/findings/scan-results.json [--full-machine] [--global] 2>/dev/null; echo $?
```

Check the exit code: 0/1/2 → normal. 3 → "Discovery encountered an error. Try a narrower scope."

### Step 6: Save scope and state

Write `scope.yaml` and `state.yaml` to the session directory. Update state with `current_phase: "discover"`, `next_phase: "analyze"`.

### Step 7: Present summary

Read the scan results file to count files and findings:

**Full scan:**

```markdown
### Discovery Complete

**{scope_type}** scope — found {total_files} configuration files:

| Type | Count |
|------|-------|
| CLAUDE.md | {n} |
| Settings | {n} |
| MCP configs | {n} |
| Rules | {n} |
| Hooks | {n} |
| Other | {n} |

Initial scan found {finding_count} items to review.

**Next:** Run `/config-audit analyze` to generate your analysis report.
```

**Delta scan:**

```markdown
### Delta Discovery Complete

Compared against baseline from {previous-session-id}:

| Status | Files |
|--------|-------|
| Changed | {n} |
| New | {n} |
| Deleted | {n} |
| Unchanged | {n} |

Only {changed+new} file(s) scanned (vs {total} for a full scan).

**Next:** Run `/config-audit analyze` to generate your analysis report.
```

## Config File Patterns

| Pattern | Description |
|---------|-------------|
| `**/CLAUDE.md` | Project instructions |
| `**/CLAUDE.local.md` | Local overrides |
| `**/.claude/settings.json` | Project settings |
| `**/.mcp.json` | MCP servers |
| `**/.claude/rules/*.md` | Modular rules |

For global configuration: `~/.claude/CLAUDE.md`, `~/.claude/settings.json`, `~/.claude.json`, `~/.claude/agents/*.md`

## Error Handling

- If the scanner fails, report it to the user in plain language and suggest a narrower scope
- If the path doesn't exist, tell the user and suggest alternatives
- If the git command fails for `repo` scope, silently fall back to `current`
- If no config files are found, explain: "No Claude Code configuration files found. Start with `/config-audit feature-gap` to see what's recommended."

98 plugins/config-audit/commands/drift.md (new file)
@@ -0,0 +1,98 @@

---
name: config-audit:drift
description: Compare current configuration against a saved baseline — shows new, resolved, and changed findings
argument-hint: "[path] [--baseline name] [--save]"
allowed-tools: Read, Write, Glob, Grep, Bash
model: sonnet
---

# Config-Audit: Drift Detection

Compare current configuration against a saved baseline to see what changed.

## Arguments

- `$ARGUMENTS` may contain:
  - A target path (default: current working directory)
  - `--save`: Save the current state as a baseline
  - `--baseline <name>`: Compare against a specific named baseline (default: "default")

## Implementation

### Save a baseline

If `--save` is present:

Tell the user: **"Saving current configuration as baseline..."**

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/drift-cli.mjs <path> --save --name <baseline-name> 2>/dev/null
```

Read stdout for confirmation. Tell the user:

```markdown
### Baseline Saved

Captured current state as baseline "{name}".
Run `/config-audit drift` anytime to see what changed since this point.
```

### Compare against baseline

Without `--save`:

Tell the user: **"Comparing current configuration against baseline..."**

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/drift-cli.mjs <path> --baseline <name> 2>/dev/null
```

Read stdout. If the baseline is not found, tell the user:

```
No baseline found. Save one first with:
  /config-audit drift --save
```

Otherwise, parse and present the drift report:

```markdown
### Configuration Drift

**Trend:** {Improving|Degrading|Stable}
**Score:** {before} → {after} ({+/-delta} points)

{If new findings:}
#### New Issues ({count})
| ID | Severity | Description |
|----|----------|-------------|
| ... | ... | ... |

{If resolved findings:}
#### Resolved ({count})
| ID | Description |
|----|-------------|
| ... | ... |

{If area changes:}
#### Area Changes
| Area | Before | After | Change |
|------|--------|-------|--------|
| ... | ... | ... | ... |
```

### List baselines

If `$ARGUMENTS` contains `--list`:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/drift-cli.mjs --list 2>/dev/null
```

### What's next

After viewing drift:
- `/config-audit fix` — Auto-fix new findings
- `/config-audit posture` — Full posture assessment
- `/config-audit drift --save` — Update the baseline to the current state

185 plugins/config-audit/commands/feature-gap.md (new file)
@@ -0,0 +1,185 @@

---
name: config-audit:feature-gap
description: Context-aware feature recommendations — what could enhance your setup and why
argument-hint: "[path]"
allowed-tools: Read, Write, Edit, Glob, Grep, Bash, Agent, AskUserQuestion
model: opus
---

# Config-Audit: Feature Opportunities

Context-aware analysis of Claude Code features that could benefit your specific project — with the option to implement selected recommendations on the spot.

## What the user gets

- Project context detection (language, size, existing configuration)
- Numbered recommendations grouped by impact (high / worth considering / explore)
- Each recommendation backed by evidence (Anthropic docs, proven issues)
- **Interactive selection: "Which would you like to implement?"**
- Direct implementation with backup for selected items

## Implementation

### Step 1: Determine target and greet

Parse `$ARGUMENTS` for a path (default: current working directory).

Tell the user:

```
## Feature Opportunities

Analyzing which Claude Code features could benefit your workflow...
```

### Step 2: Create session and run posture

Generate a session ID (`YYYYMMDD_HHmmss`) if no active session exists.

```bash
mkdir -p ~/.claude/config-audit/sessions/{session-id}/findings 2>/dev/null
node ${CLAUDE_PLUGIN_ROOT}/scanners/posture.mjs <target-path> --json --output-file ~/.claude/config-audit/sessions/{session-id}/posture.json 2>/dev/null; echo $?
```

If the exit code is non-zero: "Assessment couldn't run. Check that the path exists and contains configuration files."

### Step 3: Read posture data and detect project context

Read `~/.claude/config-audit/sessions/{session-id}/posture.json` using the Read tool.

Extract GAP findings from `scannerEnvelope.scanners` (find the scanner with `scanner === 'GAP'`).

Detect project context:

```bash
test -f <target-path>/package.json && echo "has_package_json" || echo "no_package_json"
ls <target-path>/*.py <target-path>/requirements.txt <target-path>/pyproject.toml 2>/dev/null | head -3
```
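The GAP extraction described above can be sketched as a small lookup over the parsed posture.json. A minimal sketch, assuming only the envelope fields this step names (`scannerEnvelope.scanners`, `scanner === 'GAP'`); `gapFindings` and the demo object are hypothetical.

```javascript
// Sketch: pull GAP findings out of a parsed posture.json, per the
// envelope shape described in Step 3.
function gapFindings(posture) {
  const scanners = posture.scannerEnvelope?.scanners ?? [];
  const gap = scanners.find(s => s.scanner === 'GAP');
  return gap?.findings ?? [];
}

const found = gapFindings({
  scannerEnvelope: {
    scanners: [
      { scanner: 'SET', findings: [{ id: 'CA-SET-003' }] },
      { scanner: 'GAP', findings: [{ id: 'CA-GAP-001' }, { id: 'CA-GAP-004' }] },
    ],
  },
});
console.log(found.length); // 2
```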

### Step 4: Build numbered recommendations

Read `${CLAUDE_PLUGIN_ROOT}/knowledge/gap-closure-templates.md` for implementation templates.

Group GAP findings into three sections. Number them sequentially across sections:

```markdown
### High Impact

These address correctness or safety — consider them seriously.

**1.** Add permissions.deny for sensitive paths
→ Settings enforcement is stronger than CLAUDE.md instructions.
→ Effort: Low (5 min)

**2.** Configure at least one hook for safety automation
→ Hooks guarantee the action happens. CLAUDE.md instructions are advisory.
→ Effort: Medium (15 min)

### Worth Considering

These improve workflow efficiency for projects like yours.

**3.** Split CLAUDE.md into focused modules with @imports
→ Files over 200 lines degrade Claude's adherence to instructions.
→ Effort: Low (10 min)

**4.** Add path-scoped rules for different file types
→ Unscoped rules load every session regardless of relevance.
→ Effort: Low (10 min)

### Explore When Ready

Nice-to-have. Skip if your current setup works well.

**5.** Custom keybindings (Shift+Enter for newline)
→ Effort: Low (2 min)

**6.** Status line configuration
→ Effort: Low (2 min)
```

Each recommendation MUST have:

- A number
- A one-line description
- A "Why" with evidence
- An effort estimate from the templates

### Step 5: Ask what to implement

```
AskUserQuestion:
  question: "Which would you like to implement? I'll create a backup first."
  options:
    - "All high impact (1-2)"
    - "Pick specific: e.g. 1,3,5"
    - "None — just wanted to see the recommendations"
```

If "None": show the full report location and exit.

If the user picks numbers: parse the selection and proceed to Step 6.

### Step 6: Implement selected recommendations

For each selected recommendation:

1. **Create backup** of any files that will be modified:

   ```bash
   node ${CLAUDE_PLUGIN_ROOT}/scanners/fix-cli.mjs <target-path> --json 2>/dev/null
   ```

   Or create a manual backup:

   ```bash
   mkdir -p ~/.claude/config-audit/backups/$(date +%Y%m%d_%H%M%S)/files/ 2>/dev/null
   ```

   Copy each file that will be touched.

2. **Apply the template** from gap-closure-templates.md. Use the Write or Edit tool to create or modify the relevant configuration file.

3. **Show progress** as each item is done:

   ```
   Implementing 3 recommendations...

   ✓ 1. permissions.deny — added to .claude/settings.json
   ✓ 3. Modular CLAUDE.md — created .claude/rules/testing.md, added @import
   ✓ 5. Keybindings — created ~/.claude/keybindings.json
   ```

4. **Verify** by re-running posture:

   ```bash
   node ${CLAUDE_PLUGIN_ROOT}/scanners/posture.mjs <target-path> --json --output-file /tmp/config-audit-verify-$$.json 2>/dev/null
   ```

### Step 7: Show results

```markdown
### Done

**{N} recommendations implemented** | Backup created

{If health grade changed:}
Health: {old_grade} → {new_grade} (+{delta} points)

{Show remaining opportunities if any:}
{remaining} more opportunities available — run `/config-audit feature-gap` again anytime.

**Rollback:** If anything looks wrong, run `/config-audit rollback` to restore.
```

## Implementation Guidelines

When implementing recommendations, be smart about context:

- **permissions.deny**: Look at the project for common sensitive paths (`.env`, `secrets/`, `.git/config`, `*.pem`). Don't just copy a template blindly — check what actually exists.
- **hooks**: Start with a simple, useful hook. Don't scaffold 5 hooks at once.
- **path-scoped rules**: Look at the project's file structure to determine meaningful scopes (e.g., `tests/**/*.ts` vs `src/**/*.ts`).
- **CLAUDE.md modularization**: Only suggest splitting if the file is over 100 lines. Read it first to find natural section boundaries.
- **MCP setup**: Only relevant if the user actually has external tools to connect. Ask before creating.
- **Custom plugin**: Too complex for inline implementation — suggest `/config-audit plan` instead.

For items that genuinely need user input (e.g., "which MCP servers do you use?"), ask briefly during implementation rather than skipping them.

## Safety

- **Backup mandatory** — always create one before modifying
- **Show what's changing** — the user sees each change as it happens
- **Rollback available** — `/config-audit rollback` at any time
- **Non-destructive** — only create new files or add to existing ones; never delete content

138 plugins/config-audit/commands/fix.md (new file)
@@ -0,0 +1,138 @@

---
name: config-audit:fix
description: Auto-fix deterministic configuration issues with backup and verification
argument-hint: "[path] [--dry-run]"
allowed-tools: Read, Write, Glob, Grep, Bash, AskUserQuestion
model: sonnet
---

# Config-Audit: Fix

Auto-fix deterministic configuration issues. Scans, plans fixes, backs up originals, applies changes, and verifies results.

## Arguments

- `$ARGUMENTS` may contain:
  - A target path (default: current working directory)
  - `--dry-run`: Show the fix plan without applying it

## Implementation

### Step 1: Greet and scan

Tell the user:

```
## Config-Audit Fix

Scanning for auto-fixable issues...
```

Run the scanners silently:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/scan-orchestrator.mjs <path> --output-file /tmp/config-audit-fix-scan-$$.json [--global] 2>/dev/null; echo $?
```

Exit code 3 → tell the user: "Scanner error. Try `/config-audit posture` to check your configuration."

### Step 2: Plan fixes

Run the fix planner silently:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/fix-cli.mjs <path> --json 2>/dev/null
```

Read the JSON output. Categorize fixes into auto-fixable and manual.
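The categorization can be sketched as a one-pass partition over the planner's findings. A minimal sketch: the `autoFixable` flag matches the field used elsewhere in this plugin, but `partitionFixes` and the surrounding shape are assumptions.

```javascript
// Sketch: split the fix planner's findings into the two buckets
// presented in Step 3 (auto-fixable vs manual).
function partitionFixes(findings) {
  const auto = [], manual = [];
  for (const f of findings) (f.autoFixable ? auto : manual).push(f);
  return { auto, manual };
}

const plan = partitionFixes([
  { id: 'CA-SET-003', autoFixable: true },
  { id: 'CA-CML-003', autoFixable: false },
]);
console.log(plan.auto.length, plan.manual.length); // 1 1
```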

### Step 3: Present fix plan

Show what will be fixed and what needs manual attention:

```markdown
### Fix Plan

**Auto-fixable ({N} issues):**

| # | ID | Issue | File |
|---|-----|-------|------|
| 1 | CA-SET-003 | Add $schema to settings.json | .claude/settings.json |
| 2 | ... | ... | ... |

**Manual ({M} issues — require human judgment):**

| # | ID | Issue | Recommendation |
|---|-----|-------|----------------|
| 1 | CA-CML-003 | CLAUDE.md exceeds 200 lines | Split content into @imports or .claude/rules/ |
| ... | ... | ... | ... |
```

### Step 4: Confirm with user

If not `--dry-run`, ask for confirmation:

```
AskUserQuestion:
  question: "Apply {N} auto-fixes? A backup is created first — you can roll back anytime."
  options:
    - "Yes, apply fixes"
    - "Show dry-run only"
    - "Cancel"
```

### Step 5: Apply fixes

If confirmed, apply:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/fix-cli.mjs <path> --apply --json 2>/dev/null
```

Read the JSON output to get the applied/failed counts and the backup location.

### Step 6: Show results

Run a quick posture check to measure improvement:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/posture.mjs <path> --json --output-file /tmp/config-audit-fix-posture-$$.json 2>/dev/null
```

Present results:

```markdown
### Results

**{applied} fixed** | {failed} failed | Backup created

{If grade improved:}
Score impact: {old_grade} ({old_score}) → {new_grade} ({new_score}) — **+{delta} points**

{If failed > 0:}
{failed} fix(es) couldn't be applied — run `/config-audit plan` for alternative approaches.

**Rollback:** If anything looks wrong, run `/config-audit rollback {backup-id}` to restore.
```

### Step 7: Manual findings

If manual findings exist:

```markdown
### Needs manual attention

These {M} issues require human judgment:

1. **{title}** ({id}) — {recommendation}
2. ...

Run `/config-audit plan` to get a step-by-step guide for addressing these.
```

## Safety

- Backup is **mandatory** — every fix creates a backup first
- Dry-run by default — the user must confirm before changes are applied
- Verify after fix — re-scans to confirm findings are resolved
- Rollback is always available — `/config-audit rollback <backup-id>`

78 plugins/config-audit/commands/help.md (new file)
@@ -0,0 +1,78 @@

---
name: config-audit:help
description: Show all available config-audit commands
allowed-tools: Read
model: sonnet
---

# Config-Audit: Help

## Getting Started

Just run `/config-audit` — it auto-detects your project scope and runs a full audit. No setup needed.

## All Commands

### Core

| Command | Description |
|---------|-------------|
| `/config-audit` | Full audit with auto-scope detection |
| `/config-audit posture` | Quick scorecard with A-F grades per area |
| `/config-audit feature-gap` | Deep analysis of features you're not using |
| `/config-audit fix` | Auto-fix deterministic issues with backup |
| `/config-audit rollback` | Restore configuration from a backup |

### Planning & Implementation

| Command | Description |
|---------|-------------|
| `/config-audit plan` | Generate a prioritized action plan from audit findings |
| `/config-audit implement` | Execute the action plan with automatic backup + verification |
| `/config-audit interview` | Set preferences to customize the action plan _(optional)_ |

### Monitoring

| Command | Description |
|---------|-------------|
| `/config-audit drift` | Compare current config against a saved baseline |
| `/config-audit plugin-health` | Audit plugin structure and frontmatter quality |

### Utility

| Command | Description |
|---------|-------------|
| `/config-audit status` | Show current session state and progress |
| `/config-audit cleanup` | Clean up old session directories |

### Advanced (workflow phases)

| Command | Description |
|---------|-------------|
| `/config-audit discover` | Run only the discovery phase (find config files) |
| `/config-audit analyze` | Run only the analysis phase (generate report) |

## Scope Override

By default, `/config-audit` auto-detects scope from your current directory:
- Inside a git repo → scans the repo
- In `$HOME` → scans global config only
- Elsewhere → scans the current directory

Override with: `/config-audit current`, `/config-audit repo`, `/config-audit home`, `/config-audit full`

## Typical Workflows

**First time?** Just run `/config-audit`.

**Want to fix things?** Run `/config-audit`, then `/config-audit fix`.

**Full optimization:**
1. `/config-audit` — see what you have
2. `/config-audit plan` — create an action plan
3. `/config-audit implement` — execute with backups

**Track changes over time:**
1. `/config-audit drift --save` — save a baseline
2. _(make changes)_
3. `/config-audit drift` — see what changed
plugins/config-audit/commands/implement.md
Normal file
132
plugins/config-audit/commands/implement.md
Normal file
|
|
@ -0,0 +1,132 @@
|

---
name: config-audit:implement
description: Phase 5 - Execute action plan with backups and verification
allowed-tools: Read, Write, Edit, Bash, Agent, AskUserQuestion
model: opus
---

# Config-Audit: Implementation (Phase 5)

Execute the action plan with full backup, verification, and rollback support.

## Prerequisites

- Must have completed Phase 4 (plan)
- Action plan at `~/.claude/config-audit/sessions/{session-id}/action-plan.md`

## Implementation

### Step 1: Load and verify

Find the most recent session with a plan. If none exists: "No action plan found. Run `/config-audit plan` first."

Read the action plan and count the actions. Tell the user:

```
## Implementing Action Plan

Found {N} actions to execute across {M} files.
A backup will be created before any changes are made.
```

### Step 2: Get user approval

```
AskUserQuestion:
  question: "Ready to implement {N} actions? Backup created automatically — you can roll back with one command."
  options:
    - "Yes, proceed"
    - "Review plan first" (then show the plan file path)
    - "Cancel"
```

### Step 3: Create backup

Create the backup silently:

```bash
mkdir -p ~/.claude/config-audit/backups/$(date +%Y%m%d_%H%M%S)/files/ 2>/dev/null
```

Copy each file to be modified. Generate `manifest.yaml` with checksums.
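The copy-plus-manifest step can be sketched in Node. A minimal sketch with stated assumptions: the manifest layout is hypothetical (and written as JSON here to avoid a YAML dependency), and `backupWithManifest` is not part of the shipped scanners.

```javascript
// Sketch: back up files and record sha256 checksums in a manifest,
// so rollback can later verify each restore.
import { createHash } from 'node:crypto';
import { mkdtempSync, writeFileSync, readFileSync, copyFileSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

const sha256 = (buf) => createHash('sha256').update(buf).digest('hex');

function backupWithManifest(files, backupDir) {
  const manifest = { created_at: new Date().toISOString(), files: [] };
  for (const src of files) {
    const data = readFileSync(src);
    // Flatten the source path into a single backup filename (assumption).
    const dest = join(backupDir, src.replace(/[/\\]/g, '_'));
    copyFileSync(src, dest);
    manifest.files.push({ original: src, backup: dest, sha256: sha256(data) });
  }
  writeFileSync(join(backupDir, 'manifest.json'), JSON.stringify(manifest, null, 2));
  return manifest;
}

// Demo against a temp file:
const dir = mkdtempSync(join(tmpdir(), 'ca-backup-'));
const f = join(dir, 'settings.json');
writeFileSync(f, '{"model":"opus"}');
const m = backupWithManifest([f], dir);
console.log(m.files[0].sha256 === sha256(readFileSync(f))); // true
```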

Tell the user: **"Backup created. Implementing actions..."**

### Step 4: Execute actions

Group actions by dependencies. For each group, spawn implementer agents (in batches of 3):

```
Agent(subagent_type: "config-audit:implementer-agent")
  model: sonnet
  prompt: |
    Execute action: {action-id}
    File: {file-path}, Type: {create|modify|delete}
    Details: {changes}
    Verify the backup exists, make the change, validate syntax.
    Append result to: ~/.claude/config-audit/sessions/{session-id}/implementation-log.md
```

Show progress between groups:

```
Action 1/N: {title} — done
Action 2/N: {title} — done
...
```

### Step 5: Verify results

Spawn a verifier agent:

```
Agent(subagent_type: "config-audit:verifier-agent")
  model: sonnet (note: using sonnet, not haiku)
  prompt: |
    Verify all changes from implementation:
    1. Modified files exist and are syntactically valid
    2. New files were created correctly
    3. No new conflicts introduced
    Report to: ~/.claude/config-audit/sessions/{session-id}/implementation-log.md
```

If the verifier finds issues: one retry with an implementer agent. If still failing: report and suggest rollback.

### Step 6: Present results

```markdown
### Implementation Complete

**{succeeded} succeeded** | {failed} failed | {skipped} skipped

{If score improved, run quick posture and show:}
Score impact: {old_grade} → {new_grade} (+{delta} points)

{If failed > 0:}
{failed} action(s) couldn't be completed — see the log for details.

**Backup location:** `~/.claude/config-audit/backups/{timestamp}/`
**Rollback:** `/config-audit rollback {timestamp}`
**Full log:** `~/.claude/config-audit/sessions/{session-id}/implementation-log.md`
```

### Step 7: Update state

Update `state.yaml` with `current_phase: "implement"`, `next_phase: null`.

## Rollback

If the user requests a rollback at any point:
1. Read `manifest.yaml` from the backup
2. Restore each file and verify checksums
3. Delete newly created files
4. Update state to `rolled_back`
|
||||||
|
|
||||||
|
## Error Handling
|
||||||
|
|
||||||
|
| Error | What happens |
|
||||||
|
|-------|-------------|
|
||||||
|
| Permission denied | Skip action, log it, continue with others |
|
||||||
|
| File not found | Skip action, log it, continue |
|
||||||
|
| Invalid syntax after edit | Rollback that single file, log, continue |
|
||||||
|
| Critical failure | Offer full rollback |
|
||||||
64
plugins/config-audit/commands/interview.md
Normal file
@@ -0,0 +1,64 @@
---
name: config-audit:interview
description: Phase 3 - Interactive interview to gather user preferences
allowed-tools: Read, Write, Edit, AskUserQuestion
model: sonnet
---

# Config-Audit: Interview (Phase 3)

Gather user preferences to inform the action plan.

## IMPORTANT: Inline Execution Only

This command runs AskUserQuestion **directly in the main context** — NOT via a Task subagent.
AskUserQuestion requires synchronous terminal interaction and does not work when delegated to a Task subagent.

## Prerequisites

- Must have completed Phase 2 (analysis)
- Read analysis from `~/.claude/config-audit/sessions/{session-id}/analysis-report.md`

## Implementation Steps

1. **Load session state**: Verify analysis phase completed, read analysis report for context
2. **Conduct interview inline**: Use AskUserQuestion tool directly (NOT via Task). Adapt questions based on analysis findings.
3. **Save interview results**: Write to `~/.claude/config-audit/sessions/{session-id}/interview.md`
4. **Update state** (see state-management rule)
5. **Output summary**
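A saved `interview.md` might look like the following. The exact layout here is an assumption for illustration; any structure the planner can parse works:

```markdown
# Interview Results ({session-id})

## Preferences
- config_style: centralized
- unused_hooks: review individually
- modular_rules: yes
- path_scoped_rules: src, tests

## Skipped (not applicable)
- duplicate_permissions: none found
- conflict_resolution: no conflicts detected
```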

## Interview Questions

Ask these using AskUserQuestion (skip questions that don't apply based on analysis):

1. **Config Style** — Centralized vs Distributed vs Hybrid organization
2. **Unused Hooks** — Wire up, review individually, delete, or leave (only if found)
3. **Duplicate Permissions** — Remove from local, consolidate, or keep (only if found)
4. **Modular Rules** — Use .claude/rules/ pattern? Yes/No
5. **Path-Scoped Rules** — Which patterns (tests, src, config, docs) — only if Q4=Yes
6. **Conflict Resolution** — Per-conflict: global vs project vs custom value (only if conflicts found)
7. **Permission Audit** — Audit or keep (only if >30 patterns in settings.local.json)
8. **Project Inheritance** — Per-project: inherit or isolate (only if multiple projects)

## Adaptive Questioning

Skip questions that don't apply:

- No unused hooks question if all hooks are wired
- No duplicates question if no duplicates found
- No conflict questions if no conflicts detected
- No path-scoping if user said no to modular rules
- Fewer project questions if only one project
- No permission audit if <30 patterns

## Skip Interview Option

If the user runs `/config-audit plan` without the interview:

- Use sensible defaults (centralized, inherit, enable rules)
- Flag decisions in plan as "assumed"

## Error Handling

- If the user selects "Other" for any question, ask a follow-up with AskUserQuestion
- If the interview is cancelled, save partial results
- If no analysis report found, report error and exit
- If AskUserQuestion fails, STOP — do not use alternative methods
82
plugins/config-audit/commands/plan.md
Normal file
@@ -0,0 +1,82 @@
---
name: config-audit:plan
description: Phase 4 - Generate prioritized action plan with risk assessment
allowed-tools: Read, Write, Glob, Grep, Agent
model: opus
---

# Config-Audit: Plan Generation (Phase 4)

Generate a prioritized action plan based on analysis results.

## Prerequisites

- Must have completed Phase 2 (analysis)
- Phase 3 (interview) is optional — the plan works with or without it

## Implementation

### Step 1: Verify session state

Find the most recent session with analysis completed. If none found: "No analysis results found. Run `/config-audit` first to scan your configuration."

### Step 2: Tell the user what's happening

```
## Creating Action Plan

Building a prioritized plan based on your analysis results...
Actions are ordered by impact, with risk assessment and dependency tracking.
```

### Step 3: Spawn planner agent

Tell the user: **"Generating your action plan (this takes about 30 seconds)..."**

```
Agent(subagent_type: "config-audit:planner-agent")
model: opus
prompt: |
  Generate action plan based on:
  - Analysis: ~/.claude/config-audit/sessions/{session-id}/analysis-report.md
  - Interview: ~/.claude/config-audit/sessions/{session-id}/interview.md (if exists)
  Create prioritized plan with:
  - Risk assessment per action (low/medium/high)
  - Rollback strategy
  - Dependency ordering
  - Effort estimates
  Output to: ~/.claude/config-audit/sessions/{session-id}/action-plan.md
```

### Step 4: Present the plan summary

Read the generated plan and show a concise overview:

```markdown
### Action Plan Ready

**{N} actions** organized by priority:

| # | Action | Risk | Effort |
|---|--------|------|--------|
| 1 | {title} | {low/med/high} | {quick/moderate/involved} |
| 2 | ... | ... | ... |
| ... | ... | ... | ... |

Full plan: `~/.claude/config-audit/sessions/{session-id}/action-plan.md`

You can edit the plan file to remove, reorder, or modify actions before implementing.

### What's next

- **`/config-audit implement`** — Execute the plan with automatic backup and verification
- **`/config-audit interview`** — Set preferences first to customize the plan (optional)
```

### Step 5: Update state

Update `state.yaml` with `current_phase: "plan"`, `next_phase: "implement"`.

## Plan Modification

Users can edit `action-plan.md` before implementation — remove unwanted actions, adjust priority, or add custom actions. The implementer parses the modified plan.
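For reference, a single plan entry might look like this. The field names are illustrative assumptions, chosen to carry the risk, effort, and dependency data the implementer needs:

```yaml
# Hypothetical action-plan.md entry (field names are illustrative)
- id: act-003
  title: Consolidate duplicate Bash permissions into settings.json
  file: ~/.claude/settings.json
  type: modify
  risk: low
  effort: quick
  depends_on: [act-001]
  rollback: restore file from backup manifest
```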
74
plugins/config-audit/commands/plugin-health.md
Normal file
@@ -0,0 +1,74 @@
---
name: config-audit:plugin-health
description: Audit plugin configuration quality — validates structure, frontmatter, and cross-plugin coherence
argument-hint: "[plugin-path]"
allowed-tools: Read, Glob, Grep, Bash
model: sonnet
---

# Config-Audit: Plugin Health

Audit Claude Code plugin structure and quality — validates plugin.json, CLAUDE.md, command/agent frontmatter, and detects cross-plugin conflicts.

## Arguments

- `$ARGUMENTS` may contain a path to a specific plugin directory
- If omitted: scans all plugins in the marketplace root

## Implementation

### Step 1: Discover plugins and greet

If a specific path is given, scan only that plugin. Otherwise, find all plugins using Glob for `**/.claude-plugin/plugin.json`.

Tell the user:

```
## Plugin Health Check

Auditing {N} plugin(s) for structure, frontmatter quality, and cross-plugin conflicts...
```

### Step 2: Run scanner

Run silently for each plugin:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/plugin-health-scanner.mjs <path> 2>/dev/null
```

Read the JSON from stdout and parse the findings.
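A plausible shape for that JSON, inferred from the fields this command displays (grade, score, counts, findings) rather than from the scanner's actual schema:

```json
{
  "plugins": [
    {
      "name": "config-audit",
      "grade": "B",
      "score": 84,
      "commandCount": 9,
      "agentCount": 4,
      "findings": [
        {
          "id": "PH-012",
          "title": "Command missing argument-hint",
          "recommendation": "Add argument-hint to the command frontmatter"
        }
      ]
    }
  ],
  "crossPluginIssues": []
}
```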

### Step 3: Present results

```markdown
### Plugin Health Report

| Plugin | Grade | Commands | Agents | Status |
|--------|-------|----------|--------|--------|
| {name} | {grade} ({score}) | {cmd_count} | {agent_count} | {Good/Issues found} |
| ... | ... | ... | ... | ... |

{If cross-plugin issues:}
#### Cross-Plugin Issues ({count})
| Issue | Plugins | Recommendation |
|-------|---------|----------------|
| ... | ... | ... |

{If findings:}
#### Findings by Plugin

**{plugin-name}** ({finding_count} findings):
1. [{id}] {title} — {recommendation}
2. ...
```

### Step 4: Suggest next steps

```
### What's next

- Fix structural issues based on recommendations above
- `/config-audit posture` — Full configuration posture assessment
- `/config-audit fix` — Auto-fix deterministic issues
```
120
plugins/config-audit/commands/posture.md
Normal file
@@ -0,0 +1,120 @@
---
name: config-audit:posture
description: Quick configuration health assessment — scorecard with A-F grades
argument-hint: "[path] [--drift] [--plugin-health]"
allowed-tools: Read, Write, Glob, Grep, Bash
model: sonnet
---

# Config-Audit: Health Assessment

Quick, deterministic configuration health scorecard. No agents needed — runs all scanners + scoring in one pass.

## What the user gets

- Health grade (A-F) with a plain-language explanation
- Per-area breakdown for 7 quality areas with grades and actionable notes
- Opportunity count — how many features could enhance their setup (not a grade)
- Grade-appropriate next steps

## Implementation

### Step 1: Determine target

Parse `$ARGUMENTS` for a path (default: current working directory). Resolve relative paths.

Tell the user:

```
## Configuration Health

Running quick assessment{if path != cwd: " on `{path}`"}...
```

### Step 2: Run posture scanner

Run silently — all output goes to a file:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/posture.mjs <target-path> --json --output-file /tmp/config-audit-posture-$$.json 2>/dev/null; echo $?
```

If the exit code is non-zero, tell the user: "Assessment couldn't complete. Check that the path exists and contains Claude Code configuration files."

### Step 3: Read and interpret results

Read the JSON output file using the Read tool. Extract:

- `overallGrade`, `opportunityCount`
- `areas[]` — each with `name`, `grade`, `score`, `findingCount`
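For orientation, a result file with those fields might look like this (an assumed example, not the scanner's exact schema):

```json
{
  "overallGrade": "B",
  "opportunityCount": 4,
  "areas": [
    { "name": "Permissions", "grade": "A", "score": 95, "findingCount": 0 },
    { "name": "Hooks", "grade": "C", "score": 68, "findingCount": 3 }
  ]
}
```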

### Step 4: Present the scorecard

```markdown
**Health: {overallGrade}** | {qualityAreaCount} areas scanned

{grade-based context — pick ONE:}
- A: "Your configuration is correct and well-maintained."
- B: "Solid configuration with minor improvements available."
- C: "Working configuration with some issues worth addressing."
- D: "Configuration needs attention in several areas."
- F: "Significant issues found — addressing these will improve your experience."

### Area Scores

| Area | Grade | Score | Findings | |
|------|-------|-------|----------|-|
{for each area EXCEPT Feature Coverage:}
| {name} | {grade} | {score}/100 | {findingCount} | {plain-language note: A="Excellent", B="Good", C="Needs work", D/F="Issues found"} |

{if opportunityCount > 0:}
{opportunityCount} feature opportunities available — run `/config-audit feature-gap` for context-aware recommendations.

### What's next
```

**Grade A or B:**
```
Your configuration health is strong. Re-run after major changes to catch regressions.
For feature recommendations: `/config-audit feature-gap`
```

**Grade C:**
```
Run `/config-audit fix` to auto-fix what's possible, then `/config-audit plan` for a prioritized improvement path.
```

**Grade D or F:**
```
Start with `/config-audit fix` — it handles the most impactful issues automatically with backup and rollback.
Then run `/config-audit plan` for a step-by-step path to a better configuration.
```

### Step 5: Optional sections

**If `--drift` flag is present:**

Run the drift comparison silently:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/drift-cli.mjs <target-path> 2>/dev/null
```

Read the stdout output and append a "Configuration Drift" section showing what changed since the last baseline.

**If `--plugin-health` flag is present:**

Run the plugin health scanner silently:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/plugin-health-scanner.mjs <target-path> 2>/dev/null
```

Read the stdout output and append a "Plugin Health" section.

**If both flags:** Use `scanners/lib/report-generator.mjs` to produce a unified markdown report.

### Step 6: Save to session (if active)

If a config-audit session exists, save results:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/posture.mjs <target-path> --json --output-file ~/.claude/config-audit/sessions/<session-id>/posture.json 2>/dev/null
```
83
plugins/config-audit/commands/rollback.md
Normal file
@@ -0,0 +1,83 @@
---
name: config-audit:rollback
description: Restore configuration from backup — list available backups or rollback a specific one
argument-hint: "[backup-id]"
allowed-tools: Read, Write, Glob, Grep, Bash, AskUserQuestion
model: sonnet
---

# Config-Audit: Rollback

Restore configuration files from a previous backup. Without arguments, lists available backups. With a backup ID, restores files from that backup.

## Arguments

- `$ARGUMENTS` may contain a backup ID (format: `YYYYMMDD_HHMMSS`)

## Behavior

### List mode (no argument)

List available backups from `~/.claude/config-audit/backups/`:

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Available Backups
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

1. 20260403_163045 — 3 files (settings.json, hooks.json, typescript.md)
2. 20260403_141230 — 1 file (CLAUDE.md)
3. 20260402_092015 — 5 files (full audit)

Usage: /config-audit rollback 20260403_163045
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

Read each backup's `manifest.yaml` to extract the file list and timestamps.

### Restore mode (with backup ID)

1. Read the manifest from `~/.claude/config-audit/backups/{backup-id}/manifest.yaml`
2. Show the files that will be restored — ask for confirmation:
   ```
   AskUserQuestion:
   question: "Restore 3 files from backup 20260403_163045?"
   options:
   - "Yes, restore"
   - "Cancel"
   ```
3. For each file in the manifest:
   a. Read the backup file from `~/.claude/config-audit/backups/{backup-id}/files/{safeName}`
   b. Write it to the original path
   c. Verify the checksum matches the manifest
4. Show the result:
   ```
   Restored 3 files from backup 20260403_163045
   - .claude/settings.json (checksum verified)
   - hooks/hooks.json (checksum verified)
   - .claude/rules/typescript.md (checksum verified)
   ```

### Delete mode

If the user says "delete" after listing, confirm and remove the backup directory.

## Implementation

Use the backup and rollback libraries directly:

```javascript
import { listBackups, restoreBackup, deleteBackup } from '../scanners/rollback-engine.mjs';
import { parseManifest } from '../scanners/lib/backup.mjs';
```

Or via Bash:

```bash
# List backups
ls -1 ~/.claude/config-audit/backups/

# Read manifest
cat ~/.claude/config-audit/backups/{id}/manifest.yaml

# Restore (copy back)
cp ~/.claude/config-audit/backups/{id}/files/{safeName} {originalPath}
```
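The checksum check in step 3c can be sketched in shell. This is a minimal sketch using a throwaway file; the real flow reads the expected checksum from `manifest.yaml`:

```shell
# Sketch: restore a file, then verify its checksum against the recorded value.
work="$(mktemp -d)"
echo 'model: sonnet' > "$work/original"
expected="$(sha256sum "$work/original" | awk '{print $1}')"   # value the manifest would record

cp "$work/original" "$work/restored"       # stands in for copying from backups/{id}/files/
actual="$(sha256sum "$work/restored" | awk '{print $1}')"

if [ "$actual" = "$expected" ]; then
  echo "checksum verified"
else
  echo "checksum MISMATCH, keeping backup for inspection" >&2
  exit 1
fi
```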
114
plugins/config-audit/commands/status.md
Normal file
@@ -0,0 +1,114 @@
---
name: config-audit:status
description: Show current session state and available actions
allowed-tools: Read, Glob
model: sonnet
---

# Config-Audit: Status

Display current session state and guide next actions.

## Usage

```
/config-audit status
```

## Implementation

1. **Find active session**:
   ```
   Glob: ~/.claude/config-audit/sessions/*/state.yaml
   Sort by modification time
   Use most recent
   ```

2. **Read session state**:
   ```yaml
   session_id: "20250126_143022"
   current_phase: "analyze"
   completed_phases: ["discover", "analyze"]
   next_phase: "interview"
   ...
   ```

3. **Display status**:
   ```
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
   Config-Audit Session Status
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

   Session: 20250126_143022
   Started: 2025-01-26 14:30:22

   PHASE PROGRESS
   ──────────────
   ✓ Phase 1: Discover - 15 files found (current directory)
   ✓ Phase 2: Analyze - report generated
   ○ Phase 3: Interview - not started (optional)
   ○ Phase 4: Plan - not started
   ○ Phase 5: Implement - not started

   NEXT ACTION
   ───────────
   Run: /config-audit interview
   Or: /config-audit plan (skip interview)

   SESSION FILES
   ─────────────
   Scope: ~/.claude/config-audit/sessions/20250126_143022/scope.yaml
   Findings: ~/.claude/config-audit/sessions/20250126_143022/findings/
   Report: ~/.claude/config-audit/sessions/20250126_143022/analysis-report.md

   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
   ```

4. **If no session found**:
   ```
   No active config-audit session found.

   Start a new audit with:
   /config-audit            # Full audit with auto-scope
   /config-audit discover   # Discovery phase only
   ```
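Step 1's newest-session lookup can be sketched in shell; a throwaway directory stands in for `~/.claude/config-audit/sessions` so the snippet runs standalone:

```shell
# Sketch: find the most recently modified state.yaml across sessions.
sessions="$(mktemp -d)"
mkdir -p "$sessions/20250125_091500" "$sessions/20250126_143022"
touch -t 202501250915 "$sessions/20250125_091500/state.yaml"
touch -t 202501261430 "$sessions/20250126_143022/state.yaml"

# `ls -t` sorts by modification time, newest first
latest="$(ls -t "$sessions"/*/state.yaml | head -n 1)"
echo "$latest"     # path of the active session's state.yaml
```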

## Session Information

Display based on completed phases:

| Phase | Info to Display |
|-------|-----------------|
| scope | Scope type, paths to scan |
| discover | Files found count, issues count |
| analyze | Conflicts, duplicates, opportunities |
| interview | Preferences summary |
| plan | Actions count, risk level |
| implement | Success/fail counts, backup location |

## List All Sessions

With the `all` flag:

```
/config-audit status all
```

Shows:

```
All config-audit sessions:

| Session | Phase | Created |
|---------|-------|---------|
| 20250126_143022 | analyze | 2025-01-26 14:30 |
| 20250125_091500 | complete | 2025-01-25 09:15 |
| 20250120_160000 | implement | 2025-01-20 16:00 |
```

## Resume Session

If multiple sessions exist:

```
/config-audit resume {session-id}
```

Sets that session as active and continues from the last phase.
1
plugins/config-audit/examples/minimal-setup/CLAUDE.md
Normal file
@@ -0,0 +1 @@
# My Project
17
plugins/config-audit/examples/minimal-setup/README.md
Normal file
@@ -0,0 +1,17 @@
# Minimal Setup Example

This example demonstrates a bare-minimum Claude Code project — just a single-line CLAUDE.md with no other configuration.

## What to expect

Running `node ../../scanners/posture.mjs .` from this directory will show:

- **Low utilization score** — most features are unused
- **Low maturity level** — no hooks, no rules, no settings
- **Multiple feature gap findings** — all tiers flagged

## Why this matters

Even a single CLAUDE.md file is enough for Claude Code to work. But without permissions, hooks, rules, or MCP configuration, you're leaving significant capability on the table.

Compare with the [optimal-setup](../optimal-setup/) example to see what a fully-configured project looks like.
@@ -0,0 +1,5 @@
{
  "name": "optimal-project",
  "description": "Example project demonstrating optimal Claude Code configuration",
  "version": "1.0.0"
}
@@ -0,0 +1,15 @@
---
name: review-agent
description: |
  Code review agent that checks for style violations,
  potential bugs, and test coverage gaps.
model: sonnet
color: green
isolation: worktree
tools: ["Read", "Glob", "Grep"]
---

Review the specified files for:
1. Style violations per code-style rules
2. Missing error handling
3. Untested code paths
@@ -0,0 +1,12 @@
---
name: build
description: Build the project with current branch context
argument-hint: "[--watch]"
allowed-tools: Bash, Read
model: sonnet
---

Current branch: !`git branch --show-current`
Status: !`git status --short`

Build the project. If --watch is specified, run in watch mode.
@@ -0,0 +1,6 @@
[
  {
    "key": "shift+enter",
    "command": "chat:newline"
  }
]
@@ -0,0 +1,9 @@
---
paths: "src/**/*.ts"
---

# Code Style

- Use explicit return types on all exported functions
- Prefer `const` over `let`
- No `any` types — use `unknown` and narrow
@@ -0,0 +1,9 @@
---
paths: "tests/**/*"
---

# Testing Conventions

- Use `describe`/`it` blocks with clear names
- One assertion per test where practical
- Mock external services, not internal modules
@@ -0,0 +1,31 @@
{
  "$schema": "https://cdn.anthropic.com/schemas/claude-code/settings.schema.json",
  "model": "sonnet",
  "permissions": {
    "allow": [
      "Read",
      "Glob",
      "Grep",
      "Bash(npm test)",
      "Bash(npm run build)"
    ],
    "deny": [
      "Bash(rm -rf *)"
    ]
  },
  "statusLine": {
    "enabled": true
  },
  "outputStyle": "concise",
  "worktree": {
    "symlinkDirectories": [
      "node_modules"
    ]
  },
  "autoMode": {
    "enabled": false
  },
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}
8
plugins/config-audit/examples/optimal-setup/.lsp.json
Normal file
@@ -0,0 +1,8 @@
{
  "servers": {
    "typescript": {
      "command": "typescript-language-server",
      "args": ["--stdio"]
    }
  }
}
9
plugins/config-audit/examples/optimal-setup/.mcp.json
Normal file
@@ -0,0 +1,9 @@
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
      "trust": "local"
    }
  }
}
33
plugins/config-audit/examples/optimal-setup/CLAUDE.md
Normal file
@@ -0,0 +1,33 @@
# Optimal Project

A fully-configured Claude Code project demonstrating best practices.

## Overview

This project uses TypeScript with a standard src/tests layout. All configuration follows Claude Code best practices for permissions, hooks, rules, and tooling.

## Commands

| Command | Description |
|---------|-------------|
| `/build` | Build the project with status context |

## Architecture

```
src/       # Application source (TypeScript)
tests/     # Test files
.claude/   # Claude Code configuration
hooks/     # Git and Claude hooks
```

## Code Standards

- TypeScript strict mode
- ESLint + Prettier
- 80% test coverage minimum

## Gotchas

- Run `npm install` before first use
- Tests require Node.js 18+
38
plugins/config-audit/examples/optimal-setup/README.md
Normal file
@@ -0,0 +1,38 @@
# Optimal Setup Example

This example demonstrates a fully-configured Claude Code project that scores A on config-audit's posture assessment.

## What's configured

| Feature | File | Gap Check |
|---------|------|-----------|
| Project instructions | `CLAUDE.md` | t1_1 |
| Permissions | `.claude/settings.json` | t1_2 |
| Hooks (3 events) | `hooks/hooks.json` | t1_3, t2_5 |
| Custom commands | `.claude/commands/build.md` | t1_4 |
| MCP servers | `.mcp.json` | t1_5, t4_1 |
| Multi-scope settings | `.claude/settings.local.json` | t2_1 |
| Modular rules | `.claude/rules/` | t2_2 |
| Path-scoped rules | `code-style.md`, `testing.md` | t2_3 |
| Custom agents | `.claude/agents/review-agent.md` | t2_6 |
| Model config | `settings.json` (model key) | t2_7 |
| Status line | `settings.json` (statusLine) | t3_1 |
| Custom keybindings | `.claude/keybindings.json` | t3_2 |
| Output style | `settings.json` (outputStyle) | t3_3 |
| Worktree config | `settings.json` (worktree) | t3_4 |
| Advanced skill frontmatter | `build.md` (argument-hint) | t3_5 |
| Agent isolation | `review-agent.md` (worktree) | t3_6 |
| Dynamic context | `build.md` (!`git ...`) | t3_7 |
| Auto mode | `settings.json` (autoMode) | t3_8 |
| Plugin manifest | `.claude-plugin/plugin.json` | t4_2 |
| Agent teams | `settings.json` (env) | t4_3 |
| LSP config | `.lsp.json` | t4_5 |

## How to test

```bash
cd examples/optimal-setup
node ../../scanners/posture.mjs .
```

Expected: A-grade score with high utilization across all tiers.
38
plugins/config-audit/examples/optimal-setup/hooks/hooks.json
Normal file
@@ -0,0 +1,38 @@
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "echo 'Pre-tool check passed'",
            "timeout": 5000
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "echo 'Post-tool verification passed'",
            "timeout": 5000
          }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          {
            "type": "prompt",
            "prompt": "Remember to commit your changes before ending the session."
          }
        ]
      }
    ]
  }
}
27
plugins/config-audit/examples/run-demo.sh
Executable file
@@ -0,0 +1,27 @@
#!/bin/bash
# Demo: run config-audit scanners on the example projects
# Usage: bash examples/run-demo.sh   (from plugin root)
#    or: cd examples && bash run-demo.sh

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
if [ -d "$SCRIPT_DIR/../scanners" ]; then
  SCANNER_DIR="$(cd "$SCRIPT_DIR/../scanners" && pwd)"
else
  SCANNER_DIR=""
fi

if [ -z "$SCANNER_DIR" ] || [ ! -f "$SCANNER_DIR/posture.mjs" ]; then
  echo "Error: Cannot find scanners/posture.mjs"
  echo "Run from plugin root: bash examples/run-demo.sh"
  exit 1
fi

echo "=== Minimal Setup (expect low score) ==="
echo ""
node "$SCANNER_DIR/posture.mjs" "$SCRIPT_DIR/minimal-setup/"

echo ""
echo ""
echo "=== Optimal Setup (expect high score) ==="
echo ""
node "$SCANNER_DIR/posture.mjs" "$SCRIPT_DIR/optimal-setup/"
50
plugins/config-audit/hooks/hooks.json
Normal file
@@ -0,0 +1,50 @@
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "node ${CLAUDE_PLUGIN_ROOT}/hooks/scripts/auto-backup-config.mjs",
            "timeout": 5000
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "node ${CLAUDE_PLUGIN_ROOT}/hooks/scripts/post-edit-verify.mjs",
            "timeout": 10000
          }
        ]
      }
    ],
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "node ${CLAUDE_PLUGIN_ROOT}/hooks/scripts/session-start.mjs",
            "timeout": 5000
          }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "node ${CLAUDE_PLUGIN_ROOT}/hooks/scripts/stop-session-reminder.mjs",
            "timeout": 5000
          }
        ]
      }
    ]
  }
}
84
plugins/config-audit/hooks/scripts/auto-backup-config.mjs
Normal file
@@ -0,0 +1,84 @@
#!/usr/bin/env node
/**
 * PreToolUse hook: auto-backup config files before Edit/Write.
 * Reads the tool input JSON from stdin to check whether the target is a config file.
 * If so, backs it up via scanners/lib/backup.mjs.
 * Fast path — no scanner execution.
 */

import { existsSync } from 'node:fs';
import { basename, dirname, sep } from 'node:path';

// Config file patterns to protect
const CONFIG_PATTERNS = [
  /CLAUDE\.md$/i,
  /CLAUDE\.local\.md$/i,
  /settings\.json$/,
  /settings\.local\.json$/,
  /hooks\.json$/,
  /\.mcp\.json$/,
  /keybindings\.json$/,
];

const CONFIG_DIRS = ['rules'];

function isConfigFile(filePath) {
  if (!filePath) return false;
  const name = basename(filePath);
  const dir = dirname(filePath);

  // Check filename patterns
  for (const pattern of CONFIG_PATTERNS) {
    if (pattern.test(name)) return true;
  }

  // Check if inside a rules/ directory
  for (const d of CONFIG_DIRS) {
    if (dir.includes(`${sep}${d}${sep}`) || dir.endsWith(`${sep}${d}`)) {
      if (name.endsWith('.md')) return true;
    }
  }

  return false;
}

/**
 * Read all data from stdin asynchronously.
 * @returns {Promise<string>}
 */
function readStdin() {
  return new Promise((resolve, reject) => {
    const chunks = [];
    process.stdin.setEncoding('utf-8');
    process.stdin.on('data', chunk => chunks.push(chunk));
    process.stdin.on('end', () => resolve(chunks.join('')));
    process.stdin.on('error', reject);
  });
}

async function main() {
  let input;
  try {
    input = await readStdin();
  } catch {
    process.exit(0);
  }

  let toolInput;
  try {
    toolInput = JSON.parse(input);
  } catch {
    process.exit(0);
  }

  const filePath = toolInput.file_path || toolInput.path;
  if (!filePath || !isConfigFile(filePath) || !existsSync(filePath)) {
    process.exit(0);
  }

  const { createBackup } = await import('../../scanners/lib/backup.mjs');
  const { backupPath } = createBackup([filePath]);
  process.stderr.write(`[config-audit] Auto-backup: ${basename(filePath)} → ${backupPath}\n`);
}

main().catch(() => process.exit(0));
18
plugins/config-audit/hooks/scripts/backup-before-change.mjs
Normal file
@@ -0,0 +1,18 @@
#!/usr/bin/env node
// Backup script for config-audit plugin
// Creates timestamped backups of config files before modification
// Usage: node backup-before-change.mjs <file1> [file2] ...

import { createBackup } from '../../scanners/lib/backup.mjs';

const files = process.argv.slice(2);

if (files.length === 0) {
  process.stderr.write('Usage: node backup-before-change.mjs <file1> [file2] ...\n');
  process.exit(1);
}

const { backupPath } = createBackup(files);

// Human-readable status on stderr; machine-readable path on stdout
process.stderr.write(`Backup complete: ${backupPath}\n`);
console.log(backupPath);
191
plugins/config-audit/hooks/scripts/post-edit-verify.mjs
Normal file
@@ -0,0 +1,191 @@
#!/usr/bin/env node
/**
 * PostToolUse hook: verify config files after Edit/Write.
 * Runs the relevant single scanner on the edited file.
 * Blocks if new critical/high findings are introduced.
 * Timeout: 10 seconds (runs one scanner, not all 8).
 * Graceful degradation: returns {} (allow) on any error.
 */

import { existsSync, readFileSync, writeFileSync } from 'node:fs';
import { basename, dirname, resolve, sep } from 'node:path';
import { createHash } from 'node:crypto';
import { tmpdir } from 'node:os';

// Config file patterns (shared with auto-backup-config.mjs)
const CONFIG_PATTERNS = [
  { pattern: /CLAUDE\.md$/i, scanner: 'CML' },
  { pattern: /CLAUDE\.local\.md$/i, scanner: 'CML' },
  { pattern: /settings\.json$/, scanner: 'SET' },
  { pattern: /settings\.local\.json$/, scanner: 'SET' },
  { pattern: /hooks\.json$/, scanner: 'HKV' },
  { pattern: /\.mcp\.json$/, scanner: 'MCP' },
];

const RULES_DIR_PATTERN = /[/\\]rules[/\\]/;

function detectScanner(filePath) {
  if (!filePath) return null;
  const name = basename(filePath);
  const dir = dirname(filePath);

  for (const { pattern, scanner } of CONFIG_PATTERNS) {
    if (pattern.test(name)) return scanner;
  }

  // Rules directory
  if ((RULES_DIR_PATTERN.test(dir) || dir.endsWith(`${sep}rules`)) && name.endsWith('.md')) {
    return 'RUL';
  }

  return null;
}

function getCacheKey(filePath) {
  const hash = createHash('md5').update(filePath).digest('hex').slice(0, 8);
  return resolve(tmpdir(), `config-audit-last-scan-${hash}.json`);
}

function loadPreviousScan(cacheFile) {
  try {
    if (existsSync(cacheFile)) {
      return JSON.parse(readFileSync(cacheFile, 'utf-8'));
    }
  } catch { /* ignore */ }
  return null;
}

function saveScanResult(cacheFile, result) {
  try {
    writeFileSync(cacheFile, JSON.stringify(result), 'utf-8');
  } catch { /* ignore */ }
}

/**
 * Read all data from stdin asynchronously.
 * @returns {Promise<string>}
 */
function readStdin() {
  return new Promise((resolve, reject) => {
    const chunks = [];
    process.stdin.setEncoding('utf-8');
    process.stdin.on('data', chunk => chunks.push(chunk));
    process.stdin.on('end', () => resolve(chunks.join('')));
    process.stdin.on('error', reject);
    // Safety: if no data arrives within 2s, resolve with empty string
    setTimeout(() => resolve(chunks.join('')), 2000);
  });
}

function allow() {
  process.stdout.write('{}');
  process.exit(0);
}

/**
 * Walk up from filePath to find a likely project root.
 */
function findProjectRoot(fp) {
  let dir = dirname(resolve(fp));
  for (let i = 0; i < 10; i++) {
    if (existsSync(resolve(dir, '.git')) || existsSync(resolve(dir, 'CLAUDE.md'))) {
      return dir;
    }
    const parent = dirname(dir);
    if (parent === dir) break;
    dir = parent;
  }
  return dirname(resolve(fp));
}

async function main() {
  // Read stdin
  let raw;
  try {
    raw = await readStdin();
  } catch {
    allow();
    return;
  }

  // Parse tool input
  let toolInput;
  try {
    toolInput = JSON.parse(raw);
  } catch {
    allow();
    return;
  }

  const filePath = toolInput.file_path || toolInput.path;
  const scannerType = detectScanner(filePath);

  if (!scannerType || !filePath || !existsSync(filePath)) {
    allow();
    return;
  }

  // Run the relevant scanner
  const projectDir = findProjectRoot(filePath);
  const { discoverConfigFiles } = await import('../../scanners/lib/file-discovery.mjs');
  const { resetCounter } = await import('../../scanners/lib/output.mjs');

  const scannerMap = {
    CML: () => import('../../scanners/claude-md-linter.mjs'),
    SET: () => import('../../scanners/settings-validator.mjs'),
    HKV: () => import('../../scanners/hook-validator.mjs'),
    RUL: () => import('../../scanners/rules-validator.mjs'),
    MCP: () => import('../../scanners/mcp-config-validator.mjs'),
  };

  const loader = scannerMap[scannerType];
  if (!loader) {
    allow();
    return;
  }

  resetCounter();
  const { scan } = await loader();
  const discovery = await discoverConfigFiles(projectDir, { includeGlobal: false });
  const result = await scan(projectDir, discovery);

  // Compare with previous scan
  const cacheFile = getCacheKey(filePath);
  const previous = loadPreviousScan(cacheFile);

  // Save current result
  saveScanResult(cacheFile, {
    criticalCount: result.counts.critical || 0,
    highCount: result.counts.high || 0,
    findingCount: result.findings.length,
  });

  if (!previous) {
    allow();
    return;
  }

  // Check if new critical/high findings were introduced
  const newCritical = (result.counts.critical || 0) - (previous.criticalCount || 0);
  const newHigh = (result.counts.high || 0) - (previous.highCount || 0);

  if (newCritical > 0 || newHigh > 0) {
    const parts = [];
    if (newCritical > 0) parts.push(`${newCritical} new critical`);
    if (newHigh > 0) parts.push(`${newHigh} new high`);

    const response = {
      decision: 'block',
      reason: `[config-audit] Edit introduced ${parts.join(' and ')} finding(s) in ${basename(filePath)}. Review with /config-audit posture`,
    };
    process.stdout.write(JSON.stringify(response));
  } else {
    process.stdout.write('{}');
  }
}

main().catch(() => {
  // Graceful degradation — always allow on error
  process.stdout.write('{}');
  process.exit(0);
});
57
plugins/config-audit/hooks/scripts/session-start.mjs
Normal file
@@ -0,0 +1,57 @@
#!/usr/bin/env node
// Check for active (incomplete) config-audit sessions on session start
// Non-blocking: always exits 0

import { readdirSync, readFileSync, existsSync } from 'fs';
import { join } from 'path';
import { homedir } from 'os';

const sessionsDir = join(homedir(), '.config-audit', 'sessions');

if (!existsSync(sessionsDir)) {
  process.exit(0);
}

function parseYamlValue(content, key) {
  const match = content.match(new RegExp(`${key}:\\s*"?([^"\\n]*)"?`));
  return match ? match[1].trim() : '';
}

const activeSessions = [];

try {
  const entries = readdirSync(sessionsDir, { withFileTypes: true });

  for (const entry of entries) {
    if (!entry.isDirectory()) continue;

    const stateFile = join(sessionsDir, entry.name, 'state.yaml');
    if (!existsSync(stateFile)) continue;

    const content = readFileSync(stateFile, 'utf-8');
    const currentPhase = parseYamlValue(content, 'current_phase');

    if (currentPhase && currentPhase !== 'verify' && currentPhase !== 'complete') {
      const nextPhase = parseYamlValue(content, 'next_phase');
      activeSessions.push({
        id: entry.name,
        phase: currentPhase,
        next: nextPhase,
      });
    }
  }
} catch {
  process.exit(0);
}

if (activeSessions.length > 0) {
  console.log(`config-audit: ${activeSessions.length} active session(s) found:`);
  let lastNext = '';
  for (const s of activeSessions) {
    console.log(`  - Session ${s.id}: phase=${s.phase}, next=${s.next}`);
    lastNext = s.next;
  }
  console.log(`  Resume with: /config-audit ${lastNext}`);
}

process.exit(0);
66
plugins/config-audit/hooks/scripts/stop-session-reminder.mjs
Normal file
@@ -0,0 +1,66 @@
#!/usr/bin/env node
// Remind about current config-audit session phase on session end
// Returns JSON: {} if no active session, systemMessage if active

import { readdirSync, readFileSync, statSync, existsSync } from 'fs';
import { join, basename, dirname } from 'path';
import { homedir } from 'os';

const sessionsDir = join(homedir(), '.config-audit', 'sessions');

if (!existsSync(sessionsDir)) {
  console.log('{}');
  process.exit(0);
}

function parseYamlValue(content, key) {
  const match = content.match(new RegExp(`${key}:\\s*"?([^"\\n]*)"?`));
  return match ? match[1].trim() : '';
}

let latestState = '';
let latestTime = 0;

try {
  const entries = readdirSync(sessionsDir, { withFileTypes: true });

  for (const entry of entries) {
    if (!entry.isDirectory()) continue;

    const stateFile = join(sessionsDir, entry.name, 'state.yaml');
    if (!existsSync(stateFile)) continue;

    const fileTime = statSync(stateFile).mtimeMs;
    if (fileTime > latestTime) {
      latestTime = fileTime;
      latestState = stateFile;
    }
  }
} catch {
  console.log('{}');
  process.exit(0);
}

if (latestState) {
  // Only remind if session was touched in the last 2 hours (active work)
  const twoHoursMs = 2 * 60 * 60 * 1000;
  if (Date.now() - latestTime > twoHoursMs) {
    console.log('{}');
    process.exit(0);
  }

  const content = readFileSync(latestState, 'utf-8');
  const currentPhase = parseYamlValue(content, 'current_phase');
  const nextPhase = parseYamlValue(content, 'next_phase');

  if (currentPhase && currentPhase !== 'verify' && currentPhase !== 'complete') {
    const sessionId = basename(dirname(latestState));
    console.log(JSON.stringify({
      systemMessage: `config-audit: Session ${sessionId} is at phase '${currentPhase}'. Next: /config-audit ${nextPhase}`
    }));
    process.exit(0);
  }
}

console.log('{}');
process.exit(0);
44
plugins/config-audit/knowledge/anti-patterns.md
Normal file
@@ -0,0 +1,44 @@
# Configuration Anti-Patterns

> 28 anti-patterns with detection IDs, severity, and fix. Mapped to scanner finding IDs where applicable.

| # | Pattern | Detection | Severity | Fix |
|---|---------|-----------|----------|-----|
| 1 | CLAUDE.md over 200 lines | CA-CML-001 | medium | Extract sections with `@import`. Split into domain-specific rule files in `.claude/rules/`. |
| 2 | No `@import` in CLAUDE.md over 100 lines | CA-CML-002 | low | Move large specs/docs to separate files, reference with `@path/to/file`. |
| 3 | No CLAUDE.local.md alongside CLAUDE.md | CA-CML-003 | low | Create `CLAUDE.local.md`, add it to `.gitignore`. Move personal dev notes and sandbox URLs there. |
| 4 | Duplicate content in CLAUDE.md sections | CA-CML-004 | low | Deduplicate. If the same instruction appears in multiple sections, the file has likely grown without review. |
| 5 | TODO/FIXME comments in CLAUDE.md | CA-CML-005 | low | Remove stale TODOs or complete them. Unresolved TODOs add noise to every session. |
| 6 | Broken `@import` path in CLAUDE.md | CA-CML-006 | high | Verify the imported file exists at the referenced path. Broken imports silently drop content. |
| 7 | No section headers in CLAUDE.md | CA-CML-007 | medium | Add `##` section headers. Claude uses structure to navigate selectively; flat text loads entirely. |
| 8 | settings.json missing `$schema` | CA-SET-001 | low | Add `"$schema": "https://json.schemastore.org/claude-code-settings.json"` as the first key. |
| 9 | Unknown or deprecated key in settings.json | CA-SET-002 | medium | Remove/replace. `includeCoAuthoredBy` is deprecated — use `attribution`. Unknown keys are silently ignored. |
| 10 | Type mismatch in settings.json | CA-SET-003 | high | Fix the value type. E.g., `disableAllHooks` must be a bool (`true`), not a string (`"true"`). Wrong types are silently ignored. |
| 11 | No `permissions.deny` rules | CA-SET-004 | high | Add deny rules for `.env`, `secrets/`, credentials. Without them, Claude can read sensitive files. |
| 12 | No `permissions.allow` rules in active project | CA-SET-005 | medium | Pre-allow safe commands: `Bash(npm run *)`, `Bash(git log *)`. Reduces constant permission prompts. |
| 13 | `defaultMode` left at `"default"` for all projects | CA-SET-006 | low | Set `"defaultMode": "acceptEdits"` for development repos, `"plan"` for infrastructure/prod repos. |
| 14 | hooks.json as array instead of object | CA-HKV-001 | high | Convert to an event-keyed object: `{"hooks": {"PreToolUse": [...]}}`, not `{"hooks": [...]}`. The array format is silently ignored. |
| 15 | Hook script path not found | CA-HKV-002 | high | Verify the script exists at the referenced path. Use `${CLAUDE_PLUGIN_ROOT}` for plugin scripts to prevent path fragility. |
| 16 | Invalid event name in hooks.json | CA-HKV-003 | high | Use only valid event names: SessionStart, PreToolUse, PostToolUse, Stop, etc. Typos (e.g., `PreTool`) are ignored. |
| 17 | Hook timeout not set on long-running script | CA-HKV-004 | medium | Add `"timeout": 30000` (ms) for scripts that may take time. The default timeout may kill scripts prematurely. |
| 18 | hooks.json `matcher` as nested object | CA-HKV-005 | high | `"matcher"` must be a plain string (`"Bash"`), not `{"tool": "Bash"}`. The nested-object format is never matched. |
| 19 | `"hooks"` key in plugin.json | CA-HKV-006 | medium | Remove it from plugin.json. Hooks are auto-discovered from `hooks/hooks.json`; declaring them in plugin.json causes duplicate registration. |
| 20 | Rules file without `paths:` frontmatter | CA-RUL-001 | medium | Add `paths:` glob patterns. Without paths, the rule loads for every session regardless of file context. |
| 21 | Rules file glob doesn't match any project files | CA-RUL-002 | low | Fix the glob pattern. `src/**/*.ts` won't match `./src/file.ts` — test actual paths. |
| 22 | Deprecated frontmatter field in rules file | CA-RUL-003 | low | Remove/replace deprecated fields. Check the official docs for the current frontmatter schema. |
| 23 | `.claude/rules/` directory missing entirely | CA-RUL-004 | medium | Create the directory and split CLAUDE.md by domain. Path-specific rules dramatically reduce context overhead. |
| 24 | MCP server with no trust level set | CA-MCP-001 | medium | Set `"trust": "workspace"` or `"trusted"` explicitly. The default is untrusted/sandboxed and may cause unexpected failures. |
| 25 | User MCP servers in project `.mcp.json` | CA-MCP-002 | low | Move personal MCP servers to `~/.claude.json`. Project `.mcp.json` is for servers the whole team needs. |
| 26 | No custom skills when team has repeated workflows | CA-GAP-001 | medium | Create skills for `/deploy`, `/review-pr`, `/fix-issue`. Repeated multi-step workflows are the target. |
| 27 | Custom agents without `description` field | CA-GAP-002 | medium | Add a description explaining when to delegate to this agent. Without it, Claude never auto-invokes it. |
| 28 | No hooks configured at all | CA-GAP-003 | high | Add at minimum a `Stop` hook for session summaries. Zero hooks is the most common high-value gap. |

---

## Severity Scale

| Severity | Meaning |
|----------|---------|
| high | Silent failure or security risk — the config item is ignored OR sensitive data is exposed |
| medium | Significant productivity loss or maintenance risk |
| low | Missed optimization; config works but suboptimally |
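Anti-patterns 14, 16, and 18 all describe hooks.json shapes that fail silently. As an illustration of how mechanical these checks are, a few lines of JavaScript can catch all three before a session ever runs. This is a minimal sketch, not the plugin's actual HKV scanner, and `VALID_EVENTS` is abbreviated per row 16:

```javascript
// Minimal lint for the silent-failure hooks.json shapes in rows 14, 16, and 18.
// VALID_EVENTS is abbreviated; the real event list is longer.
const VALID_EVENTS = new Set(['SessionStart', 'PreToolUse', 'PostToolUse', 'Stop']);

function lintHooks(config) {
  const problems = [];
  if (Array.isArray(config.hooks)) {
    // Row 14: the array form is silently ignored
    problems.push('hooks must be an event-keyed object, not an array');
    return problems;
  }
  for (const [event, entries] of Object.entries(config.hooks ?? {})) {
    if (!VALID_EVENTS.has(event)) {
      // Row 16: typo'd event names are ignored
      problems.push(`unknown event name: ${event}`);
    }
    for (const entry of Array.isArray(entries) ? entries : []) {
      if ('matcher' in entry && typeof entry.matcher !== 'string') {
        // Row 18: nested-object matchers never match
        problems.push(`matcher in ${event} must be a plain string`);
      }
    }
  }
  return problems;
}
```

Running it on `{"hooks": []}` or on a `{"tool": "Bash"}` matcher surfaces the problem immediately; a clean event-keyed config returns an empty array.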
345
plugins/config-audit/knowledge/claude-code-capabilities.md
Normal file
@@ -0,0 +1,345 @@
# Claude Code Configuration Capabilities

> Source: Official Claude Code documentation (code.claude.com/docs), 75 pages, verified 2026-04-03.

## Official Configuration Guidance (Anthropic)

These principles are backed by official docs and verified community reports. Use them to ground recommendations.

### Core Architecture

- **CLAUDE.md is advisory, not enforced.** It is injected as user-message context — Claude reads it and tries to follow it, but there is no guarantee of strict compliance. Compliance depends on specificity and file length.
- **settings.json is the enforcement layer.** Permissions, sandbox rules, and tool grants are enforced by the client regardless of what Claude decides to do.
- **Hooks are deterministic.** Unlike advisory CLAUDE.md instructions, hooks guarantee the action happens every time, with zero exceptions.

### Proven Impact

- **CLAUDE.md over 200 lines degrades adherence.** GitHub issue #22503 documents a 300-line CLAUDE.md being "ignored 80+ times." Official docs now explicitly call this out: "important rules get lost in the noise."
- **Path-scoped rules reduce context noise.** Rules without `paths:` frontmatter load every session regardless of relevance. Scoped rules trigger only when Claude reads matching files.
- **Conflicting instructions cause arbitrary behavior.** When CLAUDE.md contains contradictions, Claude picks one arbitrarily. No priority mechanism resolves conflicts within a single CLAUDE.md.
- **The system prompt takes precedence over CLAUDE.md.** Built-in system prompts (plan mode, agent launching) can override user-defined CLAUDE.md instructions.

### When Each Feature Is Relevant

| Feature | Relevant when... | Not needed when... |
|---------|-----------------|-------------------|
| permissions.deny | Sensitive files exist (.env, secrets/) | Fully trusted solo dev environment |
| hooks | Repeatable automation or safety checks needed | Occasional manual workflows |
| path-scoped rules | Multiple languages, contexts, or large codebase | Single-language, small project |
| MCP servers (.mcp.json in git) | Team shares tool access | Solo project, personal tools only |
| custom agents | Specialized parallel workflows | Linear single-task coding |
| custom skills | Repeated multi-step workflows | One-off tasks |
| CLAUDE.local.md | Personal preferences differ from team | Solo developer |
| model overrides | Different tasks need different cost/capability | Default model works for all tasks |
| output styles | Team has specific formatting needs | Default style is sufficient |
| managed settings | Organization-wide policy enforcement | No org, solo developer |

---

## 1. CLAUDE.md — Project Memory

**What it is:** Markdown file injected into every session as user-message context.

| Scope | Location |
|-------|----------|
| Project (shared) | `./CLAUDE.md` or `./.claude/CLAUDE.md` |
| Project (personal) | `./CLAUDE.local.md` (gitignored) |
| User (all projects) | `~/.claude/CLAUDE.md` |
| Org-managed (macOS) | `/Library/Application Support/ClaudeCode/CLAUDE.md` |
| Org-managed (Linux) | `/etc/claude-code/CLAUDE.md` |

**Key features:** `@import` syntax inlines other files (max 5 hops); HTML comments `<!-- -->` are stripped before injection (free maintainer notes); lazy loading of subdirectory files; the `claudeMdExcludes` setting skips files by glob.

**Fully utilizing:** CLAUDE.md under 200 lines with clear headers; `@import` for large specs; `CLAUDE.local.md` for personal sandbox URLs; auto-memory enabled.

**Common gaps:** No `CLAUDE.local.md`; no `@import`s (one huge file); no user-level `~/.claude/CLAUDE.md`; file over 200 lines, reducing adherence.

---
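Since a broken `@import` path silently drops content (the high-severity CA-CML-006 case in the anti-patterns list), a quick standalone check is worth having. The sketch below assumes imports appear as line-leading `@path` tokens, per the syntax described above; it is an illustration, not the plugin's claude-md-linter:

```javascript
// Sketch: list @import targets in a CLAUDE.md that do not exist on disk.
// Assumes line-leading "@path" import syntax; illustrative only, not CA-CML-006.
import { existsSync, readFileSync } from 'node:fs';
import { dirname, resolve } from 'node:path';

export function findBrokenImports(claudeMdPath) {
  const text = readFileSync(claudeMdPath, 'utf-8');
  const root = dirname(resolve(claudeMdPath));
  const broken = [];
  // Match "@some/path" only at the start of a line
  for (const match of text.matchAll(/^@(\S+)/gm)) {
    const target = resolve(root, match[1]);
    if (!existsSync(target)) broken.push(match[1]);
  }
  return broken;
}
```

Every path it returns is content Claude will never see, which is exactly why the scanner treats it as high severity.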
|
||||||
|
|
||||||
|
## 2. CLAUDE.local.md — Personal Project Config
|
||||||
|
|
||||||
|
**What it is:** Companion to CLAUDE.md; appended after it; gitignored by default.
|
||||||
|
|
||||||
|
**Config location:** `./CLAUDE.local.md` (project root)
|
||||||
|
|
||||||
|
**Key fields/options:** Free-form markdown, same syntax as CLAUDE.md. Ideal for personal API keys, sandbox URLs, local dev notes.
|
||||||
|
|
||||||
|
**Fully utilizing:** Personal overrides that shouldn't be committed; local tool paths; developer-specific preferences.
|
||||||
|
|
||||||
|
**Common gaps:** File never created; personal preferences mixed into shared CLAUDE.md.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 3. ~/.claude/CLAUDE.md — User-Level Memory
|
||||||
|
|
||||||
|
**What it is:** Loaded for every project; lower precedence than project CLAUDE.md.
|
||||||
|
|
||||||
|
**Config location:** `~/.claude/CLAUDE.md`
|
||||||
|
|
||||||
|
**Key fields/options:** Same markdown as project CLAUDE.md; `@import` supported.
|
||||||
|
|
||||||
|
**Fully utilizing:** Personal coding style, preferred tools, communication preferences applied everywhere.
|
||||||
|
|
||||||
|
**Common gaps:** File never created; duplicating same instructions in every project CLAUDE.md.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 4. User Rules — ~/.claude/rules/

**What it is:** Personal rules files that load based on path patterns.

**Config location:** `~/.claude/rules/*.md`

**Key fields/options:** YAML frontmatter `paths:` field with glob patterns — file loads only when Claude works on matching paths. Symlinks supported. Recursive discovery.

```yaml
---
paths: ["src/**/*.ts"]
---
# TypeScript-specific rules here
```

**Fully utilizing:** Separate rule files per language, per domain, per tool; prevents irrelevant rules loading.

**Common gaps:** No `~/.claude/rules/` at all; everything in one CLAUDE.md that always loads.

---
## 5. Project Rules — .claude/rules/

**What it is:** Project-scoped rules with path-specific activation.

**Config location:** `./.claude/rules/*.md`

**Key fields/options:** Same `paths:` frontmatter as user rules. Committed to git for team sharing.

**Fully utilizing:** TypeScript rules only load for `.ts` files; migration rules only load for `db/**` paths; test rules only load for `**/*.test.*`.

**Common gaps:** No `.claude/rules/` directory; path-specific rules not used; all rules always load.

---
## 6. Org-Managed CLAUDE.md

**What it is:** Org-controlled instructions that cannot be overridden by users.

**Config locations:** `/Library/Application Support/ClaudeCode/CLAUDE.md` (macOS), `/etc/claude-code/CLAUDE.md` (Linux), `C:\Program Files\ClaudeCode\CLAUDE.md` (Windows)

**Fully utilizing:** Org-wide security policies, required compliance notes, standard workflow rules.

**Common gaps:** Not used in org deployments; individual teams manage their own configs without coordination.

---
## 7. Project settings.json

**What it is:** Project-level settings committed to git; shared with team.

**Config location:** `./.claude/settings.json`

**Key fields:** `permissions.allow/deny/ask`, `env`, `hooks`, `model`, `effortLevel`, `attribution`, `enabledPlugins`, `enableAllProjectMcpServers`

**Fully utilizing:** Team-agreed allow/deny rules; project env vars; attribution config; plugin list for team.

**Common gaps:** File doesn't exist; no permissions configured; no env block; missing `$schema` line.
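A minimal starting point combining the fields above into a `./.claude/settings.json` sketch; the specific permission rules, env values, and plugin name are illustrative, not prescribed:

```json
{
  "$schema": "https://json.schemastore.org/claude-code-settings.json",
  "permissions": {
    "allow": ["Bash(npm run *)", "Bash(git status)"],
    "deny": ["Read(./.env)", "Read(./secrets/**)"]
  },
  "env": {"NODE_ENV": "development"},
  "enabledPlugins": {"llm-security@ktg-plugin-marketplace": true}
}
```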
---
## 8. User settings.json

**What it is:** Personal settings applied to all projects.

**Config location:** `~/.claude/settings.json`

**Key fields:** `model`, `effortLevel`, `outputStyle`, `language`, `statusLine`, `autoMemoryEnabled`, `autoMemoryDirectory`, `hooks`, `defaultShell`, `voiceEnabled`, `editorMode` (in `~/.claude.json`)

**Fully utilizing:** Personal model preference; default effort level; status line config; user-level hooks.

**Common gaps:** File never touched; relying on project settings only; no personal preferences set.

---
## 9. Local settings.json

**What it is:** Per-project personal overrides; gitignored.

**Config location:** `./.claude/settings.local.json`

**Key fields:** Same as project settings.json; `autoMode` classifier (user/local settings only, not project).

**Fully utilizing:** Local dev API endpoints; personal permission overrides; local-only env vars.

**Common gaps:** Never created; personal overrides committed to shared settings.json.
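An illustrative `./.claude/settings.local.json`; the endpoint and command here are hypothetical:

```json
{
  "permissions": {
    "allow": ["Bash(docker compose *)"]
  },
  "env": {"API_BASE_URL": "http://localhost:4000"}
}
```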
---
## 10. Managed Settings

**What it is:** Org-controlled settings at highest precedence; cannot be overridden.

**Config locations:** `managed-settings.json` in system dirs; `managed-settings.d/*.json` (alphabetical merge)

**Key fields (managed-only):** `allowedMcpServers`, `deniedMcpServers`, `allowManagedMcpServersOnly`, `allowManagedHooksOnly`, `allowManagedPermissionRulesOnly`, `allowedChannelPlugins`, `blockedMarketplaces`, `strictKnownMarketplaces`, `pluginTrustMessage`

**Fully utilizing:** Lock MCP servers to org-approved list; enforce hook policies; org announcements via `companyAnnouncements`.
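A sketch of what a `managed-settings.json` using these keys might look like. Note the exact value shapes (server names vs. URLs, string vs. array) are assumptions here, not confirmed by this document:

```json
{
  "allowManagedMcpServersOnly": true,
  "allowedMcpServers": ["github", "internal-kb"],
  "blockedMarketplaces": ["https://untrusted.example.com"],
  "pluginTrustMessage": "Contact security@example.com before enabling new plugins."
}
```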
---
## 11. .mcp.json — Project MCP Config

**What it is:** Project-level MCP server configuration; committed to git.

**Config location:** `./.mcp.json`

**Key fields:**

```json
{
  "mcpServers": {
    "name": {
      "type": "stdio|http",
      "command": "...", "args": [...],
      "url": "...",
      "env": {}, "timeout": 30000, "trust": "workspace|trusted|untrusted"
    }
  }
}
```

**Fully utilizing:** Team-shared MCP servers (GitHub, Jira, DBs); MCP resources via `@server:path`; MCP prompts as slash commands; `enableAllProjectMcpServers: true` for zero-friction team onboarding.

**Common gaps:** No `.mcp.json`; MCP only configured in `~/.claude.json` (not shared); trust levels not set; MCP resources not used.

---
## 12. ~/.claude.json — Global Config

**What it is:** Global non-settings preferences (separate file from settings.json).

**Config location:** `~/.claude.json`

**Key fields:** `mcpServers` (user-scope MCP), `autoConnectIde`, `autoInstallIdeExtension`, `editorMode` ("normal"/"vim"), `showTurnDuration`, `terminalProgressBarEnabled`, `teammateMode` ("auto"/"in-process"/"tmux")

**Fully utilizing:** User-level MCP servers; vim mode enabled; IDE auto-connect.

**Common gaps:** MCP servers configured per-project instead of here when they should be global; editorMode never set.
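An illustrative `~/.claude.json` using the fields above (the server entry is an example, not a requirement):

```json
{
  "editorMode": "vim",
  "autoConnectIde": true,
  "showTurnDuration": true,
  "mcpServers": {
    "memory": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}
```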
---
## 13. managed-mcp.json — Org MCP Config

**What it is:** Org-managed MCP servers deployed to all users.

**Config locations:** System directories (same as managed-settings.json).

**Key fields:** Same `mcpServers` format as `.mcp.json`.

**Fully utilizing:** Org-wide MCP servers (internal APIs, knowledge bases) available everywhere.

**Common gaps:** Not deployed in org setups; teams configure MCP independently.

---
## 14. keybindings.json

**What it is:** Custom keyboard shortcuts for Claude Code UI.

**Config location:** `~/.claude/keybindings.json` (open with `/keybindings`)

**Key fields:**

```json
{
  "$schema": "https://www.schemastore.org/claude-code-keybindings.json",
  "bindings": [{"context": "Chat", "bindings": {"shift+enter": "chat:newline"}}]
}
```

**Key actions:** `chat:submit`, `chat:newline`, `chat:externalEditor`, `chat:cycleMode`, `chat:thinkingToggle`, `chat:fastMode`, `voice:pushToTalk`

**Contexts:** Global, Chat, Autocomplete, Settings, Confirmation, Tabs, Help, Transcript, HistorySearch, Task, ThemePicker, Attachments, Footer, MessageSelector, DiffDialog, ModelPicker, Select, Plugin

**Fully utilizing:** `chat:newline` bound to Shift+Enter; external editor for complex prompts; chord bindings for workflows.

**Common gaps:** File never created; `chat:newline` not bound (most common friction); vim mode not enabled for vim users.

---
## 15. Skills

**What it is:** Custom slash commands with full tool access.

**Config locations:** `~/.claude/skills/<name>/SKILL.md` (user); `.claude/skills/<name>/SKILL.md` (project); `.claude/commands/<name>.md` (legacy)

**Key frontmatter:** `name`, `description`, `argument-hint`, `allowed-tools`, `model`, `effort`, `context` (fork), `agent`, `hooks`, `paths`, `disable-model-invocation`, `user-invocable`, `shell`

**String substitutions:** `$ARGUMENTS`, `$ARGUMENTS[N]`, `$N`, `${CLAUDE_SESSION_ID}`, `${CLAUDE_SKILL_DIR}`

**Dynamic context:** ``!`command` `` executes a shell command and inlines its output.

**Bundled skills:** `/batch`, `/claude-api`, `/debug`, `/loop`, `/simplify`

**Fully utilizing:** Custom deploy/review workflows; `disable-model-invocation: true` on side-effect skills; `context: fork` for isolated research; ``!`git diff HEAD` `` for dynamic context; `argument-hint` for UX.

**Common gaps:** No custom skills; skills missing `description` (never auto-invoked); no ``!`command` `` dynamic context; bundled skills not used.
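Pulling several of these fields together, a hypothetical `.claude/skills/review-pr/SKILL.md` might look like this; the workflow itself is invented for illustration:

```markdown
---
name: review-pr
description: Review the diff for a pull request and summarize risks
argument-hint: "[pr-number]"
allowed-tools: Bash, Read, Grep
---
Current diff:
!`git diff HEAD`

Review PR $ARGUMENTS[0] against the diff above and report risks only.
```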
---
## 16. Agents (Subagents)

**What it is:** Named AI workers with scoped tools, models, and permissions.

**Config locations:** `.claude/agents/<name>.md` (project); `~/.claude/agents/<name>.md` (user); plugin `agents/`; managed `agents/`

**Key frontmatter:** `name`, `description`, `model`, `tools`, `disallowedTools`, `permissionMode`, `mcpServers`, `hooks`, `maxTurns`, `skills`, `initialPrompt`, `memory` ("user"/"none"), `effort`, `background`, `isolation` ("worktree"), `color`

**Built-in agents:** `Explore` (read-only, Haiku), `Plan` (read-only), `general-purpose` (all tools), `Claude Code Guide` (Haiku)

**Fully utilizing:** Domain agents (security-reviewer, test-writer); restricted tool sets; Haiku for scanning, Opus for analysis; `isolation: worktree` for parallel work; `memory: "user"` for persistent learning; `maxTurns` guard.

**Common gaps:** No custom agents; no tool restrictions; no model optimization; no persistent memory; no worktree isolation; missing `description` (never auto-delegated).
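A sketch of a domain agent using the frontmatter above; the name, prompt, and tool set are illustrative:

```markdown
---
name: security-reviewer
description: Use when auditing code changes for security issues
model: haiku
tools: ["Read", "Glob", "Grep"]
maxTurns: 20
memory: "user"
---
You are a security reviewer. Scan the changed files for injection,
secrets exposure, and unsafe deserialization. Report findings only.
```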
---
## 17. Plugins

**What it is:** Namespaced bundles of skills + agents + hooks + MCP + tools.

**Config location:** `.claude-plugin/plugin.json` (manifest); enabled via `enabledPlugins` in settings.json

**Key plugin.json fields:** `name`, `description`, `version`, `author`, `homepage`, `repository`, `license`

**Structure:** `skills/`, `agents/`, `hooks/hooks.json`, `.mcp.json`, `.lsp.json`, `bin/`, `settings.json`

**Enabling:**

```json
{"enabledPlugins": {"plugin-name@marketplace": true}}
```

**Fully utilizing:** Team plugins in shared marketplace; org tool bundles (MCP + skills + agents); LSP plugins for all languages; `bin/` for custom CLI tools.

**Common gaps:** No plugins; `.claude/` configs that should be plugins (not shareable); no LSP plugins; no team marketplace configured.
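A minimal `.claude-plugin/plugin.json` sketch using the fields listed above; the values (and the exact shape of `author`) are illustrative:

```json
{
  "name": "llm-security",
  "description": "Security scanning and threat modeling for Claude Code projects",
  "version": "1.0.0",
  "author": {"name": "Kjell Tore Guttormsen"},
  "license": "MIT"
}
```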
---
## 18. Output Styles

**What it is:** Named system prompt variants that change Claude's default behavior.

**Config locations:** `~/.claude/output-styles/*.md` (user); `.claude/output-styles/*.md` (project); `outputStyle` key in settings.json

**Built-in styles:** `Default` (SE assistant), `Explanatory` (educational Insights blocks), `Learning` (collaborative, TODO(human) markers)

**Custom format:**

```markdown
---
name: My Style
description: What this does
keep-coding-instructions: false
---
Instructions here...
```

**Key distinction:** Output styles replace the system prompt; CLAUDE.md adds a user message. Use output styles when you need stronger enforcement.

**Fully utilizing:** Custom style for documentation/analysis work; `Explanatory` for onboarding; project styles for specialized domains.

**Common gaps:** Never changed from Default; not knowing styles modify the system prompt (vs CLAUDE.md); no custom styles for specialized workflows.

---
# Configuration Best Practices

> Concrete, actionable patterns. No generic advice.

---
## CLAUDE.md

1. **Keep under 200 lines.** Claude's adherence drops on longer files. If the file exceeds 200 lines, extract sections with `@import`.
2. **Use `@import` for specs/docs.** `@path/to/spec.md` inlines the file at session start. Max 5 hops. Keeps the main file scannable.
3. **Use HTML comments for maintainer notes.** `<!-- Updated 2026-01-01: reason -->` is stripped before context injection — zero token cost.
4. **Put personal dev notes in `CLAUDE.local.md`**, not `CLAUDE.md`. Add `CLAUDE.local.md` to `.gitignore`. Team members' sandbox URLs should never appear in git.
5. **Write `~/.claude/CLAUDE.md` for preferences that apply everywhere.** Communication style, preferred tools, output format — not project-specific config.
6. **Use clear markdown headers** (`##` sections). Claude uses the structure to navigate; unstructured text is harder to follow selectively.
7. **Avoid contradicting project settings.json.** CLAUDE.md is a user message; settings.json permissions take precedence. Don't document permissions in CLAUDE.md — put them in settings.json where they're enforced.

---
## settings.json

1. **Add `$schema` to every settings.json.** `"$schema": "https://json.schemastore.org/claude-code-settings.json"` enables autocomplete in VS Code and Cursor. Takes 2 seconds, saves every future edit.
2. **Use all three scopes: user, project, local.** User (`~/.claude/settings.json`) for personal defaults. Project (`.claude/settings.json`) for team agreements. Local (`.claude/settings.local.json`) for personal project overrides.
3. **Put env vars in `settings.json` `env` block, not shell.** `{"env": {"NODE_ENV": "development"}}` ensures they're always set in Claude sessions, regardless of how the shell was launched.
4. **Set `defaultMode: "acceptEdits"` for active development projects.** Eliminates per-file permission prompts. Use `"plan"` for infrastructure repos where you want read-only analysis by default.
5. **Deny `.env` and `secrets/` explicitly.** `{"permissions": {"deny": ["Read(./.env)", "Read(./secrets/**)"]}}` — Claude cannot read these even if it reasons it should.
6. **Pre-allow repetitive safe commands.** `{"permissions": {"allow": ["Bash(npm run *)", "Bash(git status)", "Bash(git log *)"]}}` — eliminates constant prompts for read-only git operations.
7. **Configure `attribution` for org identity.** `{"attribution": {"commit": "Generated with Claude Code [bot]", "pr": ""}}` — keeps commit history clean and attributable.
8. **Set `effortLevel` per project, not per prompt.** `{"effortLevel": "high"}` for complex codebases, `"low"` for simple scripts. Avoids forgetting to set it each session.
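Several of the items above combined into one illustrative project settings.json; the values are examples, not recommendations for every repo (placing `defaultMode` under `permissions` is this sketch's assumption):

```json
{
  "$schema": "https://json.schemastore.org/claude-code-settings.json",
  "permissions": {"defaultMode": "acceptEdits"},
  "env": {"NODE_ENV": "development"},
  "effortLevel": "high",
  "attribution": {"commit": "Generated with Claude Code [bot]", "pr": ""}
}
```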
---
## Hooks
|
||||||
|
|
||||||
|
1. **Add a `Stop` hook before anything else.** `Stop` hook on session end is the most useful starting point — session summary, auto-commit prompt, notification. Many users have zero hooks; one Stop hook delivers immediate value.
|
||||||
|
2. **Use `PostToolUse` on Write/Edit for auto-formatting.** `{"PostToolUse": [{"matcher": "Write|Edit", "hooks": [{"type": "command", "command": "prettier --write ${CLAUDE_TOOL_OUTPUT_PATH}"}]}]}` — eliminates manual format steps.
|
||||||
|
3. **Use `PreToolUse` on Bash for security.** Validate shell commands before execution. Exit code 2 blocks the tool call with an error message shown to Claude.
|
||||||
|
4. **Use `SessionStart` for context injection.** Inject git branch name, active Linear issue, or CI status into context at session start. Cheaper than asking Claude to fetch it.
|
||||||
|
5. **Add `Notification` hook for desktop alerts.** When Claude needs input (permission prompt, idle), get a system notification. Without this, long sessions require constant manual checking.
|
||||||
|
6. **Match MCP tools precisely.** `"mcp__.*__write.*"` matches all write tools from all MCP servers. `"mcp__filesystem__.*"` matches all filesystem tools. Use patterns, not exact names.
|
||||||
|
7. **Keep hook scripts fast (< 2s for PreToolUse).** Blocking hooks run synchronously. Slow PreToolUse hooks add latency to every tool call. Use async for logging/reporting.
|
||||||
|
8. **Use `${CLAUDE_PLUGIN_ROOT}` for paths in plugin hooks.** Absolute paths break when plugins move. `${CLAUDE_PLUGIN_ROOT}/hooks/scripts/check.sh` is portable.
|
||||||
|
|
||||||
|
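A sketch combining a SessionStart context hook with a desktop Notification hook; `notify-send` is a Linux example standing in for whatever notifier your platform uses:

```json
{
  "hooks": {
    "SessionStart": [
      {"hooks": [{"type": "command", "command": "git branch --show-current"}]}
    ],
    "Notification": [
      {"hooks": [{"type": "command", "command": "notify-send 'Claude Code needs input'"}]}
    ]
  }
}
```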
---
## Rules (.claude/rules/)

1. **Use `paths:` frontmatter on every rules file.** Rules without `paths:` load for every file. A TypeScript rules file with `paths: ["**/*.ts", "**/*.tsx"]` only loads for TypeScript work — zero overhead otherwise.
2. **One rules file per domain or language.** `typescript.md`, `python.md`, `testing.md`, `migrations.md` — not one big `coding-rules.md`. Granular files = granular loading.
3. **Put project rules in `.claude/rules/`, user rules in `~/.claude/rules/`.** Project rules are team-specific and committed; user rules are personal preferences across all projects.
4. **Symlink shared rule sets.** If multiple projects share rules, symlink: `ln -s ../../shared/rules/security.md .claude/rules/security.md`. Claude follows symlinks.
5. **Test path globs before committing.** `paths: ["src/**"]` doesn't match `./src/file.ts` — leading `./` matters. Test with the actual file paths Claude will encounter.

---
## MCP

1. **Commit `.mcp.json` to git.** Team-shared MCP servers belong in `.mcp.json` at project root, not in individual `~/.claude.json` files. One commit, everyone gets the servers.
2. **Set `enableAllProjectMcpServers: true` in project settings.json** for zero-friction team onboarding. New team members don't have to manually approve each server.
3. **Set trust levels explicitly.** `"trust": "workspace"` for project-specific servers; `"trust": "trusted"` only for servers you fully control. Default is untrusted (sandboxed).
4. **Use `@server:resource/path` for dynamic data.** `@github:repos/owner/repo/issues` pulls live data into context. More reliable than asking Claude to fetch and parse.
5. **Deny MCP tools you don't want Claude to invoke.** `{"permissions": {"deny": ["mcp__filesystem__write_file"]}}` — even with a server connected, specific tools can be blocked.

---
## Skills

1. **Add `description` to every skill.** Without `description`, Claude never auto-invokes the skill. The description is the trigger — be specific about when to use it.
2. **Set `disable-model-invocation: true` on deploy/delete skills.** Side-effect commands should only run when the user explicitly types `/deploy`, not when Claude decides it's appropriate.
3. **Use ``!`git diff HEAD` `` for dynamic context.** Dynamic shell execution inlines current state at invocation time. Better than hardcoded file references that go stale.
4. **Use `context: fork` with a custom agent for isolated research.** Forks run in a separate context (and optionally a separate model), keeping research overhead out of the main session.
5. **Add `argument-hint` to all parameterized skills.** `argument-hint: "[issue-number]"` shows in the `/` menu autocomplete. Without it, users forget the expected argument format.
6. **Store large reference docs in skill subdirectory, not SKILL.md.** SKILL.md describes *when to load* each reference file. The references themselves stay separate so they're only loaded when needed.

---
## Agents

1. **Restrict tools to the minimum needed.** A read-only research agent should have `tools: ["Read", "Glob", "Grep"]`, not all tools. Scoped agents are safer and faster.
2. **Match model to task complexity.** Haiku for file discovery and scanning; Sonnet for implementation; Opus for architecture and analysis. Don't use Opus for tasks that are primarily file reading.
3. **Set `maxTurns` on autonomous agents.** Without a turn limit, a misconfigured agent can run indefinitely. `maxTurns: 20` is a reasonable default for most tasks.
4. **Write `description` as a trigger condition, not a title.** "Use when analyzing TypeScript files for type errors" beats "TypeScript analyzer". Claude uses the description to decide delegation.
5. **Use `isolation: worktree` for agents that make file changes.** Agents running in their own worktree can't interfere with the main session. Changes are reviewable before merge.
6. **Enable `memory: "user"` for domain-expert agents.** A security-reviewer agent that accumulates codebase knowledge across sessions gets better over time. Add `memory: "user"` to the frontmatter.

---
## Permissions

1. **Start with `defaultMode: "acceptEdits"`** for most projects. Then add specific `deny` rules for sensitive paths. More productive than prompting for every file write.
2. **Block secrets files by pattern, not by name.** `"deny": ["Read(./.env*)", "Read(./**/secrets/**)", "Read(./**/*.pem)"]` — catch all variants, not just `.env`.
3. **Use `additionalDirectories` for cross-repo work.** If Claude regularly reads `../shared-lib/`, add it: `{"additionalDirectories": ["../shared-lib/"]}`. Otherwise Claude can't access it without prompts.
4. **Configure `autoMode.environment` before using auto mode.** Without it, Claude's background safety classifier triggers false positives on your org's internal tool names and domains.
5. **Add `Agent()` deny rules for sensitive agents.** `{"deny": ["Agent(general-purpose)"]}` prevents the most powerful agent from running without explicit permission.
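These patterns combined into one illustrative permissions block (nesting `additionalDirectories` under `permissions` is this sketch's assumption):

```json
{
  "permissions": {
    "defaultMode": "acceptEdits",
    "deny": ["Read(./.env*)", "Read(./**/secrets/**)", "Read(./**/*.pem)", "Agent(general-purpose)"],
    "additionalDirectories": ["../shared-lib/"]
  }
}
```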
---

`plugins/config-audit/knowledge/feature-evolution.md`
# Claude Code Feature Evolution

> Timeline of major features, most recent first. Covers features with configuration impact.
> Source: Official Claude Code documentation, verified 2026-04-03.

---
## 2026

| Approx. Date | Feature | Config Impact |
|--------------|---------|---------------|
| Q1 2026 | **Agent Teams (experimental)** | Enable via `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` or env in settings.json. Configure display mode via `~/.claude.json` `teammateMode`. Hooks: `TeammateIdle`, `TaskCreated`, `TaskCompleted`. |
| Q1 2026 | **Elicitation events** | `Elicitation` and `ElicitationResult` hook events added. MCP servers can request user input; hooks control and log these requests. |
| Q1 2026 | **`SubagentStart` / `SubagentStop` hooks** | Added hook events for subagent lifecycle. `SubagentStop` is blocking — exit code 2 acts as a quality gate. |
| Q1 2026 | **`ConfigChange` hook event** | Fires when any config file changes on disk. Matcher: `user_settings`, `project_settings`, `local_settings`, `policy_settings`, `skills`. |
| Q1 2026 | **`InstructionsLoaded` hook event** | Fires when CLAUDE.md/.claude/rules files load. Useful for debugging instruction loading order and content. |
| Q1 2026 | **`StopFailure` / `PostToolUseFailure` hooks** | Error-path hooks added for better error observability and retry logic. |
| Q1 2026 | **`WorktreeCreate` / `WorktreeRemove` hooks** | `WorktreeCreate` is blocking; hook can return custom worktree path to replace Claude's default git worktree logic. Enables non-git VCS support. |
| Q1 2026 | **`PermissionDenied` hook** | Info-only event when auto mode denies a tool. Useful for logging and auditing denied operations. |
| Q1 2026 | **`SessionEnd` hook** | Fires on session termination. Matcher: `clear`, `resume`, `logout`, `prompt_input_exit`, `other`. |
| Q1 2026 | **HTTP hook type** | `type: "http"` hook handler posts to HTTP endpoints. `allowedHttpHookUrls` and `httpHookAllowedEnvVars` settings for security controls. |
| Q1 2026 | **Agent-type hooks** | `type: "agent"` hook handler — full subagent with tools for complex validation. |

---
## 2025

| Approx. Date | Feature | Config Impact |
|--------------|---------|---------------|
| Late 2025 | **Plugins system** | Namespaced skill/agent/hook/MCP bundles. `enabledPlugins` in settings.json. Plugin marketplace support. `--plugin-dir` for development. `/reload-plugins` command. `bin/` for CLI tools. |
| Late 2025 | **LSP plugins** | `.lsp.json` at plugin root provides real-time code intelligence. Official LSP plugins for TypeScript, Python, Rust, etc. |
| Late 2025 | **Output styles** | `outputStyle` setting; `~/.claude/output-styles/*.md` and `.claude/output-styles/*.md`. System prompt modification (stronger than CLAUDE.md). Built-in: Default, Explanatory, Learning. |
| Late 2025 | **Status line** | `statusLine` key in settings.json. Script receives stdin JSON with cost, context window %, model, worktree, session info. `/statusline` command for natural language config. |
| Late 2025 | **Skills system (v2)** | Major expansion of frontmatter fields: `context: fork`, `disable-model-invocation`, `user-invocable`, `paths`, `hooks`, `shell`. ``!`command` `` dynamic context. `$ARGUMENTS[N]` indexing. |
| Late 2025 | **Subagent isolation: worktree** | `isolation: worktree` in agent frontmatter. Each invocation gets own git worktree. Auto-cleaned on completion. |
| Late 2025 | **Subagent persistent memory** | `memory: "user"` in agent frontmatter. Accumulates knowledge to `~/.claude/agent-memory/`. |
| Late 2025 | **Subagent preloaded skills** | `skills:` array in agent frontmatter. Full skill content injected at agent startup (vs. description-only in regular sessions). |
| Mid 2025 | **Worktrees** | `claude --worktree <name>` CLI flag. `.worktreeinclude` for gitignored file propagation. `worktree.symlinkDirectories` and `worktree.sparsePaths` settings. |
| Mid 2025 | **MCP integration** | `.mcp.json` project-level config. `~/.claude.json` `mcpServers`. Three server types: stdio, http, sse. Resources via `@server:path`. Prompts as slash commands. |
| Mid 2025 | **Auto-memory** | `~/.claude/projects/<project>/memory/MEMORY.md`. `autoMemoryEnabled` setting. `autoMemoryDirectory` for custom path. Topic files loaded on demand. |
| Mid 2025 | **Managed settings** | `managed-settings.json`, `managed-settings.d/*.json`. Org-wide config at highest precedence. Managed-only keys for enterprise lockdown. |
| Mid 2025 | **`PreCompact`/`PostCompact` hooks** | Hooks for context compaction lifecycle. Matcher: `manual`, `auto`. |
| Early 2025 | **Hooks system (v1)** | Initial hooks in settings.json. Events: `SessionStart`, `UserPromptSubmit`, `PreToolUse`, `PermissionRequest`, `PostToolUse`, `Stop`, `Notification`, `CwdChanged`, `FileChanged`, `PreCompact`, `PostCompact`. `command` and `prompt` handler types. |
| Early 2025 | **`.claude/rules/` directory** | Path-specific rules with `paths:` frontmatter. Lazy loading — only loads when Claude works on matching files. |
| Early 2025 | **Keybindings** | `~/.claude/keybindings.json`. JSON Schema available. Chord support. Vim mode. 20+ contexts and 40+ bindable actions. |
| Early 2025 | **`CLAUDE.local.md`** | Project-local personal companion file. Gitignored. Appended after CLAUDE.md. |
| Early 2025 | **Extended thinking** | `alwaysThinkingEnabled` setting. `effortLevel` (low/medium/high/max). `MAX_THINKING_TOKENS` env var. |
| Early 2025 | **Auto mode** | `autoMode` object in settings.json (user/local only). `environment`, `allow`, `soft_deny` arrays. `disableAutoMode` setting. |

---
## 2024

| Approx. Date | Feature | Config Impact |
|--------------|---------|---------------|
| Late 2024 | **Subagents (v1)** | `.claude/agents/<name>.md`. `~/.claude/agents/`. Frontmatter: `model`, `tools`, `disallowedTools`, `permissionMode`, `color`, `maxTurns`. |
| Late 2024 | **Skills system (v1)** | `~/.claude/skills/<name>/SKILL.md`. `.claude/skills/<name>/SKILL.md`. Legacy `.claude/commands/<name>.md` also supported. Basic frontmatter. |
| Mid 2024 | **`@import` in CLAUDE.md** | `@path/to/file` syntax for modular CLAUDE.md. Max 5 hops. HTML comment stripping. |
| Mid 2024 | **settings.json** | `.claude/settings.json` (project), `~/.claude/settings.json` (user), `.claude/settings.local.json` (local). Permissions, env, hooks. |
| Early 2024 | **CLAUDE.md** | Initial project-level instructions. User-level `~/.claude/CLAUDE.md`. Lazy loading of subdirectory files. Directory walk. |
---

`plugins/config-audit/knowledge/gap-closure-templates.md`
# Gap Closure Templates

Config-specific templates for closing feature gaps. Each template targets specific gap IDs, with effort estimate and expected utilization gain.
## CLAUDE.md Optimization

### Modular CLAUDE.md with @imports
**Closes:** t2_2 (CLAUDE.md not modular)
**Effort:** Low (15 min)
**Gain:** +5% utilization

Split large CLAUDE.md into focused modules:
1. Create `.claude/rules/` directory
2. Move topic-specific sections to individual `.md` files
3. Use `@.claude/rules/topic.md` imports in CLAUDE.md
### Path-Scoped Rules
**Closes:** t2_3 (No path-scoped rules)
**Effort:** Low (10 min)
**Gain:** +5% utilization

Add context-specific rules that only apply to matching files:

```yaml
---
paths: src/**/*.ts
---
# TypeScript Rules
Use strict TypeScript. No `any` types.
```
## Hook Automation

### Multi-Event Hook Setup
**Closes:** t1_3 (No hooks), t2_5 (Low hook diversity)
**Effort:** Medium (30 min)
**Gain:** +12% utilization

Configure hooks across 3+ events:
1. `PreToolUse` — security checks on Bash/Write
2. `Stop` — session summaries, state reminders
3. `SessionStart` — load context, check state
### Hooks in settings.json
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"hooks": {
|
||||||
|
"PreToolUse": [
|
||||||
|
{
|
||||||
|
"matcher": "Bash",
|
||||||
|
"hooks": [{"type": "command", "command": "echo ok", "timeout": 5000}]
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"Stop": [
|
||||||
|
{
|
||||||
|
"hooks": [{"type": "prompt", "prompt": "Summarize session progress."}]
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## MCP Integration
|
||||||
|
|
||||||
|
### Basic MCP Setup
|
||||||
|
**Closes:** t1_5 (No MCP), t4_1 (No project .mcp.json in git)
|
||||||
|
**Effort:** Low (15 min)
|
||||||
|
**Gain:** +10% utilization
|
||||||
|
|
||||||
|
Create `.mcp.json` at project root:
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"mcpServers": {
|
||||||
|
"memory": {
|
||||||
|
"type": "stdio",
|
||||||
|
"command": "npx",
|
||||||
|
"args": ["-y", "@modelcontextprotocol/server-memory"],
|
||||||
|
"trust": "workspace"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
Commit to git for team sharing.
|
||||||
|
|
||||||
|
## Skill & Command Development

### Custom Skills

**Closes:** t1_4 (No custom skills/commands)
**Effort:** Medium (30 min)
**Gain:** +7% utilization

Create project-specific skills in `.claude/commands/`:

```markdown
---
name: project:build
description: Build and test the project
allowed-tools: Bash, Read
model: sonnet
---
Run: `npm run build && npm test`
Report results.
```

### Advanced Skill Frontmatter

**Closes:** t3_5 (No advanced skill frontmatter), t3_7 (No dynamic skill context)
**Effort:** Low (15 min)
**Gain:** +5% utilization

Add dynamic context and fork mode:

```yaml
---
name: project:deploy
context: fork
argument-hint: "[environment]"
---
Current branch: !`git branch --show-current`
```

## Agent Architecture

### Custom Subagents

**Closes:** t2_6 (No custom subagents)
**Effort:** Medium (45 min)
**Gain:** +5% utilization

Create specialized agents in `.claude/agents/`:

```yaml
---
name: reviewer
description: |
  Code review agent for pull requests.
model: sonnet
color: blue
tools: ["Read", "Glob", "Grep"]
---
```

### Subagent Isolation

**Closes:** t3_6 (No subagent isolation)
**Effort:** Low (5 min)
**Gain:** +2% utilization

Add `isolation: worktree` to agents that modify files:

```yaml
---
isolation: worktree
---
```

## Plugin Architecture

### Custom Plugin

**Closes:** t4_2 (No custom plugin)
**Effort:** High (2-4 hours)
**Gain:** +2% utilization

Package reusable skills, agents, and hooks:

```
.claude-plugin/
├── plugin.json
├── commands/
├── agents/
└── hooks/
    └── hooks.json
```

## Settings Optimization

### Multi-Scope Settings

**Closes:** t2_1 (Settings only at one scope)
**Effort:** Low (10 min)
**Gain:** +5% utilization

Use all 3 settings scopes:

- `~/.claude/settings.json` — global defaults
- `.claude/settings.json` — project (committed)
- `.claude/settings.local.json` — personal overrides (gitignored)

### Model Configuration

**Closes:** t2_7 (No model configuration)
**Effort:** Low (5 min)
**Gain:** +5% utilization

Set model preferences in settings:

```json
{
  "model": "sonnet",
  "modelOverrides": {
    "planMode": "opus"
  }
}
```

## Impact Summary

| Template | Gaps Closed | Effort | Gain |
|----------|-------------|--------|------|
| Modular CLAUDE.md | t2_2 | Low | +5% |
| Path-Scoped Rules | t2_3 | Low | +5% |
| Multi-Event Hooks | t1_3, t2_5 | Medium | +12% |
| MCP Setup | t1_5, t4_1 | Low | +10% |
| Custom Skills | t1_4 | Medium | +7% |
| Advanced Frontmatter | t3_5, t3_7 | Low | +5% |
| Custom Subagents | t2_6 | Medium | +5% |
| Subagent Isolation | t3_6 | Low | +2% |
| Custom Plugin | t4_2 | High | +2% |
| Multi-Scope Settings | t2_1 | Low | +5% |
| Model Configuration | t2_7 | Low | +5% |
117
plugins/config-audit/knowledge/hook-events-reference.md
Normal file
@ -0,0 +1,117 @@
# Hook Events Reference

> All 26 hook events as of April 2026. Source: code.claude.com/docs/en/hooks.md

---

## Event Table

| Event | Trigger | Blocking? | Matcher Support | Common Use Cases |
|-------|---------|-----------|-----------------|------------------|
| `SessionStart` | Session begins or resumes | No | `startup`, `resume`, `clear`, `compact` | Inject git branch/env into context; show session state; load external context |
| `InstructionsLoaded` | CLAUDE.md / .claude/rules files are loaded | No | `session_start`, `nested_traversal`, `path_glob_match`, `include`, `compact` | Debug which instruction files loaded; log instruction sources; validate rule sets |
| `UserPromptSubmit` | User submits a prompt | Yes | No matcher | Validate prompt length; inject context; block disallowed prompt patterns; add mandatory context |
| `PreToolUse` | Before any tool executes | Yes | Tool name (e.g., `Bash`, `Write`, `mcp__.*`) | Security validation; confirm destructive ops; log tool calls; rate limiting |
| `PermissionRequest` | Permission dialog appears | Yes | Tool name | Auto-approve known-safe patterns; add approval context; integrate with approval workflows |
| `PermissionDenied` | Auto mode denies a tool call | No (info only) | Tool name | Log denied operations; alert on unexpected denials; track permission patterns |
| `PostToolUse` | Tool completes successfully | No | Tool name | Auto-format after Write/Edit; run linting; update docs; log completions |
| `PostToolUseFailure` | Tool ends in error | No | Tool name | Log failures; send alerts; trigger retry logic; update error tracking |
| `SubagentStart` | Subagent is spawned | No | Agent type (name) | Log agent invocations; inject agent-specific context; record spawn times |
| `SubagentStop` | Subagent finishes | Yes | Agent type (name) | Quality gates (exit 2 to reject); validate agent output; run post-agent checks |
| `TaskCreated` | A task is created in the task list | Yes | No matcher | Validate task format; enforce naming conventions; block disallowed task types |
| `TaskCompleted` | A task is marked complete | Yes | No matcher | Verify completion criteria; run acceptance checks; require sign-off |
| `Stop` | Claude finishes a response turn | Yes | No matcher | Session summaries; commit prompts; send desktop notifications; log turn metadata |
| `StopFailure` | Turn ends in an API error | No | Error type | Alert on API errors; retry logic; log error context |
| `TeammateIdle` | An agent team member has no tasks | Yes | No matcher | Assign next task (exit 2 to keep working); log team status; rebalance work |
| `Notification` | A notification is sent (permission prompt, idle, auth) | No | `permission_prompt`, `idle_prompt`, `auth_success`, `elicitation_dialog` | Desktop notifications; Slack/webhook alerts; mobile push; audio cues |
| `ConfigChange` | A config file changes on disk | Yes | `user_settings`, `project_settings`, `local_settings`, `policy_settings`, `skills` | Validate config changes; block invalid edits; reload dependent processes |
| `CwdChanged` | Working directory changes | No | No matcher | Inject new directory context; update env vars via `$CLAUDE_ENV_FILE`; log navigation |
| `FileChanged` | A watched file changes | No | Filename pattern | Auto-reload when config changes; trigger builds on source change; sync state |
| `WorktreeCreate` | A git worktree is being created | Yes (path return) | No matcher | Custom worktree path via stdout; non-git VCS support; worktree naming conventions |
| `WorktreeRemove` | A git worktree is removed | No | No matcher | Cleanup resources; log worktree lifecycle; update team state |
| `PreCompact` | Before context compaction | No | `manual`, `auto` | Save current state; checkpoint important context; log pre-compact state |
| `PostCompact` | After context compaction | No | `manual`, `auto` | Reinject critical context; validate compaction; log post-compact state |
| `Elicitation` | An MCP server requests user input | Yes | MCP server name | Control which servers can request input; log elicitations; pre-fill responses |
| `ElicitationResult` | User responds to MCP elicitation | Yes | MCP server name | Validate responses; log user input; transform before sending to MCP |
| `SessionEnd` | Session terminates | No | `clear`, `resume`, `logout`, `prompt_input_exit`, `other` | Final session summary; save state; cleanup temp files; send end-of-session report |

---

## Hook Handler Types

| Type | Description | Use When |
|------|-------------|----------|
| `command` | Shell command (bash/powershell) | Fast scripts, file checks, security validation |
| `http` | HTTP POST to endpoint | Remote logging, webhooks, approval systems |
| `prompt` | LLM evaluation (yes/no decision) | Semantic validation that needs language understanding |
| `agent` | Full subagent with tools | Complex validation requiring file reads or multi-step logic |

---

## Handler Configuration Fields

| Field | Type | Description |
|-------|------|-------------|
| `type` | string | `command`, `http`, `prompt`, `agent` |
| `command` | string | Shell command (type: command only) |
| `url` | string | HTTP endpoint (type: http only) |
| `prompt` | string | LLM prompt (type: prompt only) |
| `if` | string | Conditional expression — only fires when true (e.g., `Bash(rm *)`) |
| `timeout` | number | Milliseconds before hook is killed (default: varies) |
| `statusMessage` | string | Message shown in UI while hook runs |
| `async` | bool | `true` = fire and forget, don't wait for result |
| `shell` | string | `"bash"` or `"powershell"` |

---

## Exit Code Semantics

| Exit Code | Blocking Event | Non-Blocking Event |
|-----------|---------------|---------------------|
| `0` | Proceed; JSON on stdout is parsed | Success; JSON on stdout parsed |
| `2` | **Block** — stderr shown to Claude as error | Non-blocking; treated as informational |
| other | Non-blocking; stderr in verbose log only | Non-blocking; stderr in verbose log only |
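A minimal sketch of a blocking `command` hook that applies these exit-code semantics in Node: read the event payload from stdin, exit 0 to proceed, or exit 2 with a reason on stderr to block. The payload shape (`tool_input.command`) and the `rm -rf` policy are illustrative assumptions, not part of the reference above.

```javascript
// pretooluse-guard.mjs: sketch of a blocking PreToolUse command hook.
// Exit 0 = proceed; exit 2 = block (stderr is shown to Claude as the reason).
import { readFileSync } from 'node:fs';

// Pure policy function, kept separate from I/O so it is easy to test.
// NOTE: the payload shape ({ tool_input: { command } }) is an assumption.
function decide(input) {
  const cmd = input?.tool_input?.command ?? '';
  if (/\brm\s+-rf\b/.test(cmd)) {
    return { exitCode: 2, stderr: 'Blocked by policy: rm -rf is not allowed.' };
  }
  return { exitCode: 0, stderr: '' };
}

// Entry point: the event payload arrives as JSON on stdin.
if (process.argv[1]?.endsWith('pretooluse-guard.mjs')) {
  const payload = JSON.parse(readFileSync(0, 'utf8'));
  const { exitCode, stderr } = decide(payload);
  if (stderr) process.stderr.write(stderr + '\n');
  process.exit(exitCode);
}
```

Wired up as a `command` handler under a `PreToolUse` entry with a `Bash` matcher, this follows the handler table above.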
---

## Blocking Event Output Fields

**PreToolUse** (exit 0):
- `permissionDecision`: `"allow"` / `"deny"` / `"ask"` / `"defer"`
- `updatedInput`: modified tool input
- `additionalContext`: string appended to Claude's context

**PermissionRequest** (exit 0):
- `decision.behavior`: `"allow"` / `"deny"`
- `updatedInput`: modified input
- `updatedPermissions`: modified permission set

**WorktreeCreate** (exit 0):
- stdout: path string OR `hookSpecificOutput.worktreePath`

**SessionStart** (exit 0):
- `additionalContext`: string injected into context
- Or: write env vars to `$CLAUDE_ENV_FILE`

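As a sketch of the SessionStart contract above, a hook can print an exit-0 JSON object carrying `additionalContext` on stdout. The git lookup is illustrative; only the `additionalContext` field name comes from the reference.

```javascript
// sessionstart-context.mjs: sketch of a SessionStart hook injecting context.
import { execSync } from 'node:child_process';

// Pure helper: build the exit-0 JSON payload for a given branch name.
function buildOutput(branch) {
  return { additionalContext: `Current git branch: ${branch}` };
}

if (process.argv[1]?.endsWith('sessionstart-context.mjs')) {
  let branch = 'unknown';
  try {
    branch = execSync('git branch --show-current', { encoding: 'utf8' }).trim();
  } catch { /* not a git repo; keep the fallback */ }
  process.stdout.write(JSON.stringify(buildOutput(branch)));
}
```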
---

## Environment Variables Available in Hooks

| Variable | Available In | Description |
|----------|-------------|-------------|
| `$CLAUDE_PROJECT_DIR` | All hooks | Absolute path to project root |
| `${CLAUDE_PLUGIN_ROOT}` | Plugin hooks | Plugin installation directory |
| `${CLAUDE_PLUGIN_DATA}` | Plugin hooks | Plugin persistent data directory |
| `$CLAUDE_ENV_FILE` | SessionStart, CwdChanged, FileChanged | Path to write env var exports |
| `$CLAUDE_CODE_REMOTE` | All hooks | `"true"` when running in web sessions |

---

## MCP Tool Matcher Patterns

| Pattern | Matches |
|---------|---------|
| `mcp__memory__.*` | All tools from the `memory` server |
| `mcp__.*__write.*` | Any tool named `write*` from any server |
| `mcp__filesystem__read_file` | Specific tool on specific server |
| `mcp__.*` | All MCP tools from all servers |
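These patterns behave as regular expressions tested against the fully-qualified tool name. A small sketch, assuming the matcher is applied anchored to the whole name (the anchoring is this sketch's assumption):

```javascript
// Sketch: evaluate an MCP matcher pattern against a fully-qualified tool name.
// Assumes matchers are JavaScript-style regexes anchored to the whole name.
function matchesTool(matcher, toolName) {
  return new RegExp(`^(?:${matcher})$`).test(toolName);
}
```

For example, `matchesTool('mcp__memory__.*', 'mcp__memory__create_entities')` is true, while the same pattern does not match `mcp__filesystem__read_file`.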
209
plugins/config-audit/scanners/claude-md-linter.mjs
Normal file
@ -0,0 +1,209 @@
/**
 * CML Scanner — CLAUDE.md Linter
 * Validates structure, sections, length, @imports, frontmatter, and HTML comments.
 * Finding IDs: CA-CML-NNN
 */

import { readTextFile } from './lib/file-discovery.mjs';
import { finding, scannerResult, resetCounter } from './lib/output.mjs';
import { SEVERITY } from './lib/severity.mjs';
import { parseFrontmatter, extractSections, findImports } from './lib/yaml-parser.mjs';
import { lineCount, truncate } from './lib/string-utils.mjs';

const SCANNER = 'CML';
const MAX_RECOMMENDED_LINES = 200;
const MAX_ABSOLUTE_LINES = 500;

/** Recommended sections for a project CLAUDE.md */
const RECOMMENDED_SECTIONS = [
  { pattern: /project|overview|description|what/i, label: 'Project overview' },
  { pattern: /command|workflow|how to|getting started|usage/i, label: 'Commands/Workflows' },
  { pattern: /architect|structure|directory|layout/i, label: 'Architecture' },
  { pattern: /convention|pattern|rule|style/i, label: 'Conventions/Patterns' },
];

/**
 * Scan all CLAUDE.md files discovered.
 * @param {string} targetPath
 * @param {{ files: import('./lib/file-discovery.mjs').ConfigFile[] }} discovery
 * @returns {Promise<object>}
 */
export async function scan(targetPath, discovery) {
  const start = Date.now();
  const claudeFiles = discovery.files.filter(f => f.type === 'claude-md');

  if (claudeFiles.length === 0) {
    return scannerResult(SCANNER, 'ok', [
      finding({
        scanner: SCANNER,
        severity: SEVERITY.high,
        title: 'No CLAUDE.md found',
        description: 'No CLAUDE.md files were discovered. This is the primary configuration surface for Claude Code.',
        recommendation: 'Run `/init` to create a starter CLAUDE.md, or create one manually.',
        autoFixable: false,
      }),
    ], 0, Date.now() - start);
  }

  const findings = [];
  let filesScanned = 0;

  for (const file of claudeFiles) {
    const content = await readTextFile(file.absPath);
    if (!content) continue;
    filesScanned++;

    const lines = lineCount(content);
    const { frontmatter, body, bodyStartLine } = parseFrontmatter(content);
    const sections = extractSections(body);
    const imports = findImports(content);

    // --- Length checks ---
    if (lines > MAX_ABSOLUTE_LINES) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.high,
        title: 'CLAUDE.md exceeds 500 lines',
        description: `${file.relPath} has ${lines} lines. Files over 500 lines significantly reduce Claude's adherence to instructions.`,
        file: file.absPath,
        evidence: `${lines} lines`,
        recommendation: 'Split into @imports and .claude/rules/ files. Keep CLAUDE.md under 200 lines.',
        autoFixable: false,
      }));
    } else if (lines > MAX_RECOMMENDED_LINES) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.medium,
        title: 'CLAUDE.md exceeds recommended 200 lines',
        description: `${file.relPath} has ${lines} lines. Best practice is under 200 lines for optimal adherence.`,
        file: file.absPath,
        evidence: `${lines} lines`,
        recommendation: 'Consider using @imports or .claude/rules/ for detailed content.',
        autoFixable: false,
      }));
    }

    // --- Empty file ---
    if (lines < 3) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.medium,
        title: 'CLAUDE.md is nearly empty',
        description: `${file.relPath} has only ${lines} lines.`,
        file: file.absPath,
        recommendation: 'Add project overview, commands/workflows, and conventions.',
        autoFixable: false,
      }));
      continue; // Skip further checks for empty files
    }

    // --- Section checks (only for project/user scope) ---
    if (file.scope === 'project' || file.scope === 'user') {
      const sectionHeadings = sections.map(s => s.heading);
      const missingSections = [];

      for (const rec of RECOMMENDED_SECTIONS) {
        const found = sectionHeadings.some(h => rec.pattern.test(h));
        if (!found) {
          missingSections.push(rec.label);
        }
      }

      if (missingSections.length > 0) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.low,
          title: 'Missing recommended sections',
          description: `${file.relPath} is missing: ${missingSections.join(', ')}`,
          file: file.absPath,
          evidence: `Present sections: ${sectionHeadings.slice(0, 5).join(', ') || '(none)'}`,
          recommendation: `Add sections for: ${missingSections.join(', ')}`,
          autoFixable: false,
        }));
      }
    }

    // --- No headings at all ---
    if (sections.length === 0 && lines > 10) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.medium,
        title: 'CLAUDE.md has no markdown headings',
        description: `${file.relPath} has ${lines} lines but no ## headings. Structured content with headers improves Claude's ability to find and follow instructions.`,
        file: file.absPath,
        recommendation: 'Add markdown headings (##) to organize content into scannable sections.',
        autoFixable: false,
      }));
    }

    // --- @import checks ---
    for (const imp of imports) {
      // Check for @imports referencing non-existent files
      // (Full resolution is in import-resolver scanner, here we just flag obvious issues)
      if (imp.path.includes('..') && imp.path.split('..').length > 3) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.low,
          title: '@import with deep relative path',
          description: `${file.relPath}:${imp.line} imports "${truncate(imp.path, 60)}" with multiple parent traversals.`,
          file: file.absPath,
          line: imp.line,
          evidence: `@${imp.path}`,
          recommendation: 'Consider using absolute paths or moving the imported file closer.',
          autoFixable: false,
        }));
      }
    }

    // --- HTML comment info ---
    const htmlComments = (content.match(/<!--[\s\S]*?-->/g) || []).length;
    if (htmlComments > 0) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.info,
        title: 'Uses HTML comments',
        description: `${file.relPath} uses ${htmlComments} HTML comment(s). These are stripped before injection, saving tokens.`,
        file: file.absPath,
        evidence: `${htmlComments} HTML comment(s)`,
      }));
    }

    // --- Duplicate content detection (simple: repeated lines) ---
    const lineArr = content.split('\n');
    const lineCounts = new Map();
    for (const l of lineArr) {
      const trimmed = l.trim();
      if (trimmed.length > 20 && !trimmed.startsWith('#') && !trimmed.startsWith('|') && !trimmed.startsWith('-')) {
        lineCounts.set(trimmed, (lineCounts.get(trimmed) || 0) + 1);
      }
    }
    const duplicates = [...lineCounts.entries()].filter(([, count]) => count >= 3);
    if (duplicates.length > 0) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.low,
        title: 'Repeated content detected',
        description: `${file.relPath} has ${duplicates.length} line(s) repeated 3+ times.`,
        file: file.absPath,
        evidence: truncate(duplicates[0][0], 80),
        recommendation: 'Extract repeated content into a shared @import or rules file.',
        autoFixable: false,
      }));
    }

    // --- TODO/FIXME markers ---
    const todos = lineArr.filter(l => /\bTODO\b|\bFIXME\b|\bHACK\b/i.test(l));
    if (todos.length > 0) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.info,
        title: 'Contains TODO/FIXME markers',
        description: `${file.relPath} has ${todos.length} TODO/FIXME/HACK marker(s).`,
        file: file.absPath,
        evidence: truncate(todos[0].trim(), 80),
      }));
    }
  }

  return scannerResult(SCANNER, 'ok', findings, filesScanned, Date.now() - start);
}
238
plugins/config-audit/scanners/conflict-detector.mjs
Normal file
|
|
@ -0,0 +1,238 @@
|
||||||
|
/**
|
||||||
|
* CNF Scanner — Conflict Detector
|
||||||
|
* Detects conflicts between config files at different hierarchy levels:
|
||||||
|
* settings key conflicts, permission contradictions, hook duplicates.
|
||||||
|
* Finding IDs: CA-CNF-NNN
|
||||||
|
*/
|
||||||
|
|
||||||
|
import { readTextFile } from './lib/file-discovery.mjs';
|
||||||
|
import { finding, scannerResult } from './lib/output.mjs';
|
||||||
|
import { SEVERITY } from './lib/severity.mjs';
|
||||||
|
import { parseJson } from './lib/yaml-parser.mjs';
|
||||||
|
import { truncate } from './lib/string-utils.mjs';
|
||||||
|
|
||||||
|
const SCANNER = 'CNF';
|
||||||
|
|
||||||
|
// Keys checked separately or not meaningful to compare
|
||||||
|
const SKIP_KEYS = new Set(['$schema', 'hooks', 'permissions']);
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Extract the tool name prefix from a permission rule.
|
||||||
|
* e.g., "Bash(npm run *)" → "Bash", "Read(src/**)" → "Read"
|
||||||
|
* @param {string} rule
|
||||||
|
* @returns {{ tool: string, pattern: string }}
|
||||||
|
*/
|
||||||
|
function parsePermissionRule(rule) {
|
||||||
|
const match = rule.match(/^(\w+)\((.+)\)$/);
|
||||||
|
if (match) return { tool: match[1], pattern: match[2] };
|
||||||
|
return { tool: rule, pattern: '*' };
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Flatten an object's top-level keys into a simple key→value map.
|
||||||
|
* Only first level — we compare top-level settings, not nested.
|
||||||
|
* @param {object} obj
|
||||||
|
* @returns {Map<string, string>} key → JSON-stringified value
|
||||||
|
*/
|
||||||
|
function flattenTopLevel(obj) {
|
||||||
|
const map = new Map();
|
||||||
|
for (const [key, value] of Object.entries(obj)) {
|
||||||
|
if (!SKIP_KEYS.has(key)) {
|
||||||
|
map.set(key, JSON.stringify(value));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return map;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Collect hooks from a parsed settings or hooks.json object.
|
||||||
|
* @param {object} parsed
|
||||||
|
* @returns {{ event: string, matcher: string }[]}
|
||||||
|
*/
|
||||||
|
function collectHooks(parsed) {
|
||||||
|
const hooks = parsed.hooks || parsed;
|
||||||
|
if (!hooks || typeof hooks !== 'object' || Array.isArray(hooks)) return [];
|
||||||
|
|
||||||
|
const result = [];
|
||||||
|
for (const [event, handlers] of Object.entries(hooks)) {
|
||||||
|
if (!Array.isArray(handlers)) continue;
|
||||||
|
for (const handler of handlers) {
|
||||||
|
const matcher = typeof handler.matcher === 'string' ? handler.matcher : '*';
|
||||||
|
result.push({ event, matcher });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return result;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Scan for conflicts across configuration scopes.
|
||||||
|
* @param {string} targetPath
|
||||||
|
* @param {{ files: import('./lib/file-discovery.mjs').ConfigFile[] }} discovery
|
||||||
|
* @returns {Promise<object>}
|
||||||
|
*/
|
||||||
|
export async function scan(targetPath, discovery) {
|
||||||
|
const start = Date.now();
|
||||||
|
const findings = [];
|
||||||
|
|
||||||
|
// Collect settings files
|
||||||
|
const settingsFiles = discovery.files.filter(f => f.type === 'settings-json');
|
||||||
|
// Collect hooks files
|
||||||
|
const hooksFiles = discovery.files.filter(f => f.type === 'hooks-json');
|
||||||
|
|
||||||
|
const totalFiles = settingsFiles.length + hooksFiles.length;
|
||||||
|
|
||||||
|
// Need at least 2 files to detect conflicts
|
||||||
|
if (settingsFiles.length < 2 && (settingsFiles.length + hooksFiles.length) < 2) {
|
||||||
|
return scannerResult(SCANNER, 'skipped', [], 0, Date.now() - start);
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- Settings key conflicts ---
|
||||||
|
const settingsByScope = []; // [{ scope, file, keys: Map<key, jsonValue> }]
|
||||||
|
|
||||||
|
for (const file of settingsFiles) {
|
||||||
|
const content = await readTextFile(file.absPath);
|
||||||
|
if (!content) continue;
|
||||||
|
const parsed = parseJson(content);
|
||||||
|
if (!parsed) continue;
|
||||||
|
settingsByScope.push({
|
||||||
|
scope: file.scope,
|
||||||
|
file: file.relPath,
|
||||||
|
absPath: file.absPath,
|
||||||
|
keys: flattenTopLevel(parsed),
|
||||||
|
raw: parsed,
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
// Compare keys across scopes
|
||||||
|
if (settingsByScope.length >= 2) {
|
||||||
|
const allKeys = new Set();
|
||||||
|
for (const s of settingsByScope) {
|
||||||
|
for (const key of s.keys.keys()) allKeys.add(key);
|
||||||
|
}
|
||||||
|
|
||||||
|
for (const key of allKeys) {
|
||||||
|
const scopesWithKey = settingsByScope.filter(s => s.keys.has(key));
|
||||||
|
if (scopesWithKey.length < 2) continue;
|
||||||
|
|
||||||
|
// Check if values differ
|
||||||
|
const values = new Set(scopesWithKey.map(s => s.keys.get(key)));
|
||||||
|
if (values.size > 1) {
|
||||||
|
const details = scopesWithKey
|
||||||
|
.map(s => `${s.scope} (${s.file}): ${truncate(s.keys.get(key), 40)}`)
|
||||||
|
.join('; ');
|
||||||
|
|
||||||
|
findings.push(finding({
|
||||||
|
scanner: SCANNER,
|
||||||
|
severity: SEVERITY.medium,
|
||||||
|
title: `Settings key conflict: "${key}"`,
|
||||||
|
description: `Key "${key}" has different values across scopes. ${details}`,
|
||||||
|
file: scopesWithKey[0].absPath,
|
||||||
|
evidence: details,
|
||||||
|
recommendation: `Verify the "${key}" value is intentionally different across scopes. The most specific scope wins (local > project > user).`,
|
||||||
|
}));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- Permission conflicts ---
|
||||||
|
for (let i = 0; i < settingsByScope.length; i++) {
|
||||||
|
for (let j = i + 1; j < settingsByScope.length; j++) {
|
||||||
|
const a = settingsByScope[i];
|
||||||
|
const b = settingsByScope[j];
|
||||||
|
|
||||||
|
const aPerms = a.raw.permissions || {};
|
||||||
|
const bPerms = b.raw.permissions || {};
|
||||||
|
|
||||||
|
const aAllow = Array.isArray(aPerms.allow) ? aPerms.allow : [];
|
||||||
|
const aDeny = Array.isArray(aPerms.deny) ? aPerms.deny : [];
|
||||||
|
const bAllow = Array.isArray(bPerms.allow) ? bPerms.allow : [];
|
||||||
|
const bDeny = Array.isArray(bPerms.deny) ? bPerms.deny : [];
|
||||||
|
|
||||||
|
// Check: allow in A, deny in B (and vice versa)
|
||||||
|
      for (const allowRule of aAllow) {
        const { tool: aTool, pattern: aPattern } = parsePermissionRule(allowRule);
        for (const denyRule of bDeny) {
          const { tool: dTool, pattern: dPattern } = parsePermissionRule(denyRule);
          if (aTool === dTool && (aPattern === dPattern || aPattern === '*' || dPattern === '*')) {
            findings.push(finding({
              scanner: SCANNER,
              severity: SEVERITY.high,
              title: 'Permission allow/deny conflict',
              description: `"${allowRule}" is allowed in ${a.scope} (${a.file}) but denied in ${b.scope} (${b.file}).`,
              file: a.absPath,
              evidence: `allow: "${allowRule}" (${a.scope}) vs deny: "${denyRule}" (${b.scope})`,
              recommendation: 'Resolve the conflict. Deny always wins, but the conflicting allow rule is misleading.',
            }));
          }
        }
      }

      // Reverse: allow in B, deny in A
      for (const allowRule of bAllow) {
        const { tool: bTool, pattern: bPattern } = parsePermissionRule(allowRule);
        for (const denyRule of aDeny) {
          const { tool: dTool, pattern: dPattern } = parsePermissionRule(denyRule);
          if (bTool === dTool && (bPattern === dPattern || bPattern === '*' || dPattern === '*')) {
            findings.push(finding({
              scanner: SCANNER,
              severity: SEVERITY.high,
              title: 'Permission allow/deny conflict',
              description: `"${allowRule}" is allowed in ${b.scope} (${b.file}) but denied in ${a.scope} (${a.file}).`,
              file: b.absPath,
              evidence: `allow: "${allowRule}" (${b.scope}) vs deny: "${denyRule}" (${a.scope})`,
              recommendation: 'Resolve the conflict. Deny always wins, but the conflicting allow rule is misleading.',
            }));
          }
        }
      }
    }
  }

  // --- Hook duplicates (across settings + hooks.json files) ---
  const hookSources = []; // [{ event, matcher, source }]

  for (const s of settingsByScope) {
    if (s.raw.hooks) {
      for (const h of collectHooks(s.raw)) {
        hookSources.push({ ...h, source: `${s.scope}:${s.file}` });
      }
    }
  }

  for (const file of hooksFiles) {
    const content = await readTextFile(file.absPath);
    if (!content) continue;
    const parsed = parseJson(content);
    if (!parsed) continue;
    const hookData = parsed.hooks || parsed;
    for (const h of collectHooks(hookData)) {
      hookSources.push({ ...h, source: `hooks:${file.relPath}` });
    }
  }

  // Group by event:matcher
  const hookGroups = new Map();
  for (const h of hookSources) {
    const key = `${h.event}:${h.matcher}`;
    if (!hookGroups.has(key)) hookGroups.set(key, []);
    hookGroups.get(key).push(h.source);
  }

  for (const [key, sources] of hookGroups) {
    // Only flag duplicates from DIFFERENT sources
    const uniqueSources = [...new Set(sources)];
    if (uniqueSources.length >= 2) {
      const [event, matcher] = key.split(':');
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.low,
        title: 'Duplicate hook definition',
        description: `Hook "${event}" with matcher "${matcher}" is defined in ${uniqueSources.length} sources.`,
        evidence: uniqueSources.join(', '),
        recommendation: 'Consolidate hook definitions to avoid unexpected execution order.',
      }));
    }
  }

  return scannerResult(SCANNER, 'ok', findings, totalFiles, Date.now() - start);
}
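The conflict detection above keys entirely on the `{ tool, pattern }` pairs returned by `parsePermissionRule`. A minimal sketch of such a parser, assuming the `Tool(pattern)` rule shape used in Claude Code permission lists (the real helper lives in the scanner's lib and may differ):

```javascript
// Hypothetical sketch of parsePermissionRule, assuming rules use the
// "Tool(pattern)" shape seen in permission allow/deny lists.
// A bare tool name ("WebFetch") is treated as that tool with pattern "*".
function parsePermissionRule(rule) {
  const m = /^([A-Za-z]+)\((.*)\)$/.exec(rule.trim());
  if (m) return { tool: m[1], pattern: m[2] };
  return { tool: rule.trim(), pattern: '*' };
}

// With the conflict condition from the scanner above, identical or
// wildcard patterns on the same tool collide:
const a = parsePermissionRule('Bash(rm *)');
const d = parsePermissionRule('Bash(rm *)');
const conflict = a.tool === d.tool &&
  (a.pattern === d.pattern || a.pattern === '*' || d.pattern === '*');
```

Under this sketch, `Bash(rm *)` allowed in one scope and denied in another is flagged, as is any same-tool pair where either side is a bare wildcard.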
130 plugins/config-audit/scanners/drift-cli.mjs Normal file
@@ -0,0 +1,130 @@
#!/usr/bin/env node

/**
 * Config-Audit Drift CLI
 * Compare current configuration against a saved baseline.
 * Usage:
 *   node drift-cli.mjs <path> --save [--name my-baseline]
 *   node drift-cli.mjs <path> [--baseline my-baseline] [--json]
 *   node drift-cli.mjs --list
 * Zero external dependencies.
 */

import { resolve } from 'node:path';
import { runAllScanners } from './scan-orchestrator.mjs';
import { diffEnvelopes, formatDiffReport } from './lib/diff-engine.mjs';
import { saveBaseline, loadBaseline, listBaselines } from './lib/baseline.mjs';

async function main() {
  const args = process.argv.slice(2);
  let targetPath = '.';
  let baselineName = 'default';
  let save = false;
  let list = false;
  let jsonMode = false;
  let includeGlobal = false;

  for (let i = 0; i < args.length; i++) {
    if (args[i] === '--save') {
      save = true;
    } else if (args[i] === '--name' && args[i + 1]) {
      baselineName = args[++i];
    } else if (args[i] === '--baseline' && args[i + 1]) {
      baselineName = args[++i];
    } else if (args[i] === '--list') {
      list = true;
    } else if (args[i] === '--json') {
      jsonMode = true;
    } else if (args[i] === '--global') {
      includeGlobal = true;
    } else if (!args[i].startsWith('-')) {
      targetPath = args[i];
    }
  }

  // --- List mode ---
  if (list) {
    const result = await listBaselines();
    if (jsonMode) {
      process.stdout.write(JSON.stringify(result, null, 2) + '\n');
    } else {
      if (result.baselines.length === 0) {
        process.stderr.write('No baselines saved.\n');
        process.stderr.write('Save one with: node drift-cli.mjs <path> --save\n');
      } else {
        process.stderr.write('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n');
        process.stderr.write('  Saved Baselines\n');
        process.stderr.write('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n\n');
        for (const b of result.baselines) {
          process.stderr.write(`  ${b.name.padEnd(20)} ${b.findingCount} findings  ${b.savedAt}\n`);
        }
        process.stderr.write('\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n');
      }
    }
    process.exit(0);
  }

  // --- Save mode ---
  if (save) {
    if (!jsonMode) {
      process.stderr.write(`Config-Audit Drift CLI v2.1.0\n`);
      process.stderr.write(`Saving baseline "${baselineName}" for ${resolve(targetPath)}\n\n`);
    }

    const envelope = await runAllScanners(targetPath, { includeGlobal });
    const result = await saveBaseline(envelope, baselineName);

    if (jsonMode) {
      process.stdout.write(JSON.stringify({ saved: true, name: result.name, path: result.path }, null, 2) + '\n');
    } else {
      process.stderr.write(`\nBaseline "${result.name}" saved to ${result.path}\n`);
      process.stderr.write(`Findings: ${envelope.aggregate.total_findings}\n`);
    }
    process.exit(0);
  }

  // --- Drift mode (default) ---
  if (!jsonMode) {
    process.stderr.write(`Config-Audit Drift CLI v2.1.0\n`);
    process.stderr.write(`Target: ${resolve(targetPath)}\n`);
    process.stderr.write(`Baseline: ${baselineName}\n\n`);
  }

  // Load baseline
  const baseline = await loadBaseline(baselineName);
  if (!baseline) {
    if (jsonMode) {
      process.stdout.write(JSON.stringify({ error: `Baseline "${baselineName}" not found. Save one with --save.` }, null, 2) + '\n');
    } else {
      process.stderr.write(`Baseline "${baselineName}" not found.\n`);
      process.stderr.write(`Save one first: node drift-cli.mjs <path> --save\n`);
    }
    process.exit(1);
  }

  // Run current scan
  const current = await runAllScanners(targetPath, { includeGlobal });

  // Diff
  const diff = diffEnvelopes(baseline, current);

  if (jsonMode) {
    process.stdout.write(JSON.stringify(diff, null, 2) + '\n');
  } else {
    const report = formatDiffReport(diff);
    process.stderr.write('\n' + report + '\n');
  }

  // Exit code: 0=stable/improving, 1=degrading
  if (diff.summary.trend === 'degrading') process.exit(1);
  process.exit(0);
}

// Only run CLI if invoked directly
const isDirectRun = process.argv[1] && resolve(process.argv[1]) === resolve(new URL(import.meta.url).pathname);
if (isDirectRun) {
  main().catch(err => {
    process.stderr.write(`Fatal: ${err.message}\n`);
    process.exit(3);
  });
}
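The drift CLI's exit code hinges on `diff.summary.trend`, computed in `lib/diff-engine.mjs` (not shown in this commit excerpt). A hypothetical sketch of the contract it implies, under the assumption that trend is derived from baseline-vs-current finding counts:

```javascript
// Hypothetical sketch of the trend classification the drift CLI keys its
// exit code on. The real logic lives in lib/diff-engine.mjs and may weigh
// severities; this only illustrates the contract:
// 'degrading' => exit 1, anything else => exit 0.
function classifyTrend(baselineCount, currentCount) {
  if (currentCount > baselineCount) return 'degrading';
  if (currentCount < baselineCount) return 'improving';
  return 'stable';
}

// More findings than the baseline => CI-friendly non-zero exit.
const exitCode = classifyTrend(12, 15) === 'degrading' ? 1 : 0;
```

This exit-code convention makes the CLI usable as a CI gate: save a baseline once, then fail the build only when configuration quality regresses.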
410 plugins/config-audit/scanners/feature-gap-scanner.mjs Normal file
@@ -0,0 +1,410 @@
/**
 * GAP Scanner — Feature Gap Scanner
 * Compares actual configuration against complete Claude Code feature register.
 * 25 gap dimensions across 4 tiers. Always runs with includeGlobal: true.
 * Finding IDs: CA-GAP-NNN
 */

import { resolve } from 'node:path';
import { readTextFile, discoverConfigFiles } from './lib/file-discovery.mjs';
import { finding, scannerResult } from './lib/output.mjs';
import { SEVERITY } from './lib/severity.mjs';
import { findImports, parseJson, parseFrontmatter } from './lib/yaml-parser.mjs';

const SCANNER = 'GAP';

/**
 * @typedef {object} GapCheck
 * @property {string} id - Short identifier (t1_1 through t4_5)
 * @property {string} tier - t1|t2|t3|t4
 * @property {string} title - Human-readable title
 * @property {string} recommendation - What to do
 * @property {(ctx: CheckContext) => Promise<boolean>} check - Returns true if feature IS present
 */

/**
 * @typedef {object} CheckContext
 * @property {import('./lib/file-discovery.mjs').ConfigFile[]} files
 * @property {string} targetPath
 * @property {Map<string, object>} parsedSettings - scope → parsed JSON
 * @property {Map<string, string>} fileContents - absPath → content
 */

/**
 * Check if a file belongs to the target project (vs global ~/.claude/).
 * Needed because scope classification can be 'plugin' when running inside ~/.claude/plugins/.
 * @param {CheckContext} ctx
 * @param {import('./lib/file-discovery.mjs').ConfigFile} f
 * @returns {boolean}
 */
function isTargetLocal(ctx, f) {
  return f.absPath.startsWith(ctx.targetPath);
}

const TIER_SEVERITY = {
  t1: SEVERITY.medium,
  t2: SEVERITY.low,
  t3: SEVERITY.info,
  t4: SEVERITY.info,
};

/**
 * Lazily read and cache file content.
 * @param {CheckContext} ctx
 * @param {string} absPath
 * @returns {Promise<string|null>}
 */
async function getContent(ctx, absPath) {
  if (ctx.fileContents.has(absPath)) return ctx.fileContents.get(absPath);
  const content = await readTextFile(absPath);
  ctx.fileContents.set(absPath, content);
  return content;
}

/**
 * Check if any settings file has a specific key.
 * @param {CheckContext} ctx
 * @param {string} key
 * @returns {boolean}
 */
function anySettingsHas(ctx, key) {
  for (const parsed of ctx.parsedSettings.values()) {
    if (parsed && key in parsed) return true;
  }
  return false;
}

/**
 * Get a value from any settings file (first match).
 * @param {CheckContext} ctx
 * @param {string} key
 * @returns {*}
 */
function getSettingsValue(ctx, key) {
  for (const parsed of ctx.parsedSettings.values()) {
    if (parsed && key in parsed) return parsed[key];
  }
  return undefined;
}

/** @type {GapCheck[]} */
const GAP_CHECKS = [
  // --- Tier 1: Foundation ---
  {
    id: 't1_1', tier: 't1',
    title: 'No CLAUDE.md file',
    recommendation: 'Create a CLAUDE.md at the project root with project-specific instructions, commands, and architecture.',
    check: async (ctx) => ctx.files.some(f => f.type === 'claude-md' && isTargetLocal(ctx, f)),
  },
  {
    id: 't1_2', tier: 't1',
    title: 'No permissions configured',
    recommendation: 'Add permissions.allow and permissions.deny in .claude/settings.json to control tool access.',
    check: async (ctx) => {
      for (const parsed of ctx.parsedSettings.values()) {
        if (parsed?.permissions && (parsed.permissions.allow?.length > 0 || parsed.permissions.deny?.length > 0)) {
          return true;
        }
      }
      return false;
    },
  },
  {
    id: 't1_3', tier: 't1',
    title: 'No hooks configured',
    recommendation: 'Add at least one hook (e.g., PreToolUse for security, Stop for session summaries). See knowledge/hook-events-reference.md.',
    check: async (ctx) => {
      if (ctx.files.some(f => f.type === 'hooks-json')) return true;
      for (const parsed of ctx.parsedSettings.values()) {
        if (parsed?.hooks && typeof parsed.hooks === 'object' && !Array.isArray(parsed.hooks)) {
          return Object.keys(parsed.hooks).length > 0;
        }
      }
      return false;
    },
  },
  {
    id: 't1_4', tier: 't1',
    title: 'No custom skills or commands',
    recommendation: 'Create project-specific skills in .claude/skills/ or commands in .claude/commands/ to automate repetitive workflows.',
    check: async (ctx) => ctx.files.some(f => f.type === 'skill-md' || f.type === 'command-md'),
  },
  {
    id: 't1_5', tier: 't1',
    title: 'No MCP servers configured',
    recommendation: 'Add a .mcp.json at the project root to configure MCP servers for enhanced tool access.',
    check: async (ctx) => ctx.files.some(f => f.type === 'mcp-json'),
  },

  // --- Tier 2: Configuration Depth ---
  {
    id: 't2_1', tier: 't2',
    title: 'Settings only at one scope',
    recommendation: 'Use all 3 settings scopes: ~/.claude/settings.json (user), .claude/settings.json (project), .claude/settings.local.json (local/personal).',
    check: async (ctx) => {
      const localSettings = ctx.files.filter(f => f.type === 'settings-json' && isTargetLocal(ctx, f));
      const hasGlobal = ctx.files.some(f => f.type === 'settings-json' && !isTargetLocal(ctx, f));
      return (localSettings.length >= 2) || (localSettings.length >= 1 && hasGlobal);
    },
  },
  {
    id: 't2_2', tier: 't2',
    title: 'CLAUDE.md not modular',
    recommendation: 'Use @imports or .claude/rules/ to split large CLAUDE.md files into focused modules.',
    check: async (ctx) => {
      // Has rules files OR has @imports in any CLAUDE.md
      if (ctx.files.some(f => f.type === 'rule')) return true;
      for (const file of ctx.files.filter(f => f.type === 'claude-md')) {
        const content = await getContent(ctx, file.absPath);
        if (content && findImports(content).length > 0) return true;
      }
      return false;
    },
  },
  {
    id: 't2_3', tier: 't2',
    title: 'No path-scoped rules',
    recommendation: 'Create .claude/rules/*.md with paths: frontmatter to apply rules only to matching files.',
    check: async (ctx) => {
      for (const file of ctx.files.filter(f => f.type === 'rule')) {
        const content = await getContent(ctx, file.absPath);
        if (content) {
          const { frontmatter } = parseFrontmatter(content);
          if (frontmatter && (frontmatter.paths || frontmatter.globs)) return true;
        }
      }
      return false;
    },
  },
  {
    id: 't2_4', tier: 't2',
    title: 'Auto-memory explicitly disabled',
    recommendation: 'Enable auto-memory by removing autoMemoryEnabled: false from settings.',
    check: async (ctx) => {
      // Present (gap) only if explicitly disabled
      const val = getSettingsValue(ctx, 'autoMemoryEnabled');
      return val !== false;
    },
  },
  {
    id: 't2_5', tier: 't2',
    title: 'Low hook diversity',
    recommendation: 'Use hooks across 3+ events (e.g., SessionStart, PreToolUse, Stop) for comprehensive automation.',
    check: async (ctx) => {
      const events = new Set();
      for (const parsed of ctx.parsedSettings.values()) {
        if (parsed?.hooks && typeof parsed.hooks === 'object' && !Array.isArray(parsed.hooks)) {
          for (const event of Object.keys(parsed.hooks)) events.add(event);
        }
      }
      for (const file of ctx.files.filter(f => f.type === 'hooks-json')) {
        const content = await getContent(ctx, file.absPath);
        if (content) {
          const parsed = parseJson(content);
          const hookData = parsed?.hooks || parsed;
          if (hookData && typeof hookData === 'object' && !Array.isArray(hookData)) {
            for (const event of Object.keys(hookData)) events.add(event);
          }
        }
      }
      return events.size >= 3;
    },
  },
  {
    id: 't2_6', tier: 't2',
    title: 'No custom subagents',
    recommendation: 'Create custom agents in .claude/agents/ or ~/.claude/agents/ with specialized tools and model selection.',
    check: async (ctx) => ctx.files.some(f => f.type === 'agent-md'),
  },
  {
    id: 't2_7', tier: 't2',
    title: 'No model configuration',
    recommendation: 'Set model preferences in settings.json (model, modelOverrides) for cost/quality optimization.',
    check: async (ctx) => anySettingsHas(ctx, 'model') || anySettingsHas(ctx, 'modelOverrides'),
  },

  // --- Tier 3: Advanced Features ---
  {
    id: 't3_1', tier: 't3',
    title: 'No status line configured',
    recommendation: 'Configure statusLine in settings.json to show context window usage, cost, and model info.',
    check: async (ctx) => anySettingsHas(ctx, 'statusLine'),
  },
  {
    id: 't3_2', tier: 't3',
    title: 'No custom keybindings',
    recommendation: 'Create ~/.claude/keybindings.json to customize keyboard shortcuts (e.g., bind chat:newline to Shift+Enter).',
    check: async (ctx) => ctx.files.some(f => f.type === 'keybindings-json'),
  },
  {
    id: 't3_3', tier: 't3',
    title: 'Using default output style',
    recommendation: 'Try "Explanatory" or "Learning" output styles, or create custom styles in .claude/output-styles/.',
    check: async (ctx) => anySettingsHas(ctx, 'outputStyle'),
  },
  {
    id: 't3_4', tier: 't3',
    title: 'No worktree workflow',
    recommendation: 'Use --worktree for parallel feature development. Configure worktree.symlinkDirectories for node_modules.',
    check: async (ctx) => anySettingsHas(ctx, 'worktree'),
  },
  {
    id: 't3_5', tier: 't3',
    title: 'No advanced skill frontmatter',
    recommendation: 'Use disable-model-invocation, context:fork, or argument-hint in skill frontmatter for better control.',
    check: async (ctx) => {
      for (const file of ctx.files.filter(f => f.type === 'skill-md')) {
        const content = await getContent(ctx, file.absPath);
        if (content) {
          const { frontmatter } = parseFrontmatter(content);
          if (frontmatter && (
            frontmatter.disable_model_invocation ||
            frontmatter.context === 'fork' ||
            frontmatter.argument_hint
          )) return true;
        }
      }
      return false;
    },
  },
  {
    id: 't3_6', tier: 't3',
    title: 'No subagent isolation',
    recommendation: 'Use isolation: worktree in agent frontmatter for safe parallel development.',
    check: async (ctx) => {
      for (const file of ctx.files.filter(f => f.type === 'agent-md')) {
        const content = await getContent(ctx, file.absPath);
        if (content) {
          const { frontmatter } = parseFrontmatter(content);
          if (frontmatter && frontmatter.isolation === 'worktree') return true;
        }
      }
      return false;
    },
  },
  {
    id: 't3_7', tier: 't3',
    title: 'No dynamic skill context',
    recommendation: 'Use !`command` syntax in skills to inject dynamic context (e.g., !`git branch --show-current`).',
    check: async (ctx) => {
      for (const file of ctx.files.filter(f => f.type === 'skill-md' || f.type === 'command-md')) {
        const content = await getContent(ctx, file.absPath);
        if (content && /!`[^`]+`/.test(content)) return true;
      }
      return false;
    },
  },
  {
    id: 't3_8', tier: 't3',
    title: 'No autoMode classifier',
    recommendation: 'Configure autoMode in user/local settings with environment context and allow/deny rules.',
    check: async (ctx) => anySettingsHas(ctx, 'autoMode'),
  },

  // --- Tier 4: Team/Enterprise ---
  {
    id: 't4_1', tier: 't4',
    title: 'No project .mcp.json in git',
    recommendation: 'Add .mcp.json to git so the team shares MCP server configuration.',
    check: async (ctx) => ctx.files.some(f => f.type === 'mcp-json' && isTargetLocal(ctx, f)),
  },
  {
    id: 't4_2', tier: 't4',
    title: 'No custom plugin',
    recommendation: 'Package reusable skills, agents, and hooks as a Claude Code plugin with .claude-plugin/plugin.json.',
    check: async (ctx) => ctx.files.some(f => f.type === 'plugin-json'),
  },
  {
    id: 't4_3', tier: 't4',
    title: 'Agent teams not enabled',
    recommendation: 'Enable agent teams with CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 for parallel multi-agent workflows.',
    check: async (ctx) => {
      for (const parsed of ctx.parsedSettings.values()) {
        const env = parsed?.env;
        if (env && env.CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS === '1') return true;
      }
      return !!process.env.CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS;
    },
  },
  {
    id: 't4_4', tier: 't4',
    title: 'No managed settings',
    recommendation: 'Use managed-settings.json for organization-wide policy enforcement.',
    check: async (ctx) => ctx.files.some(f => f.scope === 'managed'),
  },
  {
    id: 't4_5', tier: 't4',
    title: 'No LSP plugins',
    recommendation: 'Add .lsp.json for real-time code intelligence from language servers.',
    check: async (ctx) => ctx.files.some(f => f.relPath.endsWith('.lsp.json')),
  },
];

/**
 * Scan for feature gaps against Claude Code feature register.
 * @param {string} targetPath
 * @param {{ files: import('./lib/file-discovery.mjs').ConfigFile[] }} sharedDiscovery - Used when provided with files; otherwise runs own discovery with includeGlobal
 * @returns {Promise<object>}
 */
export async function scan(targetPath, sharedDiscovery) {
  const start = Date.now();
  const findings = [];

  // Use shared discovery if it has files (e.g. from full-machine mode), otherwise run own
  const discovery = (sharedDiscovery && sharedDiscovery.files && sharedDiscovery.files.length > 0)
    ? sharedDiscovery
    : await discoverConfigFiles(resolve(targetPath), { includeGlobal: true });

  // Parse all settings files upfront
  const parsedSettings = new Map();
  for (const file of discovery.files.filter(f => f.type === 'settings-json')) {
    const content = await readTextFile(file.absPath);
    if (content) {
      const parsed = parseJson(content);
      parsedSettings.set(`${file.scope}:${file.relPath}`, parsed);
    }
  }

  const ctx = {
    files: discovery.files,
    targetPath: resolve(targetPath),
    parsedSettings,
    fileContents: new Map(),
  };

  for (const gap of GAP_CHECKS) {
    const present = await gap.check(ctx);
    if (!present) {
      findings.push(finding({
        scanner: SCANNER,
        severity: TIER_SEVERITY[gap.tier],
        title: gap.title,
        description: `Feature gap: ${gap.title}. ${gap.recommendation}`,
        recommendation: gap.recommendation,
        category: gap.tier,
      }));
    }
  }

  const filesScanned = discovery.files.length;
  return scannerResult(SCANNER, 'ok', findings, filesScanned, Date.now() - start);
}

/**
 * Group GAP findings into impact categories for opportunity-based display.
 * @param {object[]} findings - GAP scanner findings (each has .category = t1|t2|t3|t4)
 * @returns {{ highImpact: object[], mediumImpact: object[], explore: object[] }}
 */
export function opportunitySummary(findings) {
  const highImpact = [];
  const mediumImpact = [];
  const explore = [];

  for (const f of findings) {
    if (f.category === 't1') highImpact.push(f);
    else if (f.category === 't2') mediumImpact.push(f);
    else explore.push(f);
  }

  return { highImpact, mediumImpact, explore };
}
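The tier-to-impact grouping in `opportunitySummary` can be exercised standalone. This example inlines a copy of the function exactly as defined in `feature-gap-scanner.mjs` so it runs without the module's imports:

```javascript
// Inline copy of opportunitySummary from feature-gap-scanner.mjs,
// reproduced verbatim so this example is self-contained.
function opportunitySummary(findings) {
  const highImpact = [];
  const mediumImpact = [];
  const explore = [];
  for (const f of findings) {
    if (f.category === 't1') highImpact.push(f);   // Tier 1 => high impact
    else if (f.category === 't2') mediumImpact.push(f); // Tier 2 => medium
    else explore.push(f);                          // Tiers 3 and 4 => explore
  }
  return { highImpact, mediumImpact, explore };
}

// Findings shaped like the GAP scanner's output (title/category fields).
const groups = opportunitySummary([
  { category: 't1', title: 'No CLAUDE.md file' },
  { category: 't2', title: 'No custom subagents' },
  { category: 't4', title: 'No LSP plugins' },
]);
```

Note that both Tier 3 and Tier 4 findings fall into the `explore` bucket, matching their shared `info` severity in `TIER_SEVERITY`.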
186 plugins/config-audit/scanners/fix-cli.mjs Normal file
@@ -0,0 +1,186 @@
#!/usr/bin/env node
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Config-Audit Fix CLI
|
||||||
|
* Standalone entry point for running fixes without the command.
|
||||||
|
* Usage: node fix-cli.mjs <path> [--apply] [--global] [--json]
|
||||||
|
* Dry-run by default — must pass --apply to write changes.
|
||||||
|
* Zero external dependencies.
|
||||||
|
*/
|
||||||
|
|
||||||
|
import { resolve } from 'node:path';
|
||||||
|
import { runAllScanners } from './scan-orchestrator.mjs';
|
||||||
|
import { planFixes, applyFixes, verifyFixes } from './fix-engine.mjs';
|
||||||
|
import { createBackup } from './lib/backup.mjs';
|
||||||
|
|
||||||
|
async function main() {
|
||||||
|
const args = process.argv.slice(2);
|
||||||
|
let targetPath = '.';
|
||||||
|
let apply = false;
|
||||||
|
let jsonMode = false;
|
||||||
|
let includeGlobal = false;
|
||||||
|
|
||||||
|
for (let i = 0; i < args.length; i++) {
|
||||||
|
if (args[i] === '--apply') {
|
||||||
|
apply = true;
|
||||||
|
} else if (args[i] === '--json') {
|
||||||
|
jsonMode = true;
|
||||||
|
} else if (args[i] === '--global') {
|
||||||
|
includeGlobal = true;
|
||||||
|
} else if (!args[i].startsWith('-')) {
|
||||||
|
targetPath = args[i];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
const resolvedPath = resolve(targetPath);
|
||||||
|
|
||||||
|
if (!jsonMode) {
|
||||||
|
process.stderr.write(`Config-Audit Fix CLI v2.1.0\n`);
|
||||||
|
process.stderr.write(`Target: ${resolvedPath}\n`);
|
||||||
|
process.stderr.write(`Mode: ${apply ? 'APPLY' : 'DRY-RUN'}\n\n`);
|
||||||
|
process.stderr.write(`Scanning...\n`);
|
||||||
|
}
|
||||||
|
|
||||||
|
// 1. Run all scanners
|
||||||
|
const envelope = await runAllScanners(targetPath, { includeGlobal });
|
||||||
|
|
||||||
|
// 2. Plan fixes
|
||||||
|
const { fixes, skipped, manual } = planFixes(envelope);
|
||||||
|
|
||||||
|
if (!jsonMode) {
|
||||||
|
process.stderr.write(`\n`);
|
||||||
|
process.stderr.write(`━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n`);
|
||||||
|
process.stderr.write(` Config-Audit Fix Plan\n`);
|
||||||
|
process.stderr.write(`━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n\n`);
|
||||||
|
|
||||||
|
if (fixes.length > 0) {
|
||||||
|
process.stderr.write(` Auto-fixable (${fixes.length}):\n`);
|
||||||
|
for (let i = 0; i < fixes.length; i++) {
|
||||||
|
process.stderr.write(` ${i + 1}. [${fixes[i].findingId}] ${fixes[i].description}\n`);
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
process.stderr.write(` No auto-fixable issues found.\n`);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (manual.length > 0) {
|
||||||
|
process.stderr.write(`\n Manual (${manual.length}):\n`);
|
||||||
|
for (let i = 0; i < manual.length; i++) {
|
||||||
|
process.stderr.write(` ${fixes.length + i + 1}. [${manual[i].findingId}] ${manual[i].title}\n`);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (skipped.length > 0) {
|
||||||
|
process.stderr.write(`\n Skipped (${skipped.length}): could not generate fix plan\n`);
|
||||||
|
}
|
||||||
|
|
||||||
|
process.stderr.write(`\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n`);
|
||||||
|
}
|
||||||
|
|
||||||
|
// 3. Apply or dry-run
|
||||||
|
let applied = [];
|
||||||
|
let failed = [];
|
||||||
|
let verified = [];
|
||||||
|
let regressions = [];
|
||||||
|
let backupId = null;
|
||||||
|
|
||||||
|
if (fixes.length === 0) {
|
||||||
|
if (jsonMode) {
|
||||||
|
const output = { planned: [], applied: [], failed: [], verified: [], regressions: [], manual, backupId: null };
|
||||||
|
process.stdout.write(JSON.stringify(output, null, 2) + '\n');
|
||||||
|
}
|
||||||
|
process.exit(0);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (apply) {
|
||||||
|
// Create backup first
|
||||||
|
const filesToBackup = [...new Set(fixes.filter(f => f.type !== 'file-rename').map(f => f.file))];
|
||||||
|
const backup = createBackup(filesToBackup);
|
||||||
|
backupId = backup.backupId;
|
||||||
|
|
||||||
|
if (!jsonMode) {
|
||||||
|
process.stderr.write(`\n Backup created: ${backup.backupPath}\n`);
|
||||||
|
process.stderr.write(` Applying ${fixes.length} fixes...\n\n`);
|
||||||
|
}
|
||||||
|
|
||||||
|
    const result = await applyFixes(fixes, { dryRun: false, backupDir: backup.backupPath });
    applied = result.applied;
    failed = result.failed;

    if (!jsonMode) {
      process.stderr.write(` Results: ${applied.length} applied, ${failed.length} failed\n`);
      if (failed.length > 0) {
        for (const f of failed) {
          process.stderr.write(` FAILED: [${f.findingId}] ${f.error}\n`);
        }
      }
    }

    // 4. Verify
    if (applied.length > 0) {
      if (!jsonMode) {
        process.stderr.write(`\n Verifying...\n`);
      }

      const verification = await verifyFixes(envelope, applied);
      verified = verification.verified;
      regressions = verification.regressions;

      if (!jsonMode) {
        process.stderr.write(` Verified: ${verified.length}/${applied.length}\n`);
        if (regressions.length > 0) {
          process.stderr.write(` Regressions: ${regressions.join(', ')}\n`);
        }
        process.stderr.write(`\n Rollback: node scanners/rollback-cli.mjs ${backupId}\n`);
      }
    }
  } else {
    // Dry-run mode
    const result = await applyFixes(fixes, { dryRun: true });
    applied = result.applied;

    if (!jsonMode) {
      process.stderr.write(`\n Dry-run complete. Pass --apply to execute.\n`);
    }
  }

  // JSON output
  if (jsonMode) {
    const output = {
      planned: fixes.map(f => ({
        findingId: f.findingId,
        file: f.file,
        type: f.type,
        description: f.description,
      })),
      applied: applied.map(a => ({
        findingId: a.findingId,
        file: a.file,
        status: a.status,
      })),
      failed: failed.map(f => ({
        findingId: f.findingId,
        file: f.file,
        status: f.status,
        error: f.error,
      })),
      verified,
      regressions,
      manual: manual.map(m => ({
        findingId: m.findingId,
        title: m.title,
        recommendation: m.recommendation,
      })),
      backupId,
    };
    process.stdout.write(JSON.stringify(output, null, 2) + '\n');
  }
}

// Only run CLI if invoked directly
const isDirectRun = process.argv[1] && resolve(process.argv[1]) === resolve(new URL(import.meta.url).pathname);
if (isDirectRun) {
  main().catch(err => {
    process.stderr.write(`Fatal: ${err.message}\n`);
    process.exit(3);
  });
}
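The `--json` envelope emitted above has a stable shape (`planned`, `applied`, `failed`, `verified`, `regressions`, `manual`, `backupId`). A minimal sketch of consuming it downstream, e.g. in a CI gate — the sample finding IDs and `summarize` helper are hypothetical, not part of the plugin:

```javascript
// Hypothetical sample of the JSON envelope written by the fix CLI above.
const output = {
  planned: [{ findingId: 'CA-SET-001', file: 'settings.json', type: 'json-key-add', description: 'Add $schema' }],
  applied: [{ findingId: 'CA-SET-001', file: 'settings.json', status: 'applied' }],
  failed: [],
  verified: ['CA-SET-001'],
  regressions: [],
  manual: [{ findingId: 'CA-SEC-002', title: 'World-writable hook script', recommendation: 'Restrict permissions' }],
  backupId: 'backup-2026-01-01T00-00-00',
};

// Reduce the envelope to an exit decision: fail if anything failed or regressed.
function summarize(out) {
  return {
    ok: out.failed.length === 0 && out.regressions.length === 0,
    applied: out.applied.length,
    verified: out.verified.length,
    needsHuman: out.manual.length,
  };
}

const summary = summarize(output);
// summary → { ok: true, applied: 1, verified: 1, needsHuman: 1 }
```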
666 plugins/config-audit/scanners/fix-engine.mjs Normal file
@@ -0,0 +1,666 @@
/**
 * Config-Audit Fix Engine
 * Deterministic fix engine: maps scanner findings to concrete file changes.
 * Zero external dependencies.
 */

import { readFile, writeFile, rename, stat } from 'node:fs/promises';
import { parseJson, parseFrontmatter } from './lib/yaml-parser.mjs';
import { runAllScanners } from './scan-orchestrator.mjs';

/**
 * Fix type constants.
 */
const FIX_TYPES = {
  JSON_KEY_ADD: 'json-key-add',
  JSON_KEY_REMOVE: 'json-key-remove',
  JSON_KEY_TYPE_FIX: 'json-key-type-fix',
  JSON_RESTRUCTURE: 'json-restructure',
  FRONTMATTER_RENAME: 'frontmatter-rename',
  FILE_RENAME: 'file-rename',
};

/** Valid effortLevel values for nearest-match */
const VALID_EFFORT_LEVELS = ['low', 'medium', 'high', 'max'];

/**
 * Plan fixes from a scanner envelope.
 * @param {object} envelope - Full scanner envelope from scan-orchestrator
 * @returns {{ fixes: object[], skipped: object[], manual: object[] }}
 */
export function planFixes(envelope) {
  const fixes = [];
  const skipped = [];
  const manual = [];

  for (const scanner of envelope.scanners) {
    for (const finding of scanner.findings) {
      if (!finding.autoFixable) {
        manual.push({
          findingId: finding.id,
          title: finding.title,
          file: finding.file,
          recommendation: finding.recommendation,
        });
        continue;
      }

      const fixPlan = createFixPlan(finding);
      if (fixPlan) {
        fixes.push(fixPlan);
      } else {
        skipped.push(finding);
      }
    }
  }

  // Sort fixes by severity weight (critical first).
  // Note: use ?? rather than || — critical maps to 0, which is falsy.
  const severityOrder = { critical: 0, high: 1, medium: 2, low: 3, info: 4 };
  fixes.sort((a, b) => (severityOrder[a.severity] ?? 4) - (severityOrder[b.severity] ?? 4));

  return { fixes, skipped, manual };
}

/**
 * Create a fix plan for a single finding.
 * @param {object} finding
 * @returns {object|null}
 */
function createFixPlan(finding) {
  if (!finding.file) return null;

  const base = {
    findingId: finding.id,
    file: finding.file,
    severity: finding.severity,
    description: '',
    before: null,
    after: null,
    type: null,
  };

  const scanner = finding.scanner;
  const title = finding.title;

  // --- SET scanner fixes ---
  if (scanner === 'SET') {
    if (title === 'Missing $schema reference') {
      return {
        ...base,
        type: FIX_TYPES.JSON_KEY_ADD,
        description: 'Add $schema reference for IDE autocomplete',
        before: '(no $schema key)',
        after: '"$schema": "https://json.schemastore.org/claude-code-settings.json"',
        key: '$schema',
        value: 'https://json.schemastore.org/claude-code-settings.json',
      };
    }

    if (title === 'Deprecated settings key') {
      const key = extractKeyFromEvidence(finding.evidence);
      if (!key) return null;
      return {
        ...base,
        type: FIX_TYPES.JSON_KEY_REMOVE,
        description: `Remove deprecated key "${key}"`,
        before: finding.evidence,
        after: '(key removed)',
        key,
      };
    }

    if (title === 'Type mismatch in settings') {
      const key = extractKeyFromEvidence(finding.evidence);
      if (!key) return null;
      const expectedType = extractExpectedType(finding.description);
      return {
        ...base,
        type: FIX_TYPES.JSON_KEY_TYPE_FIX,
        description: `Fix type of "${key}" to ${expectedType}`,
        before: finding.evidence,
        after: `(converted to ${expectedType})`,
        key,
        expectedType,
      };
    }

    if (title === 'Invalid effortLevel value') {
      return {
        ...base,
        type: FIX_TYPES.JSON_KEY_TYPE_FIX,
        description: 'Fix effortLevel to nearest valid value',
        before: finding.evidence,
        after: '(nearest valid effortLevel)',
        key: 'effortLevel',
        expectedType: 'effortLevel',
      };
    }

    if (title === 'Hooks configured as array instead of object') {
      return {
        ...base,
        type: FIX_TYPES.JSON_RESTRUCTURE,
        description: 'Convert hooks from array to object format',
        before: '"hooks": [...]',
        after: '"hooks": { ... }',
        restructureType: 'hooks-array-to-object',
      };
    }
  }

  // --- HKV scanner fixes ---
  if (scanner === 'HKV') {
    if (title === 'Matcher must be a string, not an object') {
      return {
        ...base,
        type: FIX_TYPES.JSON_RESTRUCTURE,
        description: 'Convert matcher from object to string',
        before: finding.evidence,
        after: '"matcher": "ToolName"',
        restructureType: 'matcher-object-to-string',
        event: extractEventFromDescription(finding.description),
      };
    }

    if (title === 'Hook timeout must be a number') {
      return {
        ...base,
        type: FIX_TYPES.JSON_KEY_TYPE_FIX,
        description: 'Convert timeout to number',
        before: finding.evidence,
        after: '(parsed to number)',
        key: 'timeout',
        expectedType: 'number',
        event: extractEventFromDescription(finding.description),
      };
    }
  }

  // --- RUL scanner fixes ---
  if (scanner === 'RUL') {
    if (title === 'Rule uses deprecated "globs" field') {
      return {
        ...base,
        type: FIX_TYPES.FRONTMATTER_RENAME,
        description: 'Rename "globs" to "paths" in frontmatter',
        before: 'globs:',
        after: 'paths:',
        oldField: 'globs',
        newField: 'paths',
      };
    }

    if (title === 'Rule file is not .md') {
      const newPath = finding.file.replace(/\.[^.]+$/, '.md');
      return {
        ...base,
        type: FIX_TYPES.FILE_RENAME,
        description: 'Rename to .md extension',
        before: finding.file,
        after: newPath,
        newPath,
      };
    }
  }

  return null;
}

/**
 * Apply planned fixes to files.
 * @param {object[]} fixPlans - Array of fix plans from planFixes()
 * @param {object} opts
 * @param {boolean} [opts.dryRun=false]
 * @param {string} [opts.backupDir] - Required if not dryRun
 * @returns {Promise<{ applied: object[], failed: object[] }>}
 */
export async function applyFixes(fixPlans, opts = {}) {
  const applied = [];
  const failed = [];

  if (!opts.dryRun && !opts.backupDir) {
    throw new Error('backupDir is required when not in dryRun mode');
  }

  for (const plan of fixPlans) {
    if (opts.dryRun) {
      applied.push({
        findingId: plan.findingId,
        file: plan.file,
        status: 'dry-run',
        type: plan.type,
        description: plan.description,
      });
      continue;
    }

    try {
      await applyFix(plan);
      applied.push({
        findingId: plan.findingId,
        file: plan.file,
        status: 'applied',
        type: plan.type,
        description: plan.description,
      });
    } catch (err) {
      failed.push({
        findingId: plan.findingId,
        file: plan.file,
        status: 'failed',
        error: err.message,
        type: plan.type,
      });
    }
  }

  return { applied, failed };
}

/**
 * Apply a single fix.
 * @param {object} plan
 */
async function applyFix(plan) {
  switch (plan.type) {
    case FIX_TYPES.JSON_KEY_ADD:
      await applyJsonKeyAdd(plan);
      break;
    case FIX_TYPES.JSON_KEY_REMOVE:
      await applyJsonKeyRemove(plan);
      break;
    case FIX_TYPES.JSON_KEY_TYPE_FIX:
      await applyJsonKeyTypeFix(plan);
      break;
    case FIX_TYPES.JSON_RESTRUCTURE:
      await applyJsonRestructure(plan);
      break;
    case FIX_TYPES.FRONTMATTER_RENAME:
      await applyFrontmatterRename(plan);
      break;
    case FIX_TYPES.FILE_RENAME:
      await applyFileRename(plan);
      break;
    default:
      throw new Error(`Unknown fix type: ${plan.type}`);
  }
}

/**
 * Add a key to a JSON file (as first key for $schema).
 */
async function applyJsonKeyAdd(plan) {
  const content = await readFile(plan.file, 'utf-8');
  const parsed = parseJson(content);
  if (parsed === null) throw new Error('Invalid JSON');

  // For $schema, insert as first key
  if (plan.key === '$schema') {
    const newObj = { $schema: plan.value, ...parsed };
    await writeJsonFile(plan.file, newObj);
  } else {
    parsed[plan.key] = plan.value;
    await writeJsonFile(plan.file, parsed);
  }
}

/**
 * Remove a key from a JSON file.
 */
async function applyJsonKeyRemove(plan) {
  const content = await readFile(plan.file, 'utf-8');
  const parsed = parseJson(content);
  if (parsed === null) throw new Error('Invalid JSON');

  delete parsed[plan.key];
  await writeJsonFile(plan.file, parsed);
}

/**
 * Fix the type of a JSON key value.
 */
async function applyJsonKeyTypeFix(plan) {
  const content = await readFile(plan.file, 'utf-8');
  const parsed = parseJson(content);
  if (parsed === null) throw new Error('Invalid JSON');

  // Handle nested hook timeout fixes
  if (plan.key === 'timeout' && plan.event) {
    fixTimeoutInHooks(parsed, plan.event);
    await writeJsonFile(plan.file, parsed);
    return;
  }

  // Handle effortLevel special case
  if (plan.key === 'effortLevel' && plan.expectedType === 'effortLevel') {
    parsed.effortLevel = findNearestEffortLevel(parsed.effortLevel);
    await writeJsonFile(plan.file, parsed);
    return;
  }

  // Generic type conversion
  if (parsed[plan.key] !== undefined) {
    parsed[plan.key] = convertType(parsed[plan.key], plan.expectedType);
    await writeJsonFile(plan.file, parsed);
  }
}

/**
 * Restructure JSON (hooks array→object, matcher object→string).
 */
async function applyJsonRestructure(plan) {
  const content = await readFile(plan.file, 'utf-8');
  const parsed = parseJson(content);
  if (parsed === null) throw new Error('Invalid JSON');

  if (plan.restructureType === 'hooks-array-to-object') {
    restructureHooksArrayToObject(parsed);
    await writeJsonFile(plan.file, parsed);
    return;
  }

  if (plan.restructureType === 'matcher-object-to-string') {
    restructureMatcherObjectToString(parsed, plan.event);
    await writeJsonFile(plan.file, parsed);
    return;
  }

  throw new Error(`Unknown restructure type: ${plan.restructureType}`);
}

/**
 * Rename a frontmatter field in a markdown file.
 */
async function applyFrontmatterRename(plan) {
  const content = await readFile(plan.file, 'utf-8');

  // Replace the field name in the frontmatter section only
  const fmMatch = content.match(/^(---\r?\n)([\s\S]*?)(\r?\n---)/);
  if (!fmMatch) throw new Error('No frontmatter found');

  const before = fmMatch[2];
  const regex = new RegExp(`^(${plan.oldField})(\\s*:)`, 'gm');
  const after = before.replace(regex, `${plan.newField}$2`);

  if (before === after) throw new Error(`Field "${plan.oldField}" not found in frontmatter`);

  const newContent = fmMatch[1] + after + fmMatch[3] + content.slice(fmMatch[0].length);

  // Validate frontmatter still parses before touching the file on disk
  const { frontmatter } = parseFrontmatter(newContent);
  if (!frontmatter) throw new Error('Frontmatter parse failed after rename');

  await writeFile(plan.file, newContent, 'utf-8');
}

/**
 * Rename a file (change extension to .md).
 */
async function applyFileRename(plan) {
  try {
    await stat(plan.file);
  } catch {
    throw new Error(`Source file not found: ${plan.file}`);
  }

  // Check target doesn't already exist
  try {
    await stat(plan.newPath);
    throw new Error(`Target already exists: ${plan.newPath}`);
  } catch (err) {
    if (err.message.startsWith('Target already exists')) throw err;
    // File doesn't exist — good
  }

  await rename(plan.file, plan.newPath);
}

// --- Helper functions ---

/**
 * Write a JSON object to a file with 2-space indent.
 */
async function writeJsonFile(filePath, obj) {
  const json = JSON.stringify(obj, null, 2) + '\n';
  // Validate the JSON we're about to write
  const reparsed = parseJson(json);
  if (reparsed === null) throw new Error('Generated invalid JSON');
  await writeFile(filePath, json, 'utf-8');
}

/**
 * Convert a value to the expected type.
 */
function convertType(value, expectedType) {
  switch (expectedType) {
    case 'boolean':
      if (typeof value === 'string') {
        if (value.toLowerCase() === 'true' || value === '1') return true;
        if (value.toLowerCase() === 'false' || value === '0') return false;
      }
      if (typeof value === 'number') return value !== 0;
      return Boolean(value);
    case 'number':
      if (typeof value === 'string') {
        const num = Number(value);
        return isNaN(num) ? 10000 : num; // default 10000 for timeouts
      }
      return Number(value);
    case 'string':
      return String(value);
    default:
      return value;
  }
}

/**
 * Find the nearest valid effortLevel.
 */
function findNearestEffortLevel(value) {
  if (typeof value !== 'string') return 'medium';
  const lower = value.toLowerCase();
  // Simple distance-based matching
  let best = 'medium';
  let bestDist = Infinity;
  for (const level of VALID_EFFORT_LEVELS) {
    const dist = levenshtein(lower, level);
    if (dist < bestDist) {
      bestDist = dist;
      best = level;
    }
  }
  return best;
}

/**
 * Simple Levenshtein distance.
 */
function levenshtein(a, b) {
  const m = a.length, n = b.length;
  const dp = Array.from({ length: m + 1 }, () => new Array(n + 1).fill(0));
  for (let i = 0; i <= m; i++) dp[i][0] = i;
  for (let j = 0; j <= n; j++) dp[0][j] = j;
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,
        dp[i][j - 1] + 1,
        dp[i - 1][j - 1] + (a[i - 1] !== b[j - 1] ? 1 : 0),
      );
    }
  }
  return dp[m][n];
}

/**
 * Convert hooks array to object format.
 * Best-effort: groups items by their "event" property,
 * defaulting to "PreToolUse" when no event is set.
 */
function restructureHooksArrayToObject(parsed) {
  if (!Array.isArray(parsed.hooks)) return;

  const hooksObj = {};
  for (const item of parsed.hooks) {
    const event = item.event || 'PreToolUse';
    if (!hooksObj[event]) hooksObj[event] = [];

    // Build the handler group
    const group = {};
    if (item.matcher) group.matcher = typeof item.matcher === 'string' ? item.matcher : String(item.matcher);
    group.hooks = [];

    if (item.command) {
      group.hooks.push({
        type: item.type || 'command',
        command: item.command,
        ...(item.timeout !== undefined ? { timeout: typeof item.timeout === 'number' ? item.timeout : Number(item.timeout) || 10000 } : {}),
      });
    } else if (item.hooks && Array.isArray(item.hooks)) {
      group.hooks = item.hooks;
    }

    hooksObj[event].push(group);
  }

  parsed.hooks = hooksObj;
}

/**
 * Convert matcher from object to string in hooks config.
 */
function restructureMatcherObjectToString(parsed, event) {
  const hooks = parsed.hooks || parsed;
  if (typeof hooks !== 'object' || Array.isArray(hooks)) return;

  for (const [eventKey, handlers] of Object.entries(hooks)) {
    if (event && eventKey !== event) continue;
    if (!Array.isArray(handlers)) continue;

    for (const group of handlers) {
      if (group.matcher && typeof group.matcher === 'object') {
        // Extract tool name from object — common patterns: { tool: "Bash" }, { name: "Bash" }
        const tool = group.matcher.tool || group.matcher.name || group.matcher.type || Object.values(group.matcher)[0];
        group.matcher = typeof tool === 'string' ? tool : 'Bash';
      }
    }
  }
}

/**
 * Fix timeout type in nested hooks config.
 */
function fixTimeoutInHooks(parsed, event) {
  const hooks = parsed.hooks || parsed;
  if (typeof hooks !== 'object' || Array.isArray(hooks)) return;

  for (const [eventKey, handlers] of Object.entries(hooks)) {
    if (event && eventKey !== event) continue;
    if (!Array.isArray(handlers)) continue;

    for (const group of handlers) {
      if (!group.hooks || !Array.isArray(group.hooks)) continue;
      for (const hook of group.hooks) {
        if (hook.timeout !== undefined && typeof hook.timeout !== 'number') {
          const num = Number(hook.timeout);
          hook.timeout = isNaN(num) ? 10000 : num;
        }
      }
    }
  }
}

/**
 * Extract key name from evidence string like: 'someKey: "value"'
 */
function extractKeyFromEvidence(evidence) {
  if (!evidence) return null;
  const match = evidence.match(/^(\w+)\s*:/);
  return match ? match[1] : null;
}

/**
 * Extract expected type from description string like: 'should be boolean, got string'
 */
function extractExpectedType(description) {
  const match = description.match(/should be (\w+)/);
  return match ? match[1] : 'string';
}

/**
 * Extract event name from description like: '"PreToolUse" has a matcher...'
 */
function extractEventFromDescription(description) {
  const match = description.match(/"(\w+)"/);
  return match ? match[1] : null;
}

/**
 * Verify fixes by re-running affected scanners.
 * @param {object} originalEnvelope - Original scanner envelope
 * @param {object[]} appliedResults - Results from applyFixes()
 * @returns {Promise<{ verified: string[], regressions: string[], newFindings: object[] }>}
 */
export async function verifyFixes(originalEnvelope, appliedResults) {
  const targetPath = originalEnvelope.meta.target;
  const verified = [];
  const regressions = [];
  const newFindings = [];

  // Re-scan the target
  const newEnvelope = await runAllScanners(targetPath, { includeGlobal: false });

  // Build set of original finding IDs that were fixed
  const fixedIds = new Set(
    appliedResults.filter(r => r.status === 'applied').map(r => r.findingId),
  );

  // Build set of new finding titles for comparison
  const newFindingMap = new Map();
  for (const scanner of newEnvelope.scanners) {
    for (const f of scanner.findings) {
      newFindingMap.set(`${f.scanner}:${f.title}:${f.file}`, f);
    }
  }

  // Check that fixed findings are gone
  for (const scanner of originalEnvelope.scanners) {
    for (const f of scanner.findings) {
      if (!fixedIds.has(f.id)) continue;

      const key = `${f.scanner}:${f.title}:${f.file}`;
      // For file-rename fixes, the original file path won't exist anymore
      const fixResult = appliedResults.find(r => r.findingId === f.id);
      if (fixResult && fixResult.type === 'file-rename') {
        // Original path is gone after the rename; count as verified (best-effort)
        verified.push(f.id);
        continue;
      }

      if (newFindingMap.has(key)) {
        regressions.push(f.id);
      } else {
        verified.push(f.id);
      }
    }
  }

  // Check for any completely new findings not in original
  const originalKeys = new Set();
  for (const scanner of originalEnvelope.scanners) {
    for (const f of scanner.findings) {
      originalKeys.add(`${f.scanner}:${f.title}:${f.file}`);
    }
  }

  for (const [key, f] of newFindingMap) {
    if (!originalKeys.has(key)) {
      newFindings.push(f);
    }
  }

  return { verified, regressions, newFindings };
}

export { FIX_TYPES };
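`verifyFixes` above keys findings by `scanner:title:file` and splits fixed IDs into verified vs regressed depending on whether the key reappears in the re-scan. A self-contained sketch of that diffing step, with mock finding lists (the IDs and the `diffFindings` helper name are illustrative, not part of the module):

```javascript
// Key format mirrors verifyFixes above: scanner:title:file.
const key = f => `${f.scanner}:${f.title}:${f.file}`;

// Split fixed finding IDs into verified (gone after re-scan) vs regressions (still present).
function diffFindings(original, rescanned, fixedIds) {
  const after = new Set(rescanned.map(key));
  const verified = [];
  const regressions = [];
  for (const f of original) {
    if (!fixedIds.has(f.id)) continue;
    (after.has(key(f)) ? regressions : verified).push(f.id);
  }
  return { verified, regressions };
}

const original = [
  { id: 'CA-SET-001', scanner: 'SET', title: 'Missing $schema reference', file: 'a.json' },
  { id: 'CA-HKV-002', scanner: 'HKV', title: 'Hook timeout must be a number', file: 'b.json' },
];
const rescanned = [
  // The timeout finding survived the fix attempt; the $schema one is gone.
  { scanner: 'HKV', title: 'Hook timeout must be a number', file: 'b.json' },
];
const result = diffFindings(original, rescanned, new Set(['CA-SET-001', 'CA-HKV-002']));
// result.verified → ['CA-SET-001'], result.regressions → ['CA-HKV-002']
```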
270 plugins/config-audit/scanners/hook-validator.mjs Normal file
@@ -0,0 +1,270 @@
/**
 * HKV Scanner — Hook Validator
 * Validates hooks.json format, script existence, event validity, timeouts.
 * Finding IDs: CA-HKV-NNN
 */

import { readTextFile, discoverConfigFiles } from './lib/file-discovery.mjs';
import { finding, scannerResult } from './lib/output.mjs';
import { SEVERITY } from './lib/severity.mjs';
import { parseJson } from './lib/yaml-parser.mjs';
import { stat } from 'node:fs/promises';
import { resolve, dirname } from 'node:path';

const SCANNER = 'HKV';

/** All valid hook events (as of April 2026) */
const VALID_EVENTS = new Set([
  'SessionStart', 'InstructionsLoaded', 'UserPromptSubmit',
  'PreToolUse', 'PermissionRequest', 'PermissionDenied',
  'PostToolUse', 'PostToolUseFailure',
  'SubagentStart', 'SubagentStop',
  'TaskCreated', 'TaskCompleted',
  'Stop', 'StopFailure',
  'TeammateIdle', 'Notification',
  'ConfigChange', 'CwdChanged', 'FileChanged',
  'WorktreeCreate', 'WorktreeRemove',
  'PreCompact', 'PostCompact',
  'Elicitation', 'ElicitationResult',
  'SessionEnd',
]);

/** Valid hook handler types */
const VALID_TYPES = new Set(['command', 'http', 'prompt', 'agent']);

/** Reasonable timeout range */
const MIN_TIMEOUT = 1000;
const MAX_TIMEOUT = 300000; // 5 minutes

/**
 * Scan all hooks.json files and hook configs in settings.json.
 * @param {string} targetPath
 * @param {{ files: import('./lib/file-discovery.mjs').ConfigFile[] }} discovery
 * @returns {Promise<object>}
 */
export async function scan(targetPath, discovery) {
  const start = Date.now();
  const hooksFiles = discovery.files.filter(f => f.type === 'hooks-json');
  const settingsFiles = discovery.files.filter(f => f.type === 'settings-json');
  const findings = [];
  let filesScanned = 0;

  // Scan standalone hooks.json files
  for (const file of hooksFiles) {
    const content = await readTextFile(file.absPath);
    if (!content) continue;
    filesScanned++;

    const parsed = parseJson(content);
    if (parsed === null) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.critical,
        title: 'Invalid JSON in hooks.json',
        description: `${file.relPath} contains invalid JSON. All hooks in this file will be ignored.`,
        file: file.absPath,
        recommendation: 'Fix JSON syntax errors.',
        autoFixable: false,
      }));
      continue;
    }

    const hooksConfig = parsed.hooks || parsed;
    await validateHooksObject(hooksConfig, file, findings, dirname(file.absPath));
  }

  // Scan hooks in settings.json files
  for (const file of settingsFiles) {
    const content = await readTextFile(file.absPath);
    if (!content) continue;

    const parsed = parseJson(content);
    if (!parsed || !parsed.hooks) continue;
    filesScanned++;

    if (Array.isArray(parsed.hooks)) {
      // Already reported by settings-validator, skip here
      continue;
    }

    await validateHooksObject(parsed.hooks, file, findings, dirname(file.absPath));
  }

  // No hooks at all is noted but not an error — return an ok result either way.
  // (An earlier async predicate inside Array.prototype.some was always truthy
  // and both branches returned the same result, so the check was dead code.)
  return scannerResult(SCANNER, 'ok', findings, filesScanned, Date.now() - start);
}

/**
 * Validate a hooks object (event key → handler array).
 */
async function validateHooksObject(hooks, file, findings, baseDir) {
  if (typeof hooks !== 'object' || Array.isArray(hooks)) {
    findings.push(finding({
      scanner: SCANNER,
      severity: SEVERITY.critical,
      title: 'Hooks must be an object with event keys',
      description: `${file.relPath}: hooks is ${Array.isArray(hooks) ? 'an array' : typeof hooks}. Expected object with event names as keys.`,
      file: file.absPath,
      recommendation: 'Use format: { "PreToolUse": [...], "Stop": [...] }',
      autoFixable: false,
    }));
    return;
  }

  for (const [event, handlers] of Object.entries(hooks)) {
    // Validate event name
    if (!VALID_EVENTS.has(event)) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.high,
        title: 'Unknown hook event',
        description: `${file.relPath}: "${event}" is not a valid hook event. This hook will never fire.`,
        file: file.absPath,
        evidence: event,
        recommendation: `Valid events: ${[...VALID_EVENTS].slice(0, 8).join(', ')}... (26 total)`,
        autoFixable: false,
      }));
      continue;
    }

    if (!Array.isArray(handlers)) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.high,
        title: 'Hook handlers must be an array',
        description: `${file.relPath}: handlers for "${event}" is not an array.`,
        file: file.absPath,
        evidence: `"${event}": ${typeof handlers}`,
        recommendation: `Use format: "${event}": [{ "hooks": [...] }]`,
        autoFixable: false,
      }));
      continue;
    }

    for (const handlerGroup of handlers) {
      // Validate matcher format
      if (handlerGroup.matcher !== undefined) {
        if (typeof handlerGroup.matcher === 'object') {
          findings.push(finding({
            scanner: SCANNER,
            severity: SEVERITY.high,
            title: 'Matcher must be a string, not an object',
            description: `${file.relPath}: "${event}" has a matcher that is an object. Matcher should be a simple string like "Bash" or "Edit|Write".`,
            file: file.absPath,
            evidence: JSON.stringify(handlerGroup.matcher),
            recommendation: 'Change matcher to a string: "matcher": "Bash"',
            autoFixable: true,
          }));
        }
      }

      if (!handlerGroup.hooks || !Array.isArray(handlerGroup.hooks)) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.high,
          title: 'Missing hooks array in handler group',
          description: `${file.relPath}: "${event}" handler group is missing the "hooks" array.`,
          file: file.absPath,
          recommendation: 'Add "hooks": [{ "type": "command", "command": "..." }]',
          autoFixable: false,
        }));
        continue;
      }

      for (const hook of handlerGroup.hooks) {
        // Validate handler type
        if (!hook.type || !VALID_TYPES.has(hook.type)) {
          findings.push(finding({
            scanner: SCANNER,
            severity: SEVERITY.high,
            title: 'Invalid hook handler type',
            description: `${file.relPath}: "${event}" has handler with type "${hook.type || '(missing)'}".`,
            file: file.absPath,
            evidence: `type: "${hook.type || ''}"`,
            recommendation: `Valid types: ${[...VALID_TYPES].join(', ')}`,
            autoFixable: false,
          }));
        }

        // For command hooks, check script existence
        if (hook.type === 'command' && hook.command) {
          const scriptPath = extractScriptPath(hook.command, baseDir);
          if (scriptPath) {
            try {
              await stat(scriptPath);
            } catch {
              findings.push(finding({
                scanner: SCANNER,
                severity: SEVERITY.high,
                title: 'Hook script not found',
                description: `${file.relPath}: "${event}" references script that does not exist.`,
                file: file.absPath,
                evidence: hook.command,
                recommendation: `Create the script at: ${scriptPath}`,
                autoFixable: false,
              }));
            }
          }
        }
|
|
||||||
|
// Timeout validation
|
||||||
|
if (hook.timeout !== undefined) {
|
||||||
|
if (typeof hook.timeout !== 'number') {
|
||||||
|
findings.push(finding({
|
||||||
|
scanner: SCANNER,
|
||||||
|
severity: SEVERITY.medium,
|
||||||
|
title: 'Hook timeout must be a number',
|
||||||
|
description: `${file.relPath}: "${event}" has non-numeric timeout.`,
|
||||||
|
file: file.absPath,
|
||||||
|
evidence: `timeout: ${JSON.stringify(hook.timeout)}`,
|
||||||
|
recommendation: 'Set timeout to a number (milliseconds).',
|
||||||
|
autoFixable: true,
|
||||||
|
}));
|
||||||
|
} else if (hook.timeout < MIN_TIMEOUT || hook.timeout > MAX_TIMEOUT) {
|
||||||
|
findings.push(finding({
|
||||||
|
scanner: SCANNER,
|
||||||
|
severity: SEVERITY.low,
|
||||||
|
title: 'Hook timeout outside recommended range',
|
||||||
|
description: `${file.relPath}: "${event}" timeout is ${hook.timeout}ms. Recommended range: ${MIN_TIMEOUT}-${MAX_TIMEOUT}ms.`,
|
||||||
|
file: file.absPath,
|
||||||
|
evidence: `timeout: ${hook.timeout}`,
|
||||||
|
recommendation: `Set timeout between ${MIN_TIMEOUT} and ${MAX_TIMEOUT}ms.`,
|
||||||
|
autoFixable: false,
|
||||||
|
}));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Extract a filesystem path from a hook command string.
|
||||||
|
* Handles ${CLAUDE_PLUGIN_ROOT} variable substitution.
|
||||||
|
*/
|
||||||
|
function extractScriptPath(command, baseDir) {
|
||||||
|
// Extract the script path from common patterns:
|
||||||
|
// "bash /path/to/script.sh"
|
||||||
|
// "node ${CLAUDE_PLUGIN_ROOT}/hooks/scripts/foo.mjs"
|
||||||
|
const match = command.match(/(?:bash|node|sh)\s+(.+?)(?:\s|$)/);
|
||||||
|
if (!match) return null;
|
||||||
|
|
||||||
|
let scriptPath = match[1].trim();
|
||||||
|
|
||||||
|
// Replace ${CLAUDE_PLUGIN_ROOT} with baseDir (best guess)
|
||||||
|
scriptPath = scriptPath.replace(/\$\{CLAUDE_PLUGIN_ROOT\}/g, resolve(baseDir, '..'));
|
||||||
|
scriptPath = scriptPath.replace(/\$CLAUDE_PLUGIN_ROOT/g, resolve(baseDir, '..'));
|
||||||
|
|
||||||
|
// Don't validate absolute paths that use env vars we can't resolve
|
||||||
|
if (scriptPath.includes('$')) return null;
|
||||||
|
|
||||||
|
return resolve(baseDir, scriptPath);
|
||||||
|
}
|
||||||
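The path-extraction logic above can be exercised on its own. A minimal standalone sketch (copy of the function, with hypothetical command strings and a hypothetical base directory; the real scanner passes the discovered config directory):

```javascript
// Standalone copy of extractScriptPath for illustration. Assumes the plugin
// root resolves to the parent of baseDir, as the scanner's "best guess" does.
import { resolve } from 'node:path';

function extractScriptPath(command, baseDir) {
  // Match "bash <path>", "node <path>", or "sh <path>" up to the first space.
  const match = command.match(/(?:bash|node|sh)\s+(.+?)(?:\s|$)/);
  if (!match) return null;

  let scriptPath = match[1].trim();
  const pluginRoot = resolve(baseDir, '..');
  scriptPath = scriptPath.replace(/\$\{CLAUDE_PLUGIN_ROOT\}/g, pluginRoot);
  scriptPath = scriptPath.replace(/\$CLAUDE_PLUGIN_ROOT/g, pluginRoot);

  // Unresolvable env vars remain: skip validation rather than guess.
  if (scriptPath.includes('$')) return null;
  return resolve(baseDir, scriptPath);
}
```

Note the regex stops at the first whitespace, so quoted paths containing spaces are not handled; the scanner then simply skips existence checking for them.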
185
plugins/config-audit/scanners/import-resolver.mjs
Normal file
@@ -0,0 +1,185 @@
/**
 * IMP Scanner — Import Resolver
 * Resolves @import references in CLAUDE.md files: broken links, circular refs, deep chains.
 * Finding IDs: CA-IMP-NNN
 */

import { resolve, dirname, basename } from 'node:path';
import { tmpdir } from 'node:os';
import { stat } from 'node:fs/promises';
import { readTextFile } from './lib/file-discovery.mjs';
import { finding, scannerResult } from './lib/output.mjs';
import { SEVERITY } from './lib/severity.mjs';
import { findImports } from './lib/yaml-parser.mjs';
import { truncate } from './lib/string-utils.mjs';

const SCANNER = 'IMP';
const MAX_CHAIN_DEPTH = 5;
const HARD_LIMIT = 20;

/**
 * Check if a file exists.
 * @param {string} absPath
 * @returns {Promise<boolean>}
 */
async function fileExists(absPath) {
  try {
    await stat(absPath);
    return true;
  } catch {
    return false;
  }
}

/**
 * Resolve an import path relative to the containing file.
 * @param {string} importPath
 * @param {string} containingFile
 * @returns {{ resolved: string, hasTilde: boolean }}
 */
function resolveImportPath(importPath, containingFile) {
  const hasTilde = importPath.startsWith('~');
  let resolved;

  if (hasTilde) {
    const home = process.env.HOME || process.env.USERPROFILE || tmpdir();
    resolved = resolve(importPath.replace(/^~/, home));
  } else if (importPath.startsWith('/')) {
    resolved = importPath;
  } else {
    resolved = resolve(dirname(containingFile), importPath);
  }

  return { resolved, hasTilde };
}

/**
 * Walk imports recursively from a starting file via DFS.
 * @param {string} file - Absolute path to current file
 * @param {string[]} chain - Current chain of files (for cycle detection)
 * @param {Set<string>} reported - Set of "from::to" pairs already reported
 * @param {object[]} findings - Accumulator for findings
 */
async function walkImports(file, chain, reported, findings) {
  const content = await readTextFile(file);
  if (!content) return;

  const imports = findImports(content);
  for (const imp of imports) {
    const { resolved, hasTilde } = resolveImportPath(imp.path, file);
    const reportKey = `${file}::${resolved}`;

    // Tilde path warning
    if (hasTilde && !reported.has(`tilde::${resolved}`)) {
      reported.add(`tilde::${resolved}`);
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.medium,
        title: 'Tilde path in @import',
        description: `@${imp.path} uses ~ which may not expand correctly in all contexts.`,
        file,
        line: imp.line,
        evidence: `@${imp.path}`,
        recommendation: 'Use a relative path or absolute path without tilde expansion.',
      }));
    }

    // Check file existence
    const exists = await fileExists(resolved);
    if (!exists) {
      if (!reported.has(reportKey)) {
        reported.add(reportKey);
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.high,
          title: 'Broken @import link',
          description: `@${imp.path} references a file that does not exist.`,
          file,
          line: imp.line,
          evidence: `@${imp.path} → ${truncate(resolved, 80)}`,
          recommendation: 'Fix the path or create the missing file.',
        }));
      }
      continue;
    }

    // Circular reference detection
    if (chain.includes(resolved)) {
      if (!reported.has(reportKey)) {
        reported.add(reportKey);
        const cycleStart = chain.indexOf(resolved);
        const cycle = chain.slice(cycleStart).map(f => basename(f)).join(' → ');
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.medium,
          title: 'Circular @import reference',
          description: `@${imp.path} creates a circular import chain.`,
          file,
          line: imp.line,
          evidence: `${cycle} → ${basename(resolved)}`,
          recommendation: 'Break the circular dependency by removing one of the @imports.',
        }));
      }
      continue;
    }

    // Deep chain warning (no continue here: keep walking so the hard limit
    // below is actually reachable; only warn once per target)
    if (chain.length >= MAX_CHAIN_DEPTH) {
      if (!reported.has(`deep::${resolved}`)) {
        reported.add(`deep::${resolved}`);
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.low,
          title: 'Deep @import chain',
          description: `@${imp.path} is at depth ${chain.length} (>${MAX_CHAIN_DEPTH} hops).`,
          file,
          line: imp.line,
          evidence: `Chain depth: ${chain.length}`,
          recommendation: 'Flatten the import hierarchy to reduce nesting.',
        }));
      }
    }

    // Hard limit safety bail
    if (chain.length >= HARD_LIMIT) continue;

    // Recurse
    await walkImports(resolved, [...chain, resolved], reported, findings);
  }
}

/**
 * Scan all CLAUDE.md files for @import issues.
 * @param {string} targetPath
 * @param {{ files: import('./lib/file-discovery.mjs').ConfigFile[] }} discovery
 * @returns {Promise<object>}
 */
export async function scan(targetPath, discovery) {
  const start = Date.now();
  const claudeMdFiles = discovery.files.filter(f => f.type === 'claude-md');
  const findings = [];
  let filesScanned = 0;

  if (claudeMdFiles.length === 0) {
    return scannerResult(SCANNER, 'skipped', [], 0, Date.now() - start);
  }

  const reported = new Set();

  for (const file of claudeMdFiles) {
    const content = await readTextFile(file.absPath);
    if (!content) continue;

    filesScanned++;
    const imports = findImports(content);
    if (imports.length === 0) continue;

    await walkImports(file.absPath, [file.absPath], reported, findings);
  }

  return scannerResult(SCANNER, 'ok', findings, filesScanned, Date.now() - start);
}
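The three-way path resolution (tilde, absolute, relative) can be sketched standalone. The `home` parameter here is an assumption added for determinism; the real function reads `HOME`/`USERPROFILE` from the environment:

```javascript
// Standalone sketch of resolveImportPath. The home directory is injected as
// a parameter (hypothetical default) instead of read from process.env.
import { resolve, dirname } from 'node:path';

function resolveImportPath(importPath, containingFile, home = '/home/user') {
  const hasTilde = importPath.startsWith('~');
  if (hasTilde) {
    // "~/x" → "<home>/x"
    return { resolved: resolve(importPath.replace(/^~/, home)), hasTilde };
  }
  if (importPath.startsWith('/')) {
    // Absolute paths pass through unchanged.
    return { resolved: importPath, hasTilde };
  }
  // Relative paths resolve against the importing file's directory.
  return { resolved: resolve(dirname(containingFile), importPath), hasTilde };
}
```

Because relative imports resolve against the *containing* file rather than the scan root, a chain of imports across directories still lands on the right files.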
179
plugins/config-audit/scanners/lib/backup.mjs
Normal file
@@ -0,0 +1,179 @@
/**
 * Backup library for config-audit.
 * Creates timestamped backups of config files with checksums and manifests.
 * Zero external dependencies.
 */

import { readFileSync, writeFileSync, copyFileSync, mkdirSync, readdirSync, existsSync, statSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { createHash } from 'node:crypto';
import { homedir } from 'node:os';

const BACKUP_ROOT = join(homedir(), '.config-audit', 'backups');
const MAX_BACKUPS = 10;

/**
 * Get the backup root directory path.
 * @returns {string}
 */
export function getBackupDir() {
  return BACKUP_ROOT;
}

/**
 * Generate a timestamp-based backup ID.
 * @returns {string} Format: YYYYMMDD_HHMMSS
 */
export function generateBackupId() {
  const now = new Date();
  const y = now.getFullYear();
  const m = String(now.getMonth() + 1).padStart(2, '0');
  const d = String(now.getDate()).padStart(2, '0');
  const h = String(now.getHours()).padStart(2, '0');
  const min = String(now.getMinutes()).padStart(2, '0');
  const s = String(now.getSeconds()).padStart(2, '0');
  return `${y}${m}${d}_${h}${min}${s}`;
}

/**
 * Create a safe filename from a file path (replace path separators with _).
 * @param {string} filePath
 * @returns {string}
 */
export function safeFileName(filePath) {
  return filePath.replace(/[\\\/]/g, '_');
}

/**
 * Calculate SHA-256 checksum of a buffer or string.
 * @param {Buffer|string} content
 * @returns {string}
 */
export function checksum(content) {
  return createHash('sha256').update(content).digest('hex');
}

/**
 * Create a backup of the specified files.
 * @param {string[]} files - Array of absolute file paths to back up
 * @param {object} [opts]
 * @param {string} [opts.backupId] - Override backup ID (for testing)
 * @returns {{ backupId: string, backupPath: string, manifest: object }}
 */
export function createBackup(files, opts = {}) {
  const backupId = opts.backupId || generateBackupId();
  const backupPath = join(BACKUP_ROOT, backupId);
  const filesDir = join(backupPath, 'files');

  mkdirSync(filesDir, { recursive: true });

  const manifestFiles = [];

  for (const file of files) {
    if (!existsSync(file)) continue;

    const safeName = safeFileName(file);
    copyFileSync(file, join(filesDir, safeName));

    const content = readFileSync(file);
    const hash = checksum(content);
    const sizeBytes = statSync(file).size;

    manifestFiles.push({
      originalPath: file,
      backupPath: `./files/${safeName}`,
      checksum: hash,
      sizeBytes,
    });
  }

  const manifest = {
    created_at: new Date().toISOString(),
    backup_id: backupId,
    files: manifestFiles,
  };

  // Write manifest as YAML-like format
  const manifestYaml = serializeManifest(manifest);
  writeFileSync(join(backupPath, 'manifest.yaml'), manifestYaml);

  // Cleanup old backups
  cleanupOldBackups();

  return { backupId, backupPath, manifest };
}

/**
 * Serialize manifest to YAML-like format.
 * @param {object} manifest
 * @returns {string}
 */
function serializeManifest(manifest) {
  let yaml = `created_at: "${manifest.created_at}"\n`;
  yaml += `backup_id: "${manifest.backup_id}"\n`;
  yaml += `files:\n`;
  for (const f of manifest.files) {
    yaml += `  - original_path: "${f.originalPath}"\n`;
    yaml += `    backup_path: "${f.backupPath}"\n`;
    yaml += `    checksum: "${f.checksum}"\n`;
    yaml += `    size_bytes: ${f.sizeBytes}\n`;
  }
  return yaml;
}

/**
 * Parse a manifest.yaml file content.
 * @param {string} content
 * @returns {object}
 */
export function parseManifest(content) {
  const result = { created_at: '', backup_id: '', files: [] };

  const createdMatch = content.match(/created_at:\s*"([^"]+)"/);
  if (createdMatch) result.created_at = createdMatch[1];

  const idMatch = content.match(/backup_id:\s*"([^"]+)"/);
  if (idMatch) result.backup_id = idMatch[1];

  // Parse file entries
  const fileBlocks = content.split(/\n\s+-\s+original_path:/).slice(1);
  for (const block of fileBlocks) {
    const origMatch = block.match(/^\s*"([^"]+)"/);
    const bpMatch = block.match(/backup_path:\s*"([^"]+)"/);
    const csMatch = block.match(/checksum:\s*"([^"]+)"/);
    const szMatch = block.match(/size_bytes:\s*(\d+)/);

    if (origMatch && bpMatch && csMatch) {
      result.files.push({
        originalPath: origMatch[1],
        backupPath: bpMatch[1],
        checksum: csMatch[1],
        sizeBytes: szMatch ? parseInt(szMatch[1], 10) : 0,
      });
    }
  }

  return result;
}

/**
 * Remove old backups beyond MAX_BACKUPS.
 */
function cleanupOldBackups() {
  if (!existsSync(BACKUP_ROOT)) return;

  const dirs = readdirSync(BACKUP_ROOT, { withFileTypes: true })
    .filter(d => d.isDirectory())
    .map(d => d.name)
    .sort();

  if (dirs.length > MAX_BACKUPS) {
    const toDelete = dirs.slice(0, dirs.length - MAX_BACKUPS);
    for (const dir of toDelete) {
      rmSync(join(BACKUP_ROOT, dir), { recursive: true, force: true });
    }
  }
}

export { MAX_BACKUPS };
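The serializer and parser above are designed to round-trip. A condensed sketch with inline copies of both helpers (copies, not imports, since the module lives inside the plugin tree; the manifest data is hypothetical):

```javascript
// Condensed copies of serializeManifest/parseManifest from backup.mjs,
// exercised on a hypothetical single-file manifest.
function serializeManifest(manifest) {
  let yaml = `created_at: "${manifest.created_at}"\n`;
  yaml += `backup_id: "${manifest.backup_id}"\n`;
  yaml += `files:\n`;
  for (const f of manifest.files) {
    yaml += `  - original_path: "${f.originalPath}"\n`;
    yaml += `    backup_path: "${f.backupPath}"\n`;
    yaml += `    checksum: "${f.checksum}"\n`;
    yaml += `    size_bytes: ${f.sizeBytes}\n`;
  }
  return yaml;
}

function parseManifest(content) {
  const result = { created_at: '', backup_id: '', files: [] };
  const createdMatch = content.match(/created_at:\s*"([^"]+)"/);
  if (createdMatch) result.created_at = createdMatch[1];
  const idMatch = content.match(/backup_id:\s*"([^"]+)"/);
  if (idMatch) result.backup_id = idMatch[1];
  // Each file entry starts a new block after "- original_path:".
  const fileBlocks = content.split(/\n\s+-\s+original_path:/).slice(1);
  for (const block of fileBlocks) {
    const origMatch = block.match(/^\s*"([^"]+)"/);
    const bpMatch = block.match(/backup_path:\s*"([^"]+)"/);
    const csMatch = block.match(/checksum:\s*"([^"]+)"/);
    const szMatch = block.match(/size_bytes:\s*(\d+)/);
    if (origMatch && bpMatch && csMatch) {
      result.files.push({
        originalPath: origMatch[1],
        backupPath: bpMatch[1],
        checksum: csMatch[1],
        sizeBytes: szMatch ? parseInt(szMatch[1], 10) : 0,
      });
    }
  }
  return result;
}

const manifest = {
  created_at: '2025-01-01T00:00:00.000Z',
  backup_id: '20250101_000000',
  files: [{ originalPath: '/tmp/settings.json', backupPath: './files/_tmp_settings.json', checksum: 'abc123', sizeBytes: 42 }],
};
const roundTripped = parseManifest(serializeManifest(manifest));
```

Regex-based parsing keeps the module dependency-free, at the cost of only handling the exact quoting style the serializer emits.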
124
plugins/config-audit/scanners/lib/baseline.mjs
Normal file
@@ -0,0 +1,124 @@
/**
 * Baseline manager for config-audit.
 * Stores and retrieves scanner envelopes as named baselines.
 * Zero external dependencies.
 */

import { readFile, writeFile, readdir, unlink, mkdir, stat } from 'node:fs/promises';
import { join } from 'node:path';
import { homedir } from 'node:os';

const BASELINES_DIR = join(homedir(), '.config-audit', 'baselines');

/**
 * Get the baselines directory path.
 * @returns {string}
 */
export function getBaselinesDir() {
  return BASELINES_DIR;
}

/**
 * Save a scanner envelope as a named baseline.
 * @param {object} envelope - Full envelope from scan-orchestrator
 * @param {string} [name='default'] - Baseline name
 * @returns {Promise<{ path: string, name: string }>}
 */
export async function saveBaseline(envelope, name = 'default') {
  await mkdir(BASELINES_DIR, { recursive: true });

  const enriched = {
    ...envelope,
    _baseline: {
      saved_at: new Date().toISOString(),
      target_path: envelope.meta?.target || '',
      finding_count: envelope.aggregate?.total_findings || 0,
      score: avgScore(envelope),
    },
  };

  const filePath = join(BASELINES_DIR, `${name}.json`);
  await writeFile(filePath, JSON.stringify(enriched, null, 2), 'utf-8');

  return { path: filePath, name };
}

/**
 * Load a named baseline.
 * @param {string} [name='default'] - Baseline name
 * @returns {Promise<object|null>} Envelope or null if not found
 */
export async function loadBaseline(name = 'default') {
  const filePath = join(BASELINES_DIR, `${name}.json`);
  try {
    const content = await readFile(filePath, 'utf-8');
    return JSON.parse(content);
  } catch {
    return null;
  }
}

/**
 * List all saved baselines.
 * @returns {Promise<{ baselines: Array<{ name: string, savedAt: string, targetPath: string, findingCount: number, score: number }> }>}
 */
export async function listBaselines() {
  try {
    await stat(BASELINES_DIR);
  } catch {
    return { baselines: [] };
  }

  const entries = await readdir(BASELINES_DIR);
  const baselines = [];

  for (const entry of entries) {
    if (!entry.endsWith('.json')) continue;
    const name = entry.replace(/\.json$/, '');
    const filePath = join(BASELINES_DIR, entry);

    try {
      const content = await readFile(filePath, 'utf-8');
      const data = JSON.parse(content);
      const meta = data._baseline || {};
      baselines.push({
        name,
        savedAt: meta.saved_at || '',
        targetPath: meta.target_path || '',
        findingCount: meta.finding_count || 0,
        score: meta.score || 0,
      });
    } catch {
      // Corrupt baseline: still list it, with placeholder metadata
      baselines.push({ name, savedAt: '', targetPath: '', findingCount: 0, score: 0 });
    }
  }

  return { baselines };
}

/**
 * Delete a named baseline.
 * @param {string} name - Baseline name
 * @returns {Promise<{ deleted: boolean }>}
 */
export async function deleteBaseline(name) {
  const filePath = join(BASELINES_DIR, `${name}.json`);
  try {
    await unlink(filePath);
    return { deleted: true };
  } catch {
    return { deleted: false };
  }
}

// --- Internal helpers ---

function avgScore(envelope) {
  const scanners = envelope.scanners || [];
  if (scanners.length === 0) return 0;
  // Simple: count findings as proxy for score
  const total = envelope.aggregate?.total_findings || 0;
  // Lower findings = higher score. Cap at 100.
  return Math.max(0, 100 - total * 3);
}
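The baseline "score" is a deliberately simple findings-count proxy: every finding costs 3 points regardless of severity, floored at 0. Isolated as a pure function (hypothetical name, same arithmetic as `avgScore` above):

```javascript
// Findings-count score proxy used when saving a baseline: 100 minus 3 points
// per finding, never below 0. Severity is intentionally ignored here.
function scoreFromFindings(totalFindings) {
  return Math.max(0, 100 - totalFindings * 3);
}
```

So a config with 34 or more findings already bottoms out at 0, which keeps baseline comparisons coarse but cheap.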
287
plugins/config-audit/scanners/lib/diff-engine.mjs
Normal file
@@ -0,0 +1,287 @@
/**
 * Diff engine for config-audit.
 * Compares two scanner envelopes (baseline vs current) to detect drift.
 * Zero external dependencies.
 */

import { scoreByArea } from './scoring.mjs';
import { gradeFromPassRate } from './severity.mjs';

/**
 * Diff two scanner envelopes.
 * @param {object} baseline - Full envelope from scan-orchestrator
 * @param {object} current - Full envelope from scan-orchestrator
 * @returns {object} Diff result with new, resolved, unchanged, moved findings + score changes
 */
export function diffEnvelopes(baseline, current) {
  const baseFindings = extractFindings(baseline);
  const currFindings = extractFindings(current);

  // Build lookup maps keyed by scanner+title+file
  const baseByKey = groupByKey(baseFindings);
  const currByKey = groupByKey(currFindings);

  // Also build maps by scanner+title (ignoring file) for moved detection
  const baseByScannerTitle = groupByScannerTitle(baseFindings);
  const currByScannerTitle = groupByScannerTitle(currFindings);

  const newFindings = [];
  const resolvedFindings = [];
  const unchangedFindings = [];
  const movedFindings = [];

  const matchedBaseKeys = new Set();
  const matchedCurrKeys = new Set();

  // Pass 1: exact matches (scanner+title+file)
  for (const [key, currList] of currByKey.entries()) {
    const baseList = baseByKey.get(key);
    if (baseList && baseList.length > 0) {
      // Match as many as possible
      const matchCount = Math.min(baseList.length, currList.length);
      for (let i = 0; i < matchCount; i++) {
        unchangedFindings.push(currList[i]);
      }
      // Extra in current = new
      for (let i = matchCount; i < currList.length; i++) {
        newFindings.push(currList[i]);
      }
      matchedBaseKeys.add(key);
      matchedCurrKeys.add(key);
    }
  }

  // Pass 2: find moved findings (same scanner+title, different file)
  const resolvedCandidates = [];
  const newCandidates = [];

  for (const [key, baseList] of baseByKey.entries()) {
    if (!matchedBaseKeys.has(key)) {
      resolvedCandidates.push(...baseList);
    } else {
      // Any extras in baseline beyond matched count
      const currList = currByKey.get(key) || [];
      const matchCount = Math.min(baseList.length, currList.length);
      for (let i = matchCount; i < baseList.length; i++) {
        resolvedCandidates.push(baseList[i]);
      }
    }
  }

  for (const [key, currList] of currByKey.entries()) {
    if (!matchedCurrKeys.has(key)) {
      newCandidates.push(...currList);
    }
  }

  // Try to pair resolved candidates with new candidates as "moved"
  const usedResolved = new Set();
  const usedNew = new Set();

  for (let i = 0; i < newCandidates.length; i++) {
    const curr = newCandidates[i];
    for (let j = 0; j < resolvedCandidates.length; j++) {
      if (usedResolved.has(j)) continue;
      const base = resolvedCandidates[j];
      if (base.scanner === curr.scanner && base.title === curr.title && base.file !== curr.file) {
        movedFindings.push({ from: base, to: curr });
        usedResolved.add(j);
        usedNew.add(i);
        break;
      }
    }
  }

  // Remaining unmatched
  for (let i = 0; i < resolvedCandidates.length; i++) {
    if (!usedResolved.has(i)) resolvedFindings.push(resolvedCandidates[i]);
  }
  for (let i = 0; i < newCandidates.length; i++) {
    if (!usedNew.has(i)) newFindings.push(newCandidates[i]);
  }

  // Score changes
  const baseAreas = scoreByArea(baseline.scanners || []);
  const currAreas = scoreByArea(current.scanners || []);

  const baseAvg = avgScore(baseAreas.areas);
  const currAvg = avgScore(currAreas.areas);

  const scoreChange = {
    before: { score: baseAvg, grade: gradeFromPassRate(baseAvg) },
    after: { score: currAvg, grade: gradeFromPassRate(currAvg) },
    delta: currAvg - baseAvg,
  };

  // Per-area changes
  const areaChanges = buildAreaChanges(baseAreas.areas, currAreas.areas);

  // Summary
  const totalBefore = baseFindings.length;
  const totalAfter = currFindings.length;
  const newCount = newFindings.length;
  const resolvedCount = resolvedFindings.length;

  let trend = 'stable';
  if (resolvedCount > newCount) trend = 'improving';
  else if (newCount > resolvedCount) trend = 'degrading';

  return {
    newFindings,
    resolvedFindings,
    unchangedFindings,
    movedFindings,
    scoreChange,
    areaChanges,
    summary: {
      totalBefore,
      totalAfter,
      newCount,
      resolvedCount,
      trend,
    },
  };
}
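The key idea in pass 2 is that a finding which disappears from one file and reappears with the same scanner and title in another file is reported as "moved" rather than as one resolved plus one new finding. A standalone sketch of just that pairing step, with hypothetical finding objects:

```javascript
// Greedy pairing of resolved vs new candidates, as in diffEnvelopes pass 2.
// Findings are hypothetical minimal objects: { scanner, title, file }.
function pairMoved(resolvedCandidates, newCandidates) {
  const moved = [];
  const usedResolved = new Set();
  const usedNew = new Set();
  for (let i = 0; i < newCandidates.length; i++) {
    const curr = newCandidates[i];
    for (let j = 0; j < resolvedCandidates.length; j++) {
      if (usedResolved.has(j)) continue;
      const base = resolvedCandidates[j];
      // Same scanner + title but a different file means the finding moved.
      if (base.scanner === curr.scanner && base.title === curr.title && base.file !== curr.file) {
        moved.push({ from: base, to: curr });
        usedResolved.add(j);
        usedNew.add(i);
        break;
      }
    }
  }
  return {
    moved,
    resolved: resolvedCandidates.filter((_, j) => !usedResolved.has(j)),
    fresh: newCandidates.filter((_, i) => !usedNew.has(i)),
  };
}

const out = pairMoved(
  [{ scanner: 'IMP', title: 'Broken @import link', file: 'a.md' }],
  [
    { scanner: 'IMP', title: 'Broken @import link', file: 'b.md' },
    { scanner: 'IMP', title: 'Deep @import chain', file: 'b.md' },
  ],
);
```

The pairing is greedy and first-match, which is O(n·m) but fine at the scale of a config audit.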
/**
 * Format a diff result into a human-readable terminal report.
 * @param {object} diff - Output from diffEnvelopes()
 * @returns {string}
 */
export function formatDiffReport(diff) {
  const lines = [];
  lines.push('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');
  lines.push(' Config-Audit Drift Report');
  lines.push('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');
  lines.push('');

  // Trend
  const trendIcon = diff.summary.trend === 'improving' ? '↑'
    : diff.summary.trend === 'degrading' ? '↓' : '→';
  const trendLabel = diff.summary.trend.charAt(0).toUpperCase() + diff.summary.trend.slice(1);
  lines.push(` Trend: ${trendIcon} ${trendLabel}`);
  lines.push('');

  // Score
  const sc = diff.scoreChange;
  const deltaSign = sc.delta > 0 ? '+' : '';
  lines.push(` Score: ${sc.before.grade} (${sc.before.score}) → ${sc.after.grade} (${sc.after.score}) ${trendIcon} ${deltaSign}${sc.delta} points`);
  lines.push('');

  // New findings
  if (diff.newFindings.length > 0) {
    lines.push(` New findings (${diff.newFindings.length}):`);
    for (const f of diff.newFindings) {
      const fileInfo = f.file ? ` (${f.file})` : '';
      lines.push(`  - [${f.severity}] ${f.title}${fileInfo}`);
    }
    lines.push('');
  }

  // Resolved
  if (diff.resolvedFindings.length > 0) {
    lines.push(` Resolved (${diff.resolvedFindings.length}):`);
    for (const f of diff.resolvedFindings) {
      lines.push(`  - [${f.severity}] ${f.title}`);
    }
    lines.push('');
  }

  // Moved
  if (diff.movedFindings.length > 0) {
    lines.push(` Moved (${diff.movedFindings.length}):`);
    for (const m of diff.movedFindings) {
      lines.push(`  - [${m.from.severity}] ${m.from.title}: ${m.from.file} → ${m.to.file}`);
    }
    lines.push('');
  }

  // Area changes (only show areas with delta != 0)
  const changedAreas = diff.areaChanges.filter(a => a.delta !== 0);
  if (changedAreas.length > 0) {
    lines.push(' Area changes:');
    for (const a of changedAreas) {
      const sign = a.delta > 0 ? '↑' : '↓';
      const deltaStr = a.delta > 0 ? `+${a.delta}` : `${a.delta}`;
      const padding = '.'.repeat(Math.max(1, 20 - a.name.length));
      lines.push(`  ${a.name} ${padding} ${a.before.grade} (${a.before.score}) → ${a.after.grade} (${a.after.score}) ${sign} ${deltaStr}`);
    }
    lines.push('');
  }

  // Unchanged summary
  if (diff.unchangedFindings.length > 0) {
    lines.push(` Unchanged: ${diff.unchangedFindings.length} finding(s)`);
    lines.push('');
  }

  lines.push('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');

  return lines.join('\n');
}

// --- Internal helpers ---

function extractFindings(envelope) {
  const findings = [];
  for (const scanner of (envelope.scanners || [])) {
    for (const f of (scanner.findings || [])) {
      findings.push(f);
    }
  }
  return findings;
}

function findingKey(f) {
  return `${f.scanner}::${f.title}::${f.file || ''}`;
}

function scannerTitleKey(f) {
  return `${f.scanner}::${f.title}`;
}

function groupByKey(findings) {
  const map = new Map();
  for (const f of findings) {
    const key = findingKey(f);
    if (!map.has(key)) map.set(key, []);
    map.get(key).push(f);
  }
  return map;
}

function groupByScannerTitle(findings) {
  const map = new Map();
  for (const f of findings) {
    const key = scannerTitleKey(f);
    if (!map.has(key)) map.set(key, []);
    map.get(key).push(f);
  }
  return map;
}

function avgScore(areas) {
  if (areas.length === 0) return 0;
  return Math.round(areas.reduce((s, a) => s + a.score, 0) / areas.length);
}

function buildAreaChanges(baseAreas, currAreas) {
  const baseMap = new Map(baseAreas.map(a => [a.name, a]));
  const currMap = new Map(currAreas.map(a => [a.name, a]));

  const allNames = new Set([...baseMap.keys(), ...currMap.keys()]);
  const changes = [];

  for (const name of allNames) {
    const before = baseMap.get(name) || { score: 0, grade: 'F' };
    const after = currMap.get(name) || { score: 0, grade: 'F' };
    changes.push({
|
name,
|
||||||
|
before: { score: before.score, grade: before.grade },
|
||||||
|
after: { score: after.score, grade: after.grade },
|
||||||
|
delta: after.score - before.score,
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
return changes;
|
||||||
|
}
|
||||||
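The pairing logic in `buildAreaChanges` above defaults a missing side to `{ score: 0, grade: 'F' }`, so an area that appears in only one envelope still yields a meaningful delta. A standalone sketch of that behavior, using hypothetical area data:

```javascript
// Missing areas default to score 0 / grade 'F', so an area added since
// the baseline ('Rules' here) shows its full score as a positive delta.
const baseMap = new Map([['Hooks', { score: 60, grade: 'C' }]]);
const currMap = new Map([
  ['Hooks', { score: 80, grade: 'B' }],
  ['Rules', { score: 50, grade: 'D' }],
]);
const allNames = new Set([...baseMap.keys(), ...currMap.keys()]);
const changes = [...allNames].map(name => {
  const before = baseMap.get(name) || { score: 0, grade: 'F' };
  const after = currMap.get(name) || { score: 0, grade: 'F' };
  return { name, delta: after.score - before.score };
});
console.log(changes); // Hooks: +20, Rules: +50 (new area counted from zero)
```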
308
plugins/config-audit/scanners/lib/file-discovery.mjs
Normal file

@@ -0,0 +1,308 @@
/**
 * Config file discovery for config-audit.
 * Finds CLAUDE.md, settings.json, hooks.json, .mcp.json, rules/, plugin.json, etc.
 * Zero external dependencies.
 */

import { readdir, stat, readFile } from 'node:fs/promises';
import { join, relative, dirname, sep } from 'node:path';

const SKIP_DIRS = new Set([
  'node_modules', '.git', 'dist', 'build', 'coverage', '__pycache__',
  '.next', '.nuxt', '.output', '.cache', '.turbo', '.parcel-cache',
  'vendor', 'venv', '.venv', '.tox',
]);

/** Config file patterns to discover */
const CONFIG_PATTERNS = {
  claudeMd: /^CLAUDE\.md$|^CLAUDE\.local\.md$/i,
  settingsJson: /^settings\.json$|^settings\.local\.json$/,
  mcpJson: /^\.mcp\.json$/,
  pluginJson: /^plugin\.json$/,
  hooksJson: /^hooks\.json$/,
  rulesDir: /^rules$/,
  agentsMd: /\.md$/,
  commandsMd: /\.md$/,
  skillsMd: /^SKILL\.md$/i,
  keybindings: /^keybindings\.json$/,
  claudeJson: /^\.claude\.json$/,
};

/**
 * Discover all Claude Code config files under a target path.
 * @param {string} targetPath
 * @param {object} [opts]
 * @param {number} [opts.maxFiles=2000] - max files to return
 * @param {number} [opts.maxDepth=10] - max directory depth to recurse
 * @param {boolean} [opts.includeGlobal=false] - also scan ~/.claude/
 * @returns {Promise<{ files: ConfigFile[], skipped: number }>}
 *
 * @typedef {{ absPath: string, relPath: string, type: string, scope: string, size: number }} ConfigFile
 */
export async function discoverConfigFiles(targetPath, opts = {}) {
  const maxFiles = opts.maxFiles || 2000;
  const maxDepth = opts.maxDepth || 10;
  const files = [];
  const skippedRef = { count: 0 };

  await walkForConfig(targetPath, targetPath, files, skippedRef, maxFiles, undefined, maxDepth);

  if (opts.includeGlobal) {
    const home = process.env.HOME || process.env.USERPROFILE || '';
    const claudeDir = join(home, '.claude');
    try {
      await stat(claudeDir);
      await walkForConfig(claudeDir, claudeDir, files, skippedRef, maxFiles, 'user', maxDepth);
    } catch { /* .claude dir doesn't exist */ }

    // ~/.claude.json
    const claudeJson = join(home, '.claude.json');
    try {
      const s = await stat(claudeJson);
      files.push({
        absPath: claudeJson,
        relPath: '.claude.json',
        type: 'claude-json',
        scope: 'user',
        size: s.size,
      });
    } catch { /* doesn't exist */ }
  }

  return { files, skipped: skippedRef.count };
}

/**
 * Walk directory tree looking for config files.
 */
async function walkForConfig(dir, basePath, files, skippedRef, maxFiles, forceScope, maxDepth) {
  if (files.length >= maxFiles) return;

  let entries;
  try {
    entries = await readdir(dir, { withFileTypes: true });
  } catch {
    return;
  }

  for (const entry of entries) {
    if (files.length >= maxFiles) break;
    const fullPath = join(dir, entry.name);
    const rel = relative(basePath, fullPath);

    if (entry.isDirectory()) {
      if (SKIP_DIRS.has(entry.name)) {
        skippedRef.count++;
        continue;
      }

      // Check for .claude directory (contains settings, rules, etc.)
      if (entry.name === '.claude' || entry.name === '.claude-plugin') {
        await walkForConfig(fullPath, basePath, files, skippedRef, maxFiles, forceScope, maxDepth);
        continue;
      }

      // Check for rules/ inside .claude
      if (entry.name === 'rules' && dirname(rel).includes('.claude')) {
        await walkRulesDir(fullPath, basePath, files, maxFiles, forceScope || classifyScope(rel, basePath));
        continue;
      }

      // Check for agents/, commands/, skills/, hooks/ dirs
      if (['agents', 'commands', 'skills', 'hooks'].includes(entry.name)) {
        await walkForConfig(fullPath, basePath, files, skippedRef, maxFiles, forceScope, maxDepth);
        continue;
      }

      // Recurse into subdirectories (configurable depth limit)
      const depth = rel.split(sep).length;
      if (depth < maxDepth) {
        await walkForConfig(fullPath, basePath, files, skippedRef, maxFiles, forceScope, maxDepth);
      }
    } else if (entry.isFile()) {
      const fileType = classifyFile(entry.name, rel);
      if (fileType) {
        let s;
        try {
          s = await stat(fullPath);
        } catch {
          continue;
        }
        files.push({
          absPath: fullPath,
          relPath: rel,
          type: fileType,
          scope: forceScope || classifyScope(rel, basePath),
          size: s.size,
        });
      }
    }
  }
}

/**
 * Walk a rules directory and collect all files (including non-.md for validation).
 */
async function walkRulesDir(dir, basePath, files, maxFiles, scope) {
  let entries;
  try {
    entries = await readdir(dir, { withFileTypes: true });
  } catch {
    return;
  }
  for (const entry of entries) {
    if (files.length >= maxFiles) break;
    const fullPath = join(dir, entry.name);
    if (entry.isFile()) {
      let s;
      try {
        s = await stat(fullPath);
      } catch {
        continue;
      }
      files.push({
        absPath: fullPath,
        relPath: relative(basePath, fullPath),
        type: 'rule',
        scope,
        size: s.size,
      });
    } else if (entry.isDirectory()) {
      await walkRulesDir(fullPath, basePath, files, maxFiles, scope);
    }
  }
}

/**
 * Classify a file by name and path.
 * @returns {string | null}
 */
function classifyFile(name, relPath) {
  if (CONFIG_PATTERNS.claudeMd.test(name)) return 'claude-md';
  if (name === 'settings.json' || name === 'settings.local.json') {
    if (relPath.includes('.claude')) return 'settings-json';
  }
  if (name === '.mcp.json') return 'mcp-json';
  if (name === 'plugin.json' && relPath.includes('.claude-plugin')) return 'plugin-json';
  if (name === 'hooks.json' && relPath.includes('hooks')) return 'hooks-json';
  if (name === 'keybindings.json') return 'keybindings-json';
  if (name === '.claude.json') return 'claude-json';

  // Agent/command/skill markdown files
  if (name.endsWith('.md') && relPath.includes(`agents${sep}`)) return 'agent-md';
  if (name.endsWith('.md') && relPath.includes(`commands${sep}`)) return 'command-md';
  if (/^SKILL\.md$/i.test(name)) return 'skill-md';

  return null;
}

/**
 * Determine the scope of a config file.
 * @returns {'managed' | 'user' | 'project' | 'local' | 'plugin'}
 */
function classifyScope(relPath, basePath) {
  if (relPath.includes('managed-settings')) return 'managed';
  if (basePath.includes(`.claude${sep}plugins`)) return 'plugin';
  if (relPath.includes('.local.')) return 'local';
  const home = process.env.HOME || process.env.USERPROFILE || '';
  if (basePath.startsWith(join(home, '.claude'))) return 'user';
  return 'project';
}

/** Common developer directory names under $HOME */
const DEV_DIRS = ['repos', 'projects', 'src', 'code', 'dev', 'work', 'Sites', 'Developer'];

/**
 * Discover all root paths for a full-machine scan.
 * Only returns paths that actually exist on the filesystem.
 * @returns {Promise<Array<{ path: string, maxDepth: number }>>}
 */
export async function discoverFullMachinePaths() {
  const home = process.env.HOME || process.env.USERPROFILE || '';
  const candidates = [
    // ~/.claude — deepest (plugins can be 6+ levels deep)
    { path: join(home, '.claude'), maxDepth: 10 },
    // Managed system paths
    { path: '/Library/Application Support/ClaudeCode', maxDepth: 5 },
    { path: '/etc/claude-code', maxDepth: 5 },
    // Common developer directories
    ...DEV_DIRS.map(d => ({ path: join(home, d), maxDepth: 5 })),
  ];

  const existing = [];
  for (const c of candidates) {
    try {
      const s = await stat(c.path);
      if (s.isDirectory()) existing.push(c);
    } catch { /* not present */ }
  }
  return existing;
}

/**
 * Discover config files across multiple root paths.
 * Calls discoverConfigFiles() per root with correct basePath (preserves scope/relPath).
 * Deduplicates files by absPath — first occurrence wins.
 * @param {Array<{ path: string, maxDepth: number }>} roots
 * @param {object} [opts]
 * @param {number} [opts.maxFiles=2000] - global max across all roots
 * @returns {Promise<{ files: ConfigFile[], skipped: number }>}
 */
export async function discoverConfigFilesMulti(roots, opts = {}) {
  const maxFiles = opts.maxFiles || 2000;
  const seen = new Set();
  const allFiles = [];
  let totalSkipped = 0;

  for (const root of roots) {
    if (allFiles.length >= maxFiles) break;

    const result = await discoverConfigFiles(root.path, {
      maxFiles: maxFiles - allFiles.length,
      maxDepth: root.maxDepth,
    });

    totalSkipped += result.skipped;

    for (const f of result.files) {
      if (!seen.has(f.absPath)) {
        seen.add(f.absPath);
        allFiles.push(f);
      }
    }
  }

  // Handle ~/.claude.json separately (single file, not a directory)
  const home = process.env.HOME || process.env.USERPROFILE || '';
  const claudeJson = join(home, '.claude.json');
  if (allFiles.length < maxFiles && !seen.has(claudeJson)) {
    try {
      const s = await stat(claudeJson);
      allFiles.push({
        absPath: claudeJson,
        relPath: '.claude.json',
        type: 'claude-json',
        scope: 'user',
        size: s.size,
      });
    } catch { /* doesn't exist */ }
  }

  return { files: allFiles, skipped: totalSkipped };
}

/**
 * Read a file as UTF-8 text. Returns null on error or if binary.
 * @param {string} absPath
 * @returns {Promise<string | null>}
 */
export async function readTextFile(absPath) {
  try {
    const content = await readFile(absPath, 'utf-8');
    // Check for binary (null bytes in first 8KB)
    const sample = content.slice(0, 8192);
    if (sample.includes('\0')) return null;
    return content;
  } catch {
    return null;
  }
}
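The depth limit in `walkForConfig` counts path segments relative to the scan root and stops recursing once `depth >= maxDepth`. A standalone sketch of that arithmetic, with hypothetical paths:

```javascript
// Depth is the segment count of the path relative to the scan root;
// the walker only enters a directory while depth < maxDepth.
import { join, relative, sep } from 'node:path';

const root = join('home', 'proj');
const candidate = join(root, 'a', 'b', 'c', 'settings.json');
const depth = relative(root, candidate).split(sep).length;
console.log(depth); // 4 — with maxDepth 3, directory c/ would not be entered
```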
121
plugins/config-audit/scanners/lib/output.mjs
Normal file

@@ -0,0 +1,121 @@
/**
 * Finding and result builders for config-audit scanners.
 * Finding IDs: CA-{SCANNER}-{NNN} (e.g. CA-CML-001)
 * Zero external dependencies.
 */

import { riskScore, riskBand, verdict } from './severity.mjs';

let findingCounter = 0;

/** Reset the finding counter. Call in beforeEach of tests and before each scanner run. */
export function resetCounter() {
  findingCounter = 0;
}

/**
 * Create a finding object with auto-incremented ID.
 * @param {object} opts
 * @param {string} opts.scanner - 3-letter scanner prefix (CML, SET, HKV, RUL, etc.)
 * @param {string} opts.severity - critical | high | medium | low | info
 * @param {string} opts.title
 * @param {string} opts.description
 * @param {string} [opts.file] - file path where finding was detected
 * @param {number} [opts.line] - line number
 * @param {string} [opts.evidence] - relevant snippet
 * @param {string} [opts.category] - quality category
 * @param {string} [opts.recommendation] - suggested fix
 * @param {boolean} [opts.autoFixable] - can be auto-fixed
 * @returns {object}
 */
export function finding(opts) {
  findingCounter++;
  const id = `CA-${opts.scanner}-${String(findingCounter).padStart(3, '0')}`;
  return {
    id,
    scanner: opts.scanner,
    severity: opts.severity,
    title: opts.title,
    description: opts.description,
    file: opts.file || null,
    line: opts.line || null,
    evidence: opts.evidence || null,
    category: opts.category || null,
    recommendation: opts.recommendation || null,
    autoFixable: opts.autoFixable || false,
  };
}

/**
 * Create a scanner result envelope.
 * @param {string} scannerName - 3-letter prefix
 * @param {'ok' | 'error' | 'skipped'} status
 * @param {object[]} findings
 * @param {number} filesScanned
 * @param {number} durationMs
 * @param {string} [errorMsg]
 * @returns {object}
 */
export function scannerResult(scannerName, status, findings, filesScanned, durationMs, errorMsg) {
  const counts = { critical: 0, high: 0, medium: 0, low: 0, info: 0 };
  for (const f of findings) {
    if (counts[f.severity] !== undefined) {
      counts[f.severity]++;
    }
  }
  const result = {
    scanner: scannerName,
    status,
    files_scanned: filesScanned,
    duration_ms: durationMs,
    findings,
    counts,
  };
  if (errorMsg) result.error = errorMsg;
  return result;
}

/**
 * Create the top-level output envelope combining all scanner results.
 * @param {string} targetPath
 * @param {object[]} scannerResults
 * @param {number} totalDurationMs
 * @returns {object}
 */
export function envelope(targetPath, scannerResults, totalDurationMs) {
  const aggregate = { critical: 0, high: 0, medium: 0, low: 0, info: 0 };
  let totalFindings = 0;
  let scannersOk = 0;
  let scannersError = 0;
  let scannersSkipped = 0;

  for (const r of scannerResults) {
    for (const sev of Object.keys(aggregate)) {
      aggregate[sev] += (r.counts[sev] || 0);
    }
    totalFindings += r.findings.length;
    if (r.status === 'ok') scannersOk++;
    else if (r.status === 'error') scannersError++;
    else if (r.status === 'skipped') scannersSkipped++;
  }

  return {
    meta: {
      target: targetPath,
      timestamp: new Date().toISOString(),
      version: '2.2.0',
      tool: 'config-audit',
    },
    scanners: scannerResults,
    aggregate: {
      total_findings: totalFindings,
      counts: aggregate,
      risk_score: riskScore(aggregate),
      risk_band: riskBand(riskScore(aggregate)),
      verdict: verdict(aggregate),
      scanners_ok: scannersOk,
      scanners_error: scannersError,
      scanners_skipped: scannersSkipped,
    },
  };
}
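The `CA-{SCANNER}-{NNN}` scheme in `finding()` above pads a shared module-level counter to three digits (zeroed between runs by `resetCounter()`). A standalone restatement of just the ID construction:

```javascript
// Minimal sketch of the ID scheme: a shared counter, zero-padded to
// three digits and prefixed with the 3-letter scanner code.
let counter = 0;
function nextFindingId(scanner) {
  counter++;
  return `CA-${scanner}-${String(counter).padStart(3, '0')}`;
}
console.log(nextFindingId('CML')); // CA-CML-001
console.log(nextFindingId('CML')); // CA-CML-002
```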
278
plugins/config-audit/scanners/lib/report-generator.mjs
Normal file

@@ -0,0 +1,278 @@
/**
 * Unified report generator for config-audit.
 * Produces markdown reports from posture, drift, and plugin health results.
 * Template strings are embedded in JS — no separate .md files to parse.
 * Zero external dependencies.
 */

const MAX_FINDINGS_PER_SCANNER = 10;
const MAX_REPORT_LINES = 500;

/**
 * Generate a posture report in markdown.
 * @param {object} postureResult - Output from runPosture()
 * @returns {string}
 */
export function generatePostureReport(postureResult) {
  const {
    areas, overallGrade, scannerEnvelope,
  } = postureResult;
  const opportunityCount = postureResult.opportunityCount ?? 0;

  // Quality areas only (exclude Feature Coverage)
  const qualityAreas = areas.filter(a => a.name !== 'Feature Coverage');
  const avgScore = qualityAreas.length > 0
    ? Math.round(qualityAreas.reduce((s, a) => s + a.score, 0) / qualityAreas.length)
    : 0;

  const lines = [];
  const ts = scannerEnvelope?.meta?.timestamp || new Date().toISOString();
  const target = scannerEnvelope?.meta?.target || 'unknown';

  lines.push('## Health Assessment');
  lines.push('');
  lines.push(`> **Date:** ${ts.split('T')[0]} `);
  lines.push(`> **Target:** \`${target}\` `);
  lines.push('');

  // Score summary
  lines.push('### Score Summary');
  lines.push('');
  lines.push('| Metric | Value |');
  lines.push('|--------|-------|');
  lines.push(`| Health Grade | **${overallGrade}** (${avgScore}/100) |`);
  lines.push(`| Areas Scanned | ${qualityAreas.length} |`);
  if (opportunityCount > 0) {
    lines.push(`| Opportunities | ${opportunityCount} features available |`);
  }
  lines.push('');

  // Area breakdown
  lines.push('### Area Breakdown');
  lines.push('');
  lines.push('| Area | Grade | Score | Findings |');
  lines.push('|------|-------|-------|----------|');
  for (const a of qualityAreas) {
    lines.push(`| ${a.name} | ${a.grade} | ${a.score} | ${a.findingCount} |`);
  }
  lines.push('');

  // Opportunities pointer (replaces Top Actions)
  if (opportunityCount > 0) {
    lines.push(`> Run \`/config-audit feature-gap\` for ${opportunityCount} context-aware recommendations.`);
    lines.push('');
  }

  // Findings per scanner (collapsed)
  if (scannerEnvelope?.scanners) {
    lines.push('### Findings by Scanner');
    lines.push('');
    for (const sr of scannerEnvelope.scanners) {
      if (sr.findings.length === 0) continue;
      lines.push(`<details>`);
      lines.push(`<summary>${sr.scanner} — ${sr.findings.length} finding(s)</summary>`);
      lines.push('');
      const show = sr.findings.slice(0, MAX_FINDINGS_PER_SCANNER);
      for (const f of show) {
        lines.push(`- \`[${f.severity}]\` ${f.title}${f.file ? ` (${f.file})` : ''}`);
      }
      if (sr.findings.length > MAX_FINDINGS_PER_SCANNER) {
        lines.push(`- _...and ${sr.findings.length - MAX_FINDINGS_PER_SCANNER} more_`);
      }
      lines.push('');
      lines.push('</details>');
      lines.push('');
    }
  }

  return lines.join('\n');
}

/**
 * Generate a drift report in markdown.
 * @param {object} diffResult - Output from diffEnvelopes()
 * @param {string} baselineName - Name of baseline used
 * @returns {string}
 */
export function generateDriftReport(diffResult, baselineName) {
  const lines = [];
  const { summary, scoreChange, newFindings, resolvedFindings, areaChanges } = diffResult;

  const trendIcon = summary.trend === 'improving' ? '↑'
    : summary.trend === 'degrading' ? '↓' : '→';
  const trendLabel = summary.trend.charAt(0).toUpperCase() + summary.trend.slice(1);

  lines.push('## Drift Report');
  lines.push('');
  lines.push(`> **Baseline:** \`${baselineName}\` `);
  lines.push(`> **Trend:** ${trendIcon} ${trendLabel} `);
  lines.push('');

  // Score delta
  const sc = scoreChange;
  const deltaSign = sc.delta > 0 ? '+' : '';
  lines.push('### Score Change');
  lines.push('');
  lines.push(`**${sc.before.grade}** (${sc.before.score}) ${trendIcon} **${sc.after.grade}** (${sc.after.score}) — ${deltaSign}${sc.delta} points`);
  lines.push('');

  // New findings
  if (newFindings.length > 0) {
    lines.push('### New Findings');
    lines.push('');
    lines.push('| Severity | Title | File |');
    lines.push('|----------|-------|------|');
    for (const f of newFindings.slice(0, 20)) {
      lines.push(`| \`${f.severity}\` | ${f.title} | ${f.file || '-'} |`);
    }
    if (newFindings.length > 20) {
      lines.push(`| | _...and ${newFindings.length - 20} more_ | |`);
    }
    lines.push('');
  }

  // Resolved findings
  if (resolvedFindings.length > 0) {
    lines.push('### Resolved Findings');
    lines.push('');
    lines.push('| Severity | Title |');
    lines.push('|----------|-------|');
    for (const f of resolvedFindings.slice(0, 20)) {
      lines.push(`| \`${f.severity}\` | ${f.title} |`);
    }
    if (resolvedFindings.length > 20) {
      lines.push(`| | _...and ${resolvedFindings.length - 20} more_ |`);
    }
    lines.push('');
  }

  // Area changes
  const changed = (areaChanges || []).filter(a => a.delta !== 0);
  if (changed.length > 0) {
    lines.push('### Area Changes');
    lines.push('');
    lines.push('| Area | Before | After | Delta |');
    lines.push('|------|--------|-------|-------|');
    for (const a of changed) {
      const sign = a.delta > 0 ? '+' : '';
      lines.push(`| ${a.name} | ${a.before.grade} (${a.before.score}) | ${a.after.grade} (${a.after.score}) | ${sign}${a.delta} |`);
    }
    lines.push('');
  }

  return lines.join('\n');
}

/**
 * Generate a plugin health report in markdown.
 * @param {object} scanResult - Scanner result from plugin-health-scanner scan()
 * @param {Array<{ name: string, findings: object[], commandCount: number, agentCount: number }>} pluginResults
 * @returns {string}
 */
export function generatePluginHealthReport(scanResult, pluginResults) {
  const lines = [];

  lines.push('## Plugin Health');
  lines.push('');

  if (!pluginResults || pluginResults.length === 0) {
    lines.push('_No plugins found._');
    lines.push('');
    return lines.join('\n');
  }

  // Plugin summary table
  lines.push('| Plugin | Grade | Score | Commands | Agents | Issues |');
  lines.push('|--------|-------|-------|----------|--------|--------|');
  for (const p of pluginResults) {
    const issueCount = p.findings.length;
    const score = Math.max(0, 100 - issueCount * 10);
    const grade = score >= 90 ? 'A' : score >= 75 ? 'B' : score >= 60 ? 'C' : score >= 40 ? 'D' : 'F';
    lines.push(`| ${p.name} | ${grade} | ${score} | ${p.commandCount} | ${p.agentCount} | ${issueCount} |`);
  }
  lines.push('');

  // Per-plugin findings
  for (const p of pluginResults) {
    if (p.findings.length === 0) continue;
    lines.push(`<details>`);
    lines.push(`<summary>${p.name} — ${p.findings.length} issue(s)</summary>`);
    lines.push('');
    for (const f of p.findings.slice(0, MAX_FINDINGS_PER_SCANNER)) {
      lines.push(`- \`[${f.severity}]\` ${f.title}`);
    }
    lines.push('');
    lines.push('</details>');
    lines.push('');
  }

  // Cross-plugin issues (from scanResult.findings where title contains "Cross-plugin")
  const crossPlugin = (scanResult?.findings || []).filter(f => f.title.includes('Cross-plugin'));
  if (crossPlugin.length > 0) {
    lines.push('### Cross-Plugin Issues');
    lines.push('');
    for (const f of crossPlugin) {
      lines.push(`- \`[${f.severity}]\` ${f.title}: ${f.description}`);
    }
    lines.push('');
  }

  return lines.join('\n');
}

/**
 * Generate a unified full report combining all sections.
 * Each input is optional (null = skip that section).
 * @param {object|null} postureResult - From runPosture()
 * @param {object|null} driftResult - { diff, baselineName } from diffEnvelopes()
 * @param {object|null} pluginHealthResult - { scanResult, pluginResults } from plugin-health-scanner
 * @returns {string}
 */
export function generateFullReport(postureResult, driftResult, pluginHealthResult) {
  const lines = [];

  lines.push('# Config-Audit Report');
  lines.push('');
  lines.push(`_Generated: ${new Date().toISOString().split('T')[0]}_`);
  lines.push('');
  lines.push('---');
  lines.push('');

  if (postureResult) {
    lines.push(generatePostureReport(postureResult));
    lines.push('---');
    lines.push('');
  }

  if (driftResult) {
    lines.push(generateDriftReport(driftResult.diff, driftResult.baselineName));
    lines.push('---');
    lines.push('');
  }

  if (pluginHealthResult) {
    lines.push(generatePluginHealthReport(
      pluginHealthResult.scanResult,
      pluginHealthResult.pluginResults,
    ));
    lines.push('---');
    lines.push('');
  }

  if (!postureResult && !driftResult && !pluginHealthResult) {
    lines.push('_No data provided for report._');
    lines.push('');
  }

  // Truncate if over limit
  const result = lines.join('\n');
  const resultLines = result.split('\n');
  if (resultLines.length > MAX_REPORT_LINES) {
    const truncated = resultLines.slice(0, MAX_REPORT_LINES);
    truncated.push('');
    truncated.push(`_Report truncated at ${MAX_REPORT_LINES} lines. Run individual reports for full details._`);
    return truncated.join('\n');
  }

  return result;
}
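The plugin summary table above deducts 10 points per issue from 100 and maps the result onto fixed grade thresholds. Restated standalone:

```javascript
// Score = max(0, 100 - 10 * issues); grade cut-offs at 90/75/60/40.
function pluginGrade(issueCount) {
  const score = Math.max(0, 100 - issueCount * 10);
  return score >= 90 ? 'A' : score >= 75 ? 'B' : score >= 60 ? 'C' : score >= 40 ? 'D' : 'F';
}
console.log(pluginGrade(0)); // A (100)
console.log(pluginGrade(2)); // B (80)
console.log(pluginGrade(7)); // F (30)
```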
310
plugins/config-audit/scanners/lib/scoring.mjs
Normal file

@@ -0,0 +1,310 @@
/**
 * Scoring, maturity, and posture assessment for config-audit.
 * Zero external dependencies.
 */

import { gradeFromPassRate } from './severity.mjs';

// --- Tier weights for utilization calculation ---
const TIER_WEIGHTS = { t1: 3, t2: 2, t3: 1, t4: 1 };
const TIER_COUNTS = { t1: 5, t2: 7, t3: 8, t4: 5 };
const TOTAL_DIMENSIONS = 25;
const MAX_WEIGHTED = Object.entries(TIER_COUNTS).reduce(
  (sum, [tier, count]) => sum + count * TIER_WEIGHTS[tier],
  0,
); // 5*3 + 7*2 + 8*1 + 5*1 = 42

/**
 * Calculate weighted utilization from GAP scanner findings.
 * @param {object[]} gapFindings - Array of GAP scanner findings (each has .category = t1|t2|t3|t4)
 * @param {number} [totalDimensions=25]
 * @returns {{ score: number, overhang: number }}
 */
export function calculateUtilization(gapFindings, totalDimensions = TOTAL_DIMENSIONS) {
  // Count gaps per tier
  const gapsByTier = { t1: 0, t2: 0, t3: 0, t4: 0 };
  for (const f of gapFindings) {
    const tier = f.category;
    if (tier in gapsByTier) gapsByTier[tier]++;
  }

  // Present (non-gap) weight
  let presentWeight = 0;
  for (const [tier, totalCount] of Object.entries(TIER_COUNTS)) {
    const presentCount = totalCount - gapsByTier[tier];
    presentWeight += presentCount * TIER_WEIGHTS[tier];
  }

  const score = Math.round((presentWeight / MAX_WEIGHTED) * 100);
  return { score, overhang: 100 - score };
}

// --- Maturity levels ---
const MATURITY_LEVELS = [
  { level: 0, name: 'Bare', description: 'No CLAUDE.md, default everything' },
  { level: 1, name: 'Configured', description: 'CLAUDE.md + basic settings' },
  { level: 2, name: 'Structured', description: 'Rules, skills, hooks' },
  { level: 3, name: 'Automated', description: 'MCP, custom agents, diverse hooks' },
  { level: 4, name: 'Governed', description: 'Plugins, managed settings, full monitoring' },
];

/**
 * Determine config maturity level (threshold-based: highest level where ALL requirements met).
 * @param {object[]} gapFindings - GAP scanner findings
 * @param {{ files: Array<{ type: string, absPath?: string, scope?: string }> }} discovery
 * @returns {{ level: number, name: string, description: string }}
 */
export function determineMaturityLevel(gapFindings, discovery) {
  const gapIds = new Set(gapFindings.map(f => {
    // Extract the gap check id from the title — match against known titles
    return findGapId(f);
  }));

  const has = (id) => !gapIds.has(id); // feature is present if NOT in gaps

  // Level 1: CLAUDE.md present
  if (!has('t1_1')) return MATURITY_LEVELS[0];

  // Level 2: Level 1 + permissions + hooks + (modular OR path-rules)
  const level2 = has('t1_2') && has('t1_3') && (has('t2_2') || has('t2_3'));
  if (!level2) return MATURITY_LEVELS[1];

  // Level 3: Level 2 + MCP + hook diversity + custom subagents
  const level3 = has('t1_5') && has('t2_5') && has('t2_6');
  if (!level3) return MATURITY_LEVELS[2];

  // Level 4: Level 3 + project MCP in git + custom plugin
  const level4 = has('t4_1') && has('t4_2');
  if (!level4) return MATURITY_LEVELS[3];

  return MATURITY_LEVELS[4];
}

/**
 * Map a GAP finding to its gap check ID based on known title→id mapping.
 * @param {object} finding
 * @returns {string}
 */
function findGapId(finding) {
  return TITLE_TO_ID[finding.title] || 'unknown';
}

/** Title→ID mapping for all 25 gap checks */
const TITLE_TO_ID = {
  'No CLAUDE.md file': 't1_1',
  'No permissions configured': 't1_2',
  'No hooks configured': 't1_3',
  'No custom skills or commands': 't1_4',
  'No MCP servers configured': 't1_5',
  'Settings only at one scope': 't2_1',
  'CLAUDE.md not modular': 't2_2',
  'No path-scoped rules': 't2_3',
  'Auto-memory explicitly disabled': 't2_4',
  'Low hook diversity': 't2_5',
  'No custom subagents': 't2_6',
  'No model configuration': 't2_7',
  'No status line configured': 't3_1',
  'No custom keybindings': 't3_2',
  'Using default output style': 't3_3',
  'No worktree workflow': 't3_4',
  'No advanced skill frontmatter': 't3_5',
  'No subagent isolation': 't3_6',
  'No dynamic skill context': 't3_7',
  'No autoMode classifier': 't3_8',
  'No project .mcp.json in git': 't4_1',
  'No custom plugin': 't4_2',
  'Agent teams not enabled': 't4_3',
  'No managed settings': 't4_4',
  'No LSP plugins': 't4_5',
};

// --- Segments ---
const SEGMENTS = [
  { min: 81, segment: 'Top Performer', description: 'Exceptional configuration — leveraging most of Claude Code\'s capabilities' },
  { min: 65, segment: 'Strong', description: 'Well-configured — using advanced features effectively' },
  { min: 45, segment: 'Competent', description: 'Solid foundation — room to leverage more features' },
  { min: 25, segment: 'Developing', description: 'Basic setup — significant features untapped' },
  { min: 0, segment: 'Beginner', description: 'Minimal configuration — most capabilities unused' },
];

/**
 * Determine segment from utilization score.
 * @param {number} score - 0-100
 * @param {number} [_maturityLevel] - unused, kept for API compatibility
 * @returns {{ segment: string, description: string }}
 */
export function determineSegment(score, _maturityLevel) {
  for (const s of SEGMENTS) {
    if (score >= s.min) return { segment: s.segment, description: s.description };
  }
  return SEGMENTS[SEGMENTS.length - 1];
}

// --- Area scoring ---
const SCANNER_AREA_MAP = {
  CML: 'CLAUDE.md',
  SET: 'Settings',
  HKV: 'Hooks',
  RUL: 'Rules',
  MCP: 'MCP',
  IMP: 'Imports',
  CNF: 'Conflicts',
  GAP: 'Feature Coverage',
};

/**
 * Score per config area from scanner results.
 * @param {object[]} scannerResults - Array of scanner result objects from envelope.scanners
 * @returns {{ areas: Array<{ name: string, grade: string, score: number, findingCount: number }>, overallGrade: string }}
 */
export function scoreByArea(scannerResults) {
  const areas = [];

  for (const result of scannerResults) {
    const name = SCANNER_AREA_MAP[result.scanner] || result.scanner;
    const findingCount = result.findings.length;

    let score;
    if (result.scanner === 'GAP') {
      // Feature coverage: utilization-based
      const util = calculateUtilization(result.findings);
      score = util.score;
    } else {
      // Quality-based: fewer findings = higher pass rate
      // Use a reasonable max checks per scanner for pass rate
      const maxChecks = Math.max(findingCount + 5, 10);
      const passRate = ((maxChecks - findingCount) / maxChecks) * 100;
      score = Math.round(passRate);
    }

    const grade = gradeFromPassRate(score);
    areas.push({ name, grade, score, findingCount });
  }

  // Overall grade: quality areas only (exclude GAP — feature coverage is informational, not a quality issue)
  const qualityAreas = areas.filter(a => a.name !== 'Feature Coverage');
  const totalScore = qualityAreas.reduce((sum, a) => sum + a.score, 0);
  const avgScore = qualityAreas.length > 0 ? Math.round(totalScore / qualityAreas.length) : 0;
  const overallGrade = gradeFromPassRate(avgScore);

  return { areas, overallGrade };
}

/**
 * Derive top 3 actions from GAP findings (T1 first, then T2).
 * @param {object[]} gapFindings
 * @returns {string[]}
 */
export function topActions(gapFindings) {
  const tierOrder = ['t1', 't2', 't3', 't4'];
  const sorted = [...gapFindings].sort(
    (a, b) => tierOrder.indexOf(a.category) - tierOrder.indexOf(b.category),
  );
  return sorted.slice(0, 3).map(f => f.recommendation);
}

/**
 * Generate a terminal-friendly scorecard string (v2 format — kept for backward compat).
 * @param {{ areas: Array<{ name: string, grade: string, score: number }>, overallGrade: string }} areaScores
 * @param {{ score: number, overhang: number }} utilization
 * @param {{ level: number, name: string }} maturity
 * @param {{ segment: string }} segment
 * @param {string[]} actions
 * @returns {string}
 * @deprecated Use generateHealthScorecard for v3+ terminal output
 */
export function generateScorecard(areaScores, utilization, maturity, segment, actions) {
  // Bug fix: exclude GAP from displayed avgScore (was inconsistent with overallGrade)
  const qualityAreas = areaScores.areas.filter(a => a.name !== 'Feature Coverage');
  const avgScore = qualityAreas.length > 0
    ? Math.round(qualityAreas.reduce((s, a) => s + a.score, 0) / qualityAreas.length)
    : 0;

  const lines = [];
  lines.push('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');
  lines.push(' Config-Audit Posture Score');
  lines.push('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');
  lines.push('');
  lines.push(` Overall: ${areaScores.overallGrade} (${avgScore}/100)   Maturity: Level ${maturity.level} (${maturity.name})`);
  lines.push(` Segment: ${segment.segment}   Utilization: ${utilization.score}%`);
  lines.push('');
  lines.push(' Area Scores');
  lines.push(' ───────────');

  // Format areas in 2-column layout
  const areas = areaScores.areas;
  for (let i = 0; i < areas.length; i += 2) {
    const left = areas[i];
    const right = areas[i + 1];
    const leftStr = ` ${left.name} ${'.'.repeat(Math.max(1, 20 - left.name.length))} ${left.grade} (${left.score})`;
    if (right) {
      const rightStr = `${right.name} ${'.'.repeat(Math.max(1, 20 - right.name.length))} ${right.grade} (${right.score})`;
      lines.push(`${leftStr.padEnd(35)}${rightStr}`);
    } else {
      lines.push(leftStr);
    }
  }

  if (actions.length > 0) {
    lines.push('');
    lines.push(' Top 3 Actions');
    lines.push(' ─────────────');
    for (let i = 0; i < actions.length; i++) {
      lines.push(` ${i + 1}. ${actions[i]}`);
    }
  }

  lines.push('');
  lines.push('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');

  return lines.join('\n');
}

/**
 * Generate a v3 health-focused terminal scorecard.
 * Shows only the 7 quality areas — no utilization, maturity, or segment.
 * @param {{ areas: Array<{ name: string, grade: string, score: number }>, overallGrade: string }} areaScores
 * @param {number} opportunityCount - Number of GAP findings (shown as opportunity count)
 * @returns {string}
 */
export function generateHealthScorecard(areaScores, opportunityCount) {
  const qualityAreas = areaScores.areas.filter(a => a.name !== 'Feature Coverage');
  const avgScore = qualityAreas.length > 0
    ? Math.round(qualityAreas.reduce((s, a) => s + a.score, 0) / qualityAreas.length)
    : 0;

  const lines = [];
  lines.push('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');
  lines.push(' Config-Audit Health Score');
  lines.push('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');
  lines.push('');
  lines.push(` Health: ${areaScores.overallGrade} (${avgScore}/100)   ${qualityAreas.length} areas scanned`);
  lines.push('');
  lines.push(' Area Scores');
  lines.push(' ───────────');

  // Format areas in 2-column layout (quality areas only)
  for (let i = 0; i < qualityAreas.length; i += 2) {
    const left = qualityAreas[i];
    const right = qualityAreas[i + 1];
    const leftStr = ` ${left.name} ${'.'.repeat(Math.max(1, 20 - left.name.length))} ${left.grade} (${left.score})`;
    if (right) {
      const rightStr = `${right.name} ${'.'.repeat(Math.max(1, 20 - right.name.length))} ${right.grade} (${right.score})`;
      lines.push(`${leftStr.padEnd(35)}${rightStr}`);
    } else {
      lines.push(leftStr);
    }
  }

  if (opportunityCount > 0) {
    lines.push('');
    lines.push(` ${opportunityCount} ${opportunityCount === 1 ? 'opportunity' : 'opportunities'} available — run /config-audit feature-gap for recommendations`);
  }

  lines.push('');
  lines.push('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');

  return lines.join('\n');
}

export { TITLE_TO_ID, TIER_WEIGHTS, TIER_COUNTS, MAX_WEIGHTED, MATURITY_LEVELS, SEGMENTS };
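As a sanity check on the weighting above, here is a standalone sketch of the utilization math with the tier constants restated locally rather than imported, so it runs outside the plugin:

```javascript
// Tier weights and dimension counts restated from scoring.mjs above.
const TIER_WEIGHTS = { t1: 3, t2: 2, t3: 1, t4: 1 };
const TIER_COUNTS = { t1: 5, t2: 7, t3: 8, t4: 5 };
const MAX_WEIGHTED = 42; // 5*3 + 7*2 + 8*1 + 5*1

function utilization(gapFindings) {
  // Count gaps per tier, then weight what is NOT a gap.
  const gaps = { t1: 0, t2: 0, t3: 0, t4: 0 };
  for (const f of gapFindings) {
    if (f.category in gaps) gaps[f.category]++;
  }
  let presentWeight = 0;
  for (const [tier, count] of Object.entries(TIER_COUNTS)) {
    presentWeight += (count - gaps[tier]) * TIER_WEIGHTS[tier];
  }
  const score = Math.round((presentWeight / MAX_WEIGHTED) * 100);
  return { score, overhang: 100 - score };
}

// Two Tier 1 gaps (weight 3 each): 42 - 6 = 36 present → round(36/42 * 100) = 86
const util = utilization([
  { category: 't1', title: 'No hooks configured' },
  { category: 't1', title: 'No MCP servers configured' },
]);
```

Note the asymmetry this weighting creates: a single Tier 1 gap costs three times as much utilization as a Tier 3 or Tier 4 gap.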
75
plugins/config-audit/scanners/lib/severity.mjs
Normal file

@@ -0,0 +1,75 @@
/**
 * Severity constants, risk scoring, and verdict logic for config-audit scanners.
 * Zero external dependencies.
 */

export const SEVERITY = Object.freeze({
  critical: 'critical',
  high: 'high',
  medium: 'medium',
  low: 'low',
  info: 'info',
});

const WEIGHTS = { critical: 25, high: 10, medium: 4, low: 1, info: 0 };

/**
 * Calculate a 0-100 risk score from severity counts.
 * @param {{ critical?: number, high?: number, medium?: number, low?: number, info?: number }} counts
 * @returns {number}
 */
export function riskScore(counts) {
  let score = 0;
  for (const [sev, weight] of Object.entries(WEIGHTS)) {
    score += (counts[sev] || 0) * weight;
  }
  return Math.min(score, 100);
}

/**
 * Determine overall verdict from severity counts.
 * @param {{ critical?: number, high?: number, medium?: number, low?: number, info?: number }} counts
 * @returns {'FAIL' | 'WARNING' | 'PASS'}
 */
export function verdict(counts) {
  const score = riskScore(counts);
  if ((counts.critical || 0) >= 1 || score >= 61) return 'FAIL';
  if ((counts.high || 0) >= 1 || score >= 21) return 'WARNING';
  return 'PASS';
}

/**
 * Map a risk score to a human-readable band.
 * @param {number} score
 * @returns {'Low' | 'Medium' | 'High' | 'Critical' | 'Extreme'}
 */
export function riskBand(score) {
  if (score <= 10) return 'Low';
  if (score <= 30) return 'Medium';
  if (score <= 60) return 'High';
  if (score <= 80) return 'Critical';
  return 'Extreme';
}

/**
 * Grade from a quality pass rate (0-100%).
 * @param {number} passRate - 0-100
 * @returns {'A' | 'B' | 'C' | 'D' | 'F'}
 */
export function gradeFromPassRate(passRate) {
  if (passRate >= 90) return 'A';
  if (passRate >= 75) return 'B';
  if (passRate >= 60) return 'C';
  if (passRate >= 40) return 'D';
  return 'F';
}

/** Config audit quality categories */
export const QUALITY_CATEGORIES = Object.freeze({
  STRUCTURE: 'Structure & Format',
  CONTENT: 'Content Quality',
  HIERARCHY: 'Hierarchy & Scope',
  SECURITY: 'Security',
  FEATURES: 'Feature Utilization',
  COHERENCE: 'Cross-file Coherence',
});
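A standalone sketch of the weighting and verdict thresholds above (constants restated rather than imported). Note that a single high finding forces at least WARNING even if the numeric score stays below 21, and a single critical forces FAIL:

```javascript
// Severity weights restated from severity.mjs above.
const WEIGHTS = { critical: 25, high: 10, medium: 4, low: 1, info: 0 };

function riskScore(counts) {
  let score = 0;
  for (const [sev, weight] of Object.entries(WEIGHTS)) {
    score += (counts[sev] || 0) * weight;
  }
  return Math.min(score, 100); // capped at 100
}

function verdict(counts) {
  const score = riskScore(counts);
  if ((counts.critical || 0) >= 1 || score >= 61) return 'FAIL';
  if ((counts.high || 0) >= 1 || score >= 21) return 'WARNING';
  return 'PASS';
}

// 2 high + 3 medium = 2*10 + 3*4 = 32 → WARNING (high present, score below 61)
const counts = { high: 2, medium: 3 };
const score = riskScore(counts);
const v = verdict(counts);
```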
74
plugins/config-audit/scanners/lib/string-utils.mjs
Normal file

@@ -0,0 +1,74 @@
/**
 * String utilities for config-audit scanners.
 * Zero external dependencies.
 */

/**
 * Count lines in a string.
 * @param {string} s
 * @returns {number}
 */
export function lineCount(s) {
  if (!s) return 0;
  return s.split('\n').length;
}

/**
 * Truncate a string to maxLen chars with ellipsis.
 * @param {string} s
 * @param {number} [maxLen=100]
 * @returns {string}
 */
export function truncate(s, maxLen = 100) {
  if (!s || s.length <= maxLen) return s || '';
  return s.slice(0, maxLen - 3) + '...';
}

/**
 * Check if two strings have >threshold% content similarity (word overlap).
 * @param {string} a
 * @param {string} b
 * @param {number} [threshold=0.8]
 * @returns {boolean}
 */
export function isSimilar(a, b, threshold = 0.8) {
  const wordsA = new Set(a.toLowerCase().split(/\s+/).filter(w => w.length > 2));
  const wordsB = new Set(b.toLowerCase().split(/\s+/).filter(w => w.length > 2));
  if (wordsA.size === 0 || wordsB.size === 0) return false;
  let overlap = 0;
  for (const w of wordsA) {
    if (wordsB.has(w)) overlap++;
  }
  const similarity = overlap / Math.min(wordsA.size, wordsB.size);
  return similarity >= threshold;
}

/**
 * Extract all key-like patterns from a settings.json or similar config.
 * @param {object} obj
 * @param {string} [prefix='']
 * @returns {string[]}
 */
export function extractKeys(obj, prefix = '') {
  const keys = [];
  for (const [key, value] of Object.entries(obj)) {
    const fullKey = prefix ? `${prefix}.${key}` : key;
    keys.push(fullKey);
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      keys.push(...extractKeys(value, fullKey));
    }
  }
  return keys;
}

/**
 * Normalize a file path for comparison (resolve ~, handle trailing slashes).
 * @param {string} p
 * @returns {string}
 */
export function normalizePath(p) {
  const home = process.env.HOME || process.env.USERPROFILE || '';
  let normalized = p.replace(/^~/, home);
  normalized = normalized.replace(/[/\\]+$/, '');
  return normalized;
}
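A sketch of the word-overlap similarity check above, restated locally so it runs standalone. One property worth noting: because it divides by the smaller word set, a short string fully contained in a longer one scores 1.0:

```javascript
// Word-overlap similarity, as in string-utils.mjs above: lowercase, split on
// whitespace, drop words of 1-2 chars, divide overlap by the SMALLER set.
function isSimilar(a, b, threshold = 0.8) {
  const wordsA = new Set(a.toLowerCase().split(/\s+/).filter(w => w.length > 2));
  const wordsB = new Set(b.toLowerCase().split(/\s+/).filter(w => w.length > 2));
  if (wordsA.size === 0 || wordsB.size === 0) return false;
  let overlap = 0;
  for (const w of wordsA) {
    if (wordsB.has(w)) overlap++;
  }
  return overlap / Math.min(wordsA.size, wordsB.size) >= threshold;
}

// b is a strict subset of a's words, so similarity = 4/4 = 1.0
const a = 'always run tests before committing changes';
const b = 'run tests before committing';
```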
154
plugins/config-audit/scanners/lib/suppression.mjs
Normal file

@@ -0,0 +1,154 @@
/**
 * Suppression engine for config-audit.
 * Lets users suppress known false positives via .config-audit-ignore files.
 * Supports exact IDs (CA-CML-001) and glob patterns (CA-SET-*).
 * Zero external dependencies.
 */

import { readFile } from 'node:fs/promises';
import { join } from 'node:path';
import { homedir } from 'node:os';

/**
 * Load suppressions from .config-audit-ignore files.
 * Searches targetPath first, then ~/.config-audit/.
 * Project-level file takes precedence (loaded first).
 * @param {string} targetPath - Project root to search
 * @returns {Promise<{ suppressions: Array<{ pattern: string, comment: string }>, source: string }>}
 */
export async function loadSuppressions(targetPath) {
  const sources = [
    { path: join(targetPath, '.config-audit-ignore'), label: 'project' },
    { path: join(homedir(), '.config-audit', '.config-audit-ignore'), label: 'global' },
  ];

  for (const src of sources) {
    try {
      const content = await readFile(src.path, 'utf-8');
      const suppressions = parseIgnoreFile(content);
      return { suppressions, source: src.label };
    } catch {
      // File doesn't exist — try next
    }
  }

  return { suppressions: [], source: 'none' };
}

/**
 * Parse a .config-audit-ignore file into suppression entries.
 * @param {string} content - File content
 * @returns {Array<{ pattern: string, comment: string }>}
 */
export function parseIgnoreFile(content) {
  const suppressions = [];

  for (const rawLine of content.split('\n')) {
    const line = rawLine.trim();

    // Skip empty lines and comment-only lines
    if (!line || line.startsWith('#')) continue;

    // Split on first # for inline comment
    const hashIdx = line.indexOf('#');
    let pattern, comment;
    if (hashIdx > 0) {
      pattern = line.slice(0, hashIdx).trim();
      comment = line.slice(hashIdx + 1).trim();
    } else {
      pattern = line;
      comment = '';
    }

    // Validate pattern looks like a finding ID or glob
    if (/^CA-[A-Z]{2,4}[-*\d]+/.test(pattern) || /^CA-[A-Z]{2,4}-\*$/.test(pattern)) {
      suppressions.push({ pattern, comment });
    }
  }

  return suppressions;
}

/**
 * Apply suppressions to a findings array.
 * @param {object[]} findings - Array of finding objects with .id
 * @param {Array<{ pattern: string, comment: string }>} suppressions
 * @returns {{ active: object[], suppressed: object[] }}
 */
export function applySuppressions(findings, suppressions) {
  if (!suppressions || suppressions.length === 0) {
    return { active: [...findings], suppressed: [] };
  }

  const active = [];
  const suppressed = [];

  for (const f of findings) {
    if (isMatchedByAny(f.id, suppressions)) {
      suppressed.push(f);
    } else {
      active.push(f);
    }
  }

  return { active, suppressed };
}

/**
 * Check if a finding ID matches any suppression pattern.
 * @param {string} id - Finding ID (e.g. CA-CML-001)
 * @param {Array<{ pattern: string }>} suppressions
 * @returns {boolean}
 */
function isMatchedByAny(id, suppressions) {
  for (const s of suppressions) {
    if (matchPattern(id, s.pattern)) return true;
  }
  return false;
}

/**
 * Match a finding ID against a suppression pattern.
 * Supports exact match and glob-style CA-XXX-* patterns.
 * @param {string} id - e.g. "CA-CML-001"
 * @param {string} pattern - e.g. "CA-CML-001" or "CA-CML-*"
 * @returns {boolean}
 */
function matchPattern(id, pattern) {
  // Exact match
  if (id === pattern) return true;

  // Glob: CA-XXX-* matches any CA-XXX-NNN
  if (pattern.endsWith('-*')) {
    const prefix = pattern.slice(0, -1); // "CA-XXX-"
    return id.startsWith(prefix);
  }

  return false;
}

/**
 * Format a human-readable suppression summary line.
 * @param {object[]} suppressed - Array of suppressed findings
 * @returns {string}
 */
export function formatSuppressionSummary(suppressed) {
  if (!suppressed || suppressed.length === 0) {
    return '0 findings suppressed';
  }

  // Group by scanner prefix pattern
  const groups = new Map();
  for (const f of suppressed) {
    // Extract prefix: CA-CML-001 → CA-CML
    const prefix = f.id.replace(/-\d+$/, '');
    groups.set(prefix, (groups.get(prefix) || 0) + 1);
  }

  const parts = [];
  for (const [prefix, count] of groups) {
    parts.push(`${count} \u00d7 ${prefix}-*`);
  }

  return `${suppressed.length} finding(s) suppressed (${parts.join(', ')})`;
}
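The matching and partitioning logic above, restated as a minimal standalone sketch (not imported from the module), showing how one glob pattern splits a findings list:

```javascript
// Exact-ID and CA-XXX-* glob matching, as in suppression.mjs above.
function matchPattern(id, pattern) {
  if (id === pattern) return true;
  if (pattern.endsWith('-*')) {
    // "CA-CML-*" → prefix "CA-CML-", matches CA-CML-001, CA-CML-002, ...
    return id.startsWith(pattern.slice(0, -1));
  }
  return false;
}

function applySuppressions(findings, suppressions) {
  const active = [];
  const suppressed = [];
  for (const f of findings) {
    const hit = suppressions.some(s => matchPattern(f.id, s.pattern));
    (hit ? suppressed : active).push(f);
  }
  return { active, suppressed };
}

const findings = [{ id: 'CA-CML-001' }, { id: 'CA-CML-002' }, { id: 'CA-SET-003' }];
const { active, suppressed } = applySuppressions(findings, [{ pattern: 'CA-CML-*' }]);
// suppressed: both CA-CML findings; active: only CA-SET-003
```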
182
plugins/config-audit/scanners/lib/yaml-parser.mjs
Normal file

@@ -0,0 +1,182 @@
/**
 * Regex-based YAML frontmatter parser for Claude Code .md files.
 * Handles YAML frontmatter (--- delimited) and basic YAML parsing.
 * Zero external dependencies.
 */

/**
 * Parse YAML frontmatter from markdown content.
 * @param {string} content
 * @returns {{ frontmatter: object | null, body: string, bodyStartLine: number }}
 */
export function parseFrontmatter(content) {
  const match = content.match(/^---\r?\n([\s\S]*?)(?:\r?\n)?---(?:\r?\n|$)/);
  if (!match) {
    return { frontmatter: null, body: content, bodyStartLine: 1 };
  }

  const raw = match[1];
  const bodyStartLine = raw.split('\n').length + 3; // 2 for --- lines + 1-based
  const body = content.slice(match[0].length);
  const frontmatter = parseSimpleYaml(raw);

  return { frontmatter, body, bodyStartLine };
}

/**
 * Parse simple YAML key-value pairs (no nesting beyond arrays).
 * @param {string} yaml
 * @returns {object}
 */
export function parseSimpleYaml(yaml) {
  const result = {};
  const lines = yaml.split('\n');
  let currentKey = null;
  let multiLineValue = '';
  let inMultiLine = false;

  for (const line of lines) {
    // Skip comments and empty lines
    if (line.trim().startsWith('#') || line.trim() === '') {
      if (inMultiLine) multiLineValue += '\n';
      continue;
    }

    // Key-value pair
    const kvMatch = line.match(/^(\w[\w-]*):\s*(.*)/);
    if (kvMatch && !inMultiLine) {
      if (currentKey && multiLineValue) {
        result[normalizeKey(currentKey)] = multiLineValue.trim();
      }

      currentKey = kvMatch[1];
      const value = kvMatch[2].trim();

      if (value === '|' || value === '>') {
        inMultiLine = true;
        multiLineValue = '';
        continue;
      }

      result[normalizeKey(currentKey)] = parseValue(value);
      currentKey = null;
      continue;
    }

    // Multi-line continuation
    if (inMultiLine) {
      if (line.match(/^\s+/)) {
        multiLineValue += (multiLineValue ? '\n' : '') + line.trim();
      } else {
        result[normalizeKey(currentKey)] = multiLineValue.trim();
        inMultiLine = false;
        multiLineValue = '';
        // Re-process this line as a new key
        const reMatch = line.match(/^(\w[\w-]*):\s*(.*)/);
        if (reMatch) {
          currentKey = reMatch[1];
          result[normalizeKey(currentKey)] = parseValue(reMatch[2].trim());
          currentKey = null;
        }
      }
    }
  }

  // Flush remaining multi-line
  if (inMultiLine && currentKey) {
    result[normalizeKey(currentKey)] = multiLineValue.trim();
  }

  // Normalize arrays for known list fields
  for (const field of ['allowed_tools', 'tools', 'paths', 'globs']) {
    if (typeof result[field] === 'string') {
      result[field] = result[field].split(',').map(s => s.trim()).filter(Boolean);
    }
  }

  return result;
}

/**
 * Parse a YAML value string.
 */
function parseValue(str) {
  if (str === '' || str === '~' || str === 'null') return null;
  if (str === 'true') return true;
  if (str === 'false') return false;
  if (/^\d+$/.test(str)) return parseInt(str, 10);
  if (/^\d+\.\d+$/.test(str)) return parseFloat(str);

  // Inline array: [a, b, c]
  if (str.startsWith('[') && str.endsWith(']')) {
    return str.slice(1, -1).split(',').map(s => {
      const v = s.trim();
      return v.replace(/^["']|["']$/g, '');
    }).filter(Boolean);
  }

  // Quoted string
  if ((str.startsWith('"') && str.endsWith('"')) || (str.startsWith("'") && str.endsWith("'"))) {
    return str.slice(1, -1);
  }

  return str;
}

/**
 * Normalize key: hyphens to underscores.
 */
function normalizeKey(key) {
  return key.replace(/-/g, '_');
}

/**
 * Parse a JSON file content. Returns null on error.
 * @param {string} content
 * @returns {object | null}
 */
export function parseJson(content) {
  try {
    return JSON.parse(content);
  } catch {
    return null;
  }
}

/**
 * Find @import references in CLAUDE.md content.
 * @param {string} content
 * @returns {{ path: string, line: number }[]}
 */
export function findImports(content) {
  const imports = [];
  const lines = content.split('\n');
  for (let i = 0; i < lines.length; i++) {
    const match = lines[i].match(/^@(.+)$/);
    if (match) {
      imports.push({ path: match[1].trim(), line: i + 1 });
    }
  }
  return imports;
}

/**
 * Extract markdown sections (## headings) from content.
 * @param {string} content
 * @returns {{ heading: string, level: number, line: number }[]}
 */
export function extractSections(content) {
  const sections = [];
  const lines = content.split('\n');
  for (let i = 0; i < lines.length; i++) {
    const match = lines[i].match(/^(#{1,6})\s+(.+)/);
    if (match) {
      sections.push({
        heading: match[2].trim(),
        level: match[1].length,
        line: i + 1,
      });
    }
  }
  return sections;
}
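The frontmatter regex above does most of the work in this module, so it is worth checking in isolation. A minimal sketch (the document content here is illustrative, not from the plugin):

```javascript
// The same delimiter regex as parseFrontmatter above: lazy capture between
// the opening and closing --- fences, tolerant of CRLF line endings.
const FM_RE = /^---\r?\n([\s\S]*?)(?:\r?\n)?---(?:\r?\n|$)/;

const doc = '---\nname: my-skill\ndisabled: false\n---\n# Body\n';
const m = doc.match(FM_RE);
const raw = m[1];                    // the YAML between the fences
const body = doc.slice(m[0].length); // everything after the closing fence
```

Because the capture group is lazy, the first `---` line after the opener terminates the frontmatter, which is the behavior `parseFrontmatter` relies on.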
153
plugins/config-audit/scanners/mcp-config-validator.mjs
Normal file

@@ -0,0 +1,153 @@
|
||||||
|
/**
 * MCP Scanner — MCP Configuration Validator
 * Validates .mcp.json files: server types, trust levels, env vars, unknown fields.
 * Finding IDs: CA-MCP-NNN
 */

import { readTextFile } from './lib/file-discovery.mjs';
import { finding, scannerResult } from './lib/output.mjs';
import { SEVERITY } from './lib/severity.mjs';
import { parseJson } from './lib/yaml-parser.mjs';
import { truncate } from './lib/string-utils.mjs';

const SCANNER = 'MCP';

const VALID_SERVER_TYPES = new Set(['stdio', 'http', 'sse']);
const VALID_TRUST_LEVELS = new Set(['workspace', 'trusted', 'untrusted']);
const VALID_SERVER_FIELDS = new Set([
  'type', 'command', 'args', 'env', 'url', 'headers', 'timeout', 'trust',
]);

const ENV_VAR_PATTERN = /\$\{([^}]+)\}/g;

/**
 * Scan all .mcp.json files discovered.
 * @param {string} targetPath
 * @param {{ files: import('./lib/file-discovery.mjs').ConfigFile[] }} discovery
 * @returns {Promise<object>}
 */
export async function scan(targetPath, discovery) {
  const start = Date.now();
  const mcpFiles = discovery.files.filter(f => f.type === 'mcp-json');
  const findings = [];
  let filesScanned = 0;

  if (mcpFiles.length === 0) {
    return scannerResult(SCANNER, 'skipped', [], 0, Date.now() - start);
  }

  for (const file of mcpFiles) {
    const content = await readTextFile(file.absPath);
    if (!content) continue;
    filesScanned++;

    const parsed = parseJson(content);
    if (!parsed) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.critical,
        title: 'Invalid JSON in MCP config',
        description: `${file.relPath}: Failed to parse as JSON.`,
        file: file.absPath,
        recommendation: 'Fix JSON syntax errors. Use a JSON validator to check the file.',
      }));
      continue;
    }

    const servers = parsed.mcpServers || parsed;
    if (typeof servers !== 'object' || Array.isArray(servers)) continue;

    for (const [name, config] of Object.entries(servers)) {
      if (!config || typeof config !== 'object' || Array.isArray(config)) continue;

      // Check server type
      if (config.type && !VALID_SERVER_TYPES.has(config.type)) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.high,
          title: 'Unknown MCP server type',
          description: `${file.relPath}: Server "${name}" has unknown type "${config.type}".`,
          file: file.absPath,
          evidence: `type: "${config.type}"`,
          recommendation: `Use one of: stdio, http, sse. Got "${config.type}".`,
        }));
      }

      // SSE → HTTP recommendation
      if (config.type === 'sse') {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.info,
          title: 'SSE server type — consider HTTP',
          description: `${file.relPath}: Server "${name}" uses "sse" type. The "http" type is the current standard.`,
          file: file.absPath,
          evidence: `type: "sse"`,
          recommendation: 'Migrate from "sse" to "http" type for better compatibility.',
        }));
      }

      // Check trust level
      if (!config.trust) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.medium,
          title: 'Missing trust level',
          description: `${file.relPath}: Server "${name}" has no trust level configured.`,
          file: file.absPath,
          recommendation: 'Add "trust": "workspace"|"trusted"|"untrusted" to explicitly set the trust level.',
        }));
      } else if (!VALID_TRUST_LEVELS.has(config.trust)) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.high,
          title: 'Invalid trust level',
          description: `${file.relPath}: Server "${name}" has invalid trust level "${config.trust}".`,
          file: file.absPath,
          evidence: `trust: "${config.trust}"`,
          recommendation: 'Use one of: workspace, trusted, untrusted.',
        }));
      }

      // Check for env var references in args without env block
      if (Array.isArray(config.args)) {
        for (const arg of config.args) {
          if (typeof arg !== 'string') continue;
          let match;
          ENV_VAR_PATTERN.lastIndex = 0;
          while ((match = ENV_VAR_PATTERN.exec(arg)) !== null) {
            const varName = match[1];
            const hasEnvBlock = config.env && typeof config.env === 'object' && varName in config.env;
            if (!hasEnvBlock) {
              findings.push(finding({
                scanner: SCANNER,
                severity: SEVERITY.medium,
                title: 'Unreferenced env var in args',
                description: `${file.relPath}: Server "${name}" references \${${varName}} in args but has no env block defining it.`,
                file: file.absPath,
                evidence: truncate(arg, 80),
                recommendation: `Add an "env" block with "${varName}" or remove the variable reference.`,
              }));
            }
          }
        }
      }

      // Check for unknown fields
      for (const key of Object.keys(config)) {
        if (!VALID_SERVER_FIELDS.has(key)) {
          findings.push(finding({
            scanner: SCANNER,
            severity: SEVERITY.medium,
            title: 'Unknown MCP server field',
            description: `${file.relPath}: Server "${name}" has unknown field "${key}".`,
            file: file.absPath,
            evidence: `${key}: ${truncate(JSON.stringify(config[key]), 60)}`,
            recommendation: `Remove or correct "${key}". Valid fields: ${[...VALID_SERVER_FIELDS].join(', ')}.`,
          }));
        }
      }
    }
  }

  return scannerResult(SCANNER, 'ok', findings, filesScanned, Date.now() - start);
}
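For reference, a minimal `.mcp.json` that passes every check in this validator — valid type, explicit trust level, an env block covering the variable referenced in args, and only recognized fields. The server name, command, and variable name are invented for illustration:

```json
{
  "mcpServers": {
    "example-server": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "example-mcp-server", "--token", "${EXAMPLE_TOKEN}"],
      "env": { "EXAMPLE_TOKEN": "" },
      "trust": "workspace"
    }
  }
}
```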
455 plugins/config-audit/scanners/plugin-health-scanner.mjs Normal file
@@ -0,0 +1,455 @@
#!/usr/bin/env node

/**
 * PLH Scanner — Plugin Health
 * Validates Claude Code plugin structure, frontmatter, and cross-plugin coherence.
 * Finding IDs: CA-PLH-NNN
 * NOT included in scan-orchestrator — runs independently on plugin directories.
 * Zero external dependencies.
 */

import { readdir, stat, readFile } from 'node:fs/promises';
import { join, basename, resolve } from 'node:path';
import { finding, scannerResult, resetCounter } from './lib/output.mjs';
import { SEVERITY } from './lib/severity.mjs';
import { parseFrontmatter } from './lib/yaml-parser.mjs';

const SCANNER = 'PLH';

const REQUIRED_PLUGIN_JSON_FIELDS = ['name', 'description', 'version'];
const RECOMMENDED_CLAUDE_MD_SECTIONS = ['commands', 'agents', 'hooks'];
// Keys as they appear after yaml-parser normalizeKey (hyphens → underscores)
const REQUIRED_COMMAND_FRONTMATTER = [
  { key: 'name', display: 'name' },
  { key: 'description', display: 'description' },
  { key: 'model', display: 'model' },
  { key: 'allowed_tools', display: 'allowed-tools' },
];
const REQUIRED_AGENT_FRONTMATTER = [
  { key: 'name', display: 'name' },
  { key: 'description', display: 'description' },
  { key: 'model', display: 'model' },
  { key: 'tools', display: 'tools' },
];

/**
 * Discover plugins under a path.
 * Looks for .claude-plugin/plugin.json pattern.
 * @param {string} targetPath
 * @returns {Promise<string[]>} Array of plugin root directories
 */
export async function discoverPlugins(targetPath) {
  const plugins = [];

  // Check if targetPath itself is a plugin
  if (await isPlugin(targetPath)) {
    plugins.push(targetPath);
    return plugins;
  }

  // Look for plugins in subdirectories (marketplace layout: plugins/<name>/)
  try {
    const entries = await readdir(targetPath, { withFileTypes: true });
    for (const entry of entries) {
      if (!entry.isDirectory()) continue;
      const subDir = join(targetPath, entry.name);
      if (await isPlugin(subDir)) {
        plugins.push(subDir);
        continue;
      }
      // Also check one level deeper (plugins/<name>/ layout)
      try {
        const subEntries = await readdir(subDir, { withFileTypes: true });
        for (const subEntry of subEntries) {
          if (!subEntry.isDirectory()) continue;
          const deepDir = join(subDir, subEntry.name);
          if (await isPlugin(deepDir)) {
            plugins.push(deepDir);
          }
        }
      } catch { /* skip */ }
    }
  } catch { /* skip */ }

  return plugins;
}

/**
 * Check if a directory is a Claude Code plugin.
 * @param {string} dir
 * @returns {Promise<boolean>}
 */
async function isPlugin(dir) {
  try {
    await stat(join(dir, '.claude-plugin', 'plugin.json'));
    return true;
  } catch {
    return false;
  }
}

/**
 * Scan a single plugin for health issues.
 * @param {string} pluginDir - Plugin root directory
 * @returns {Promise<{ name: string, findings: object[], commandCount: number, agentCount: number }>}
 */
async function scanSinglePlugin(pluginDir) {
  const findings = [];
  const pluginName = basename(pluginDir);
  let commandCount = 0;
  let agentCount = 0;

  // 1. Validate plugin.json
  const pluginJsonPath = join(pluginDir, '.claude-plugin', 'plugin.json');
  try {
    const content = await readFile(pluginJsonPath, 'utf-8');
    let parsed;
    try {
      parsed = JSON.parse(content);
    } catch {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.critical,
        title: 'Invalid plugin.json',
        description: `plugin.json is not valid JSON in ${pluginName}`,
        file: pluginJsonPath,
      }));
      parsed = null;
    }

    if (parsed) {
      for (const field of REQUIRED_PLUGIN_JSON_FIELDS) {
        if (!parsed[field]) {
          findings.push(finding({
            scanner: SCANNER,
            severity: SEVERITY.high,
            title: `Missing required field in plugin.json: ${field}`,
            description: `Plugin "${pluginName}" plugin.json is missing required field "${field}"`,
            file: pluginJsonPath,
            recommendation: `Add "${field}" to plugin.json`,
          }));
        }
      }
    }
  } catch {
    findings.push(finding({
      scanner: SCANNER,
      severity: SEVERITY.critical,
      title: 'Missing plugin.json',
      description: `No .claude-plugin/plugin.json found in ${pluginName}`,
      file: pluginDir,
      recommendation: 'Create .claude-plugin/plugin.json with name, description, version',
    }));
  }

  // 2. Validate CLAUDE.md
  const claudeMdPath = join(pluginDir, 'CLAUDE.md');
  try {
    const content = await readFile(claudeMdPath, 'utf-8');
    const lower = content.toLowerCase();

    for (const section of RECOMMENDED_CLAUDE_MD_SECTIONS) {
      // Look for markdown table header or section header
      const hasSection = lower.includes(`## ${section}`) ||
        lower.includes(`| ${section}`) ||
        lower.includes(`|${section}`);
      if (!hasSection) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.medium,
          title: `CLAUDE.md missing ${section} section`,
          description: `Plugin "${pluginName}" CLAUDE.md should have a ${section} table or section`,
          file: claudeMdPath,
          recommendation: `Add a "## ${section.charAt(0).toUpperCase() + section.slice(1)}" section with a table`,
        }));
      }
    }
  } catch {
    findings.push(finding({
      scanner: SCANNER,
      severity: SEVERITY.high,
      title: 'Missing CLAUDE.md',
      description: `Plugin "${pluginName}" has no CLAUDE.md`,
      file: pluginDir,
      recommendation: 'Create CLAUDE.md with Commands, Agents, and Hooks tables',
    }));
  }

  // 3. Validate commands frontmatter
  const commandsDir = join(pluginDir, 'commands');
  try {
    const entries = await readdir(commandsDir);
    const mdFiles = entries.filter(f => f.endsWith('.md'));
    commandCount = mdFiles.length;

    for (const file of mdFiles) {
      const filePath = join(commandsDir, file);
      const content = await readFile(filePath, 'utf-8');
      const { frontmatter } = parseFrontmatter(content);

      if (!frontmatter) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.high,
          title: 'Command missing frontmatter',
          description: `Command "${file}" in plugin "${pluginName}" has no frontmatter`,
          file: filePath,
          recommendation: 'Add YAML frontmatter with name, description, model',
        }));
        continue;
      }

      for (const { key, display } of REQUIRED_COMMAND_FRONTMATTER) {
        if (!frontmatter[key]) {
          findings.push(finding({
            scanner: SCANNER,
            severity: SEVERITY.medium,
            title: `Command missing frontmatter field: ${display}`,
            description: `Command "${file}" in plugin "${pluginName}" is missing "${display}" in frontmatter`,
            file: filePath,
            recommendation: `Add "${display}" to frontmatter`,
          }));
        }
      }
    }
  } catch { /* no commands dir */ }

  // 4. Validate agents frontmatter
  const agentsDir = join(pluginDir, 'agents');
  try {
    const entries = await readdir(agentsDir);
    const mdFiles = entries.filter(f => f.endsWith('.md'));
    agentCount = mdFiles.length;

    for (const file of mdFiles) {
      const filePath = join(agentsDir, file);
      const content = await readFile(filePath, 'utf-8');
      const { frontmatter } = parseFrontmatter(content);

      if (!frontmatter) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.high,
          title: 'Agent missing frontmatter',
          description: `Agent "${file}" in plugin "${pluginName}" has no frontmatter`,
          file: filePath,
          recommendation: 'Add YAML frontmatter with name, description, model, tools',
        }));
        continue;
      }

      for (const { key, display } of REQUIRED_AGENT_FRONTMATTER) {
        if (!frontmatter[key]) {
          findings.push(finding({
            scanner: SCANNER,
            severity: SEVERITY.medium,
            title: `Agent missing frontmatter field: ${display}`,
            description: `Agent "${file}" in plugin "${pluginName}" is missing "${display}" in frontmatter`,
            file: filePath,
            recommendation: `Add "${display}" to frontmatter`,
          }));
        }
      }
    }
  } catch { /* no agents dir */ }

  // 5. Validate hooks.json (if exists)
  const hooksJsonPath = join(pluginDir, 'hooks', 'hooks.json');
  try {
    const content = await readFile(hooksJsonPath, 'utf-8');
    try {
      const parsed = JSON.parse(content);
      if (!parsed.hooks || typeof parsed.hooks !== 'object') {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.high,
          title: 'Invalid hooks.json structure',
          description: `hooks.json in "${pluginName}" missing "hooks" object`,
          file: hooksJsonPath,
          recommendation: 'hooks.json must have a "hooks" key with event-keyed object',
        }));
      } else if (Array.isArray(parsed.hooks)) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.high,
          title: 'hooks.json uses array instead of object',
          description: `hooks.json "hooks" in "${pluginName}" is an array — must be object with event keys`,
          file: hooksJsonPath,
          recommendation: 'Change hooks from array to object: { "PreToolUse": [...], ... }',
        }));
      }
    } catch {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.high,
        title: 'Invalid hooks.json',
        description: `hooks.json is not valid JSON in "${pluginName}"`,
        file: hooksJsonPath,
      }));
    }
  } catch { /* no hooks.json — fine */ }

  // 6. Check for unknown files in .claude-plugin/
  const pluginMetaDir = join(pluginDir, '.claude-plugin');
  try {
    const entries = await readdir(pluginMetaDir);
    const known = new Set(['plugin.json']);
    for (const entry of entries) {
      if (!known.has(entry)) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.low,
          title: 'Unknown file in .claude-plugin/',
          description: `Unexpected file "${entry}" in .claude-plugin/ of "${pluginName}"`,
          file: join(pluginMetaDir, entry),
          recommendation: 'Only plugin.json should be in .claude-plugin/',
        }));
      }
    }
  } catch { /* skip */ }

  return { name: pluginName, findings, commandCount, agentCount };
}

/**
 * Scan one or more plugins and return aggregated results.
 * @param {string} targetPath - Plugin dir or marketplace root
 * @returns {Promise<object>} Scanner result
 */
export async function scan(targetPath) {
  const start = Date.now();
  resetCounter();

  const pluginDirs = await discoverPlugins(resolve(targetPath));

  if (pluginDirs.length === 0) {
    return scannerResult(SCANNER, 'ok', [
      finding({
        scanner: SCANNER,
        severity: SEVERITY.info,
        title: 'No plugins found',
        description: `No Claude Code plugins found under ${targetPath}`,
        recommendation: 'Ensure plugins have .claude-plugin/plugin.json',
      }),
    ], 0, Date.now() - start);
  }

  const allFindings = [];
  const pluginResults = [];

  for (const dir of pluginDirs) {
    const result = await scanSinglePlugin(dir);
    pluginResults.push(result);
    allFindings.push(...result.findings);
  }

  // Cross-plugin checks: command name conflicts
  const commandNames = new Map(); // name → plugin
  for (let idx = 0; idx < pluginResults.length; idx++) {
    const pr = pluginResults[idx];
    const commandsDir = join(pluginDirs[idx], 'commands');
    try {
      const entries = await readdir(commandsDir);
      for (const file of entries.filter(f => f.endsWith('.md'))) {
        const filePath = join(commandsDir, file);
        const content = await readFile(filePath, 'utf-8');
        const { frontmatter } = parseFrontmatter(content);
        if (frontmatter && frontmatter.name) {
          const cmdName = frontmatter.name;
          if (commandNames.has(cmdName)) {
            allFindings.push(finding({
              scanner: SCANNER,
              severity: SEVERITY.high,
              title: 'Cross-plugin command name conflict',
              description: `Command "${cmdName}" exists in both "${commandNames.get(cmdName)}" and "${pr.name}"`,
              file: filePath,
              recommendation: 'Rename one of the conflicting commands to avoid ambiguity',
            }));
          } else {
            commandNames.set(cmdName, pr.name);
          }
        }
      }
    } catch { /* no commands dir */ }
  }

  return scannerResult(SCANNER, 'ok', allFindings, pluginDirs.length, Date.now() - start);
}

/**
 * Format a plugin health report for terminal output.
 * @param {Array<{ name: string, findings: object[], commandCount: number, agentCount: number }>} pluginResults
 * @param {object[]} crossPluginFindings
 * @returns {string}
 */
export function formatPluginHealthReport(pluginResults, crossPluginFindings) {
  const lines = [];
  lines.push('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');
  lines.push(' Plugin Health Report');
  lines.push('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');
  lines.push('');

  for (const p of pluginResults) {
    const issueCount = p.findings.length;
    const score = Math.max(0, 100 - issueCount * 10);
    const grade = score >= 90 ? 'A' : score >= 75 ? 'B' : score >= 60 ? 'C' : score >= 40 ? 'D' : 'F';
    const padding = '.'.repeat(Math.max(1, 25 - p.name.length));
    lines.push(` ${p.name} ${padding} ${grade} (${score}) ${p.commandCount} commands, ${p.agentCount} agents`);
  }

  lines.push('');

  if (crossPluginFindings.length > 0) {
    lines.push(` Cross-plugin issues (${crossPluginFindings.length}):`);
    for (const f of crossPluginFindings) {
      lines.push(` - [${f.severity}] ${f.title}`);
    }
  } else {
    lines.push(' Cross-plugin issues (0):');
    lines.push(' (none)');
  }

  lines.push('');
  lines.push('━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━');

  return lines.join('\n');
}

// --- CLI entry point ---
async function main() {
  const args = process.argv.slice(2);
  let targetPath = '.';
  let jsonMode = false;

  for (let i = 0; i < args.length; i++) {
    if (args[i] === '--json') {
      jsonMode = true;
    } else if (!args[i].startsWith('-')) {
      targetPath = args[i];
    }
  }

  process.stderr.write(`Plugin Health Scanner v2.1.0\n`);
  process.stderr.write(`Target: ${resolve(targetPath)}\n\n`);

  const result = await scan(targetPath);

  if (jsonMode) {
    process.stdout.write(JSON.stringify(result, null, 2) + '\n');
  } else {
    // Brief summary
    const count = result.findings.length;
    process.stderr.write(`Findings: ${count}\n`);
    for (const f of result.findings) {
      process.stderr.write(` [${f.severity}] ${f.title}\n`);
    }
  }
}

const isDirectRun = process.argv[1] && resolve(process.argv[1]) === resolve(new URL(import.meta.url).pathname);
if (isDirectRun) {
  main().catch(err => {
    process.stderr.write(`Fatal: ${err.message}\n`);
    process.exit(3);
  });
}
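The scorecard grading in `formatPluginHealthReport` above can be sketched standalone, using the same thresholds (10 points deducted per finding, letter grades at 90/75/60/40):

```javascript
// Standalone sketch of the score → grade mapping used in formatPluginHealthReport.
function gradeFor(issueCount) {
  const score = Math.max(0, 100 - issueCount * 10);
  const grade = score >= 90 ? 'A' : score >= 75 ? 'B' : score >= 60 ? 'C' : score >= 40 ? 'D' : 'F';
  return { score, grade };
}

console.log(gradeFor(0));  // { score: 100, grade: 'A' }
console.log(gradeFor(3));  // { score: 70, grade: 'C' }
console.log(gradeFor(12)); // { score: 0, grade: 'F' }
```

Note the floor at zero: any plugin with ten or more findings bottoms out at F (0).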
111 plugins/config-audit/scanners/posture.mjs Normal file
@@ -0,0 +1,111 @@
#!/usr/bin/env node

/**
 * Config-Audit Posture Assessment CLI
 * Runs all scanners + scoring in a single Node.js process.
 * Usage: node posture.mjs <target-path> [--json] [--global] [--output-file path]
 * Zero external dependencies.
 */

import { resolve } from 'node:path';
import { writeFile } from 'node:fs/promises';
import { runAllScanners } from './scan-orchestrator.mjs';
import {
  calculateUtilization,
  determineMaturityLevel,
  determineSegment,
  scoreByArea,
  topActions,
  generateScorecard,
  generateHealthScorecard,
} from './lib/scoring.mjs';

/**
 * Run posture assessment and return structured result.
 * @param {string} targetPath
 * @param {object} [opts]
 * @param {boolean} [opts.includeGlobal=false]
 * @param {boolean} [opts.fullMachine=false] - Scan all known locations across the machine
 * @returns {Promise<object>}
 */
export async function runPosture(targetPath, opts = {}) {
  const envelope = await runAllScanners(targetPath, opts);

  // Extract GAP scanner results
  const gapScanner = envelope.scanners.find(s => s.scanner === 'GAP');
  const gapFindings = gapScanner ? gapScanner.findings : [];

  // Calculate scores
  const utilization = calculateUtilization(gapFindings);
  const maturity = determineMaturityLevel(gapFindings, { files: [] });
  const segment = determineSegment(utilization.score);
  const areaScores = scoreByArea(envelope.scanners);
  const actions = topActions(gapFindings);

  return {
    utilization,
    maturity,
    segment,
    areas: areaScores.areas,
    overallGrade: areaScores.overallGrade,
    topActions: actions,
    opportunityCount: gapFindings.length,
    scannerEnvelope: envelope,
  };
}

// --- CLI entry point ---
async function main() {
  const args = process.argv.slice(2);
  let targetPath = '.';
  let outputFile = null;
  let jsonMode = false;
  let includeGlobal = false;
  let fullMachine = false;

  for (let i = 0; i < args.length; i++) {
    if (args[i] === '--output-file' && args[i + 1]) {
      outputFile = args[++i];
    } else if (args[i] === '--json') {
      jsonMode = true;
    } else if (args[i] === '--global') {
      includeGlobal = true;
    } else if (args[i] === '--full-machine') {
      fullMachine = true;
    } else if (args[i] === '--include-fixtures') {
      // handled below
    } else if (!args[i].startsWith('-')) {
      targetPath = args[i];
    }
  }

  const filterFixtures = !args.includes('--include-fixtures');
  const result = await runPosture(targetPath, { includeGlobal, fullMachine, filterFixtures });

  if (jsonMode) {
    const json = JSON.stringify(result, null, 2);
    process.stdout.write(json + '\n');
  } else {
    // Terminal scorecard (v3 health format)
    const scorecard = generateHealthScorecard(
      { areas: result.areas, overallGrade: result.overallGrade },
      result.opportunityCount,
    );
    process.stderr.write('\n' + scorecard + '\n');
  }

  if (outputFile) {
    const json = JSON.stringify(result, null, 2);
    await writeFile(outputFile, json, 'utf-8');
    process.stderr.write(`\nResults written to ${outputFile}\n`);
  }
}

// Only run CLI if invoked directly
const isDirectRun = process.argv[1] && resolve(process.argv[1]) === resolve(new URL(import.meta.url).pathname);
if (isDirectRun) {
  main().catch(err => {
    process.stderr.write(`Fatal: ${err.message}\n`);
    process.exit(1);
  });
}
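The flag loop in `main()` above follows the common pattern of consuming the next argument for value-taking flags (`--output-file`) via `++i`, while bare words become the target path. A standalone sketch of that loop (extracted, not the module itself):

```javascript
// Standalone sketch of the CLI flag loop in posture.mjs main().
function parseArgs(args) {
  let targetPath = '.';
  let outputFile = null;
  let jsonMode = false;
  for (let i = 0; i < args.length; i++) {
    if (args[i] === '--output-file' && args[i + 1]) {
      outputFile = args[++i]; // consume the flag's value so it isn't treated as a path
    } else if (args[i] === '--json') {
      jsonMode = true;
    } else if (!args[i].startsWith('-')) {
      targetPath = args[i];
    }
  }
  return { targetPath, outputFile, jsonMode };
}

console.log(parseArgs(['--json', '--output-file', 'report.json', 'my-project']));
// → { targetPath: 'my-project', outputFile: 'report.json', jsonMode: true }
```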
166 plugins/config-audit/scanners/rollback-engine.mjs Normal file
@@ -0,0 +1,166 @@
/**
 * Config-Audit Rollback Engine
 * Restores configuration from backup with checksum verification.
 * Zero external dependencies.
 */

import { readFile, writeFile, readdir, stat, rm } from 'node:fs/promises';
import { join } from 'node:path';
import { getBackupDir, parseManifest, checksum } from './lib/backup.mjs';

/**
 * List all available backups.
 * @returns {Promise<{ backups: object[] }>}
 */
export async function listBackups() {
  const backupRoot = getBackupDir();
  const backups = [];

  let entries;
  try {
    entries = await readdir(backupRoot, { withFileTypes: true });
  } catch {
    return { backups: [] };
  }

  for (const entry of entries) {
    if (!entry.isDirectory()) continue;

    const backupPath = join(backupRoot, entry.name);
    const manifestPath = join(backupPath, 'manifest.yaml');

    try {
      const manifestContent = await readFile(manifestPath, 'utf-8');
      const manifest = parseManifest(manifestContent);

      backups.push({
        id: entry.name,
        createdAt: manifest.created_at,
        files: manifest.files.map(f => ({
          originalPath: f.originalPath,
          backupPath: f.backupPath,
          checksum: f.checksum,
          sizeBytes: f.sizeBytes,
        })),
      });
    } catch {
      // Skip backups without valid manifest
      continue;
    }
  }

  // Sort newest first
  backups.sort((a, b) => b.id.localeCompare(a.id));

  return { backups };
}

/**
 * Restore files from a backup.
 * @param {string} backupId
 * @param {object} [opts]
 * @param {boolean} [opts.dryRun=false]
 * @param {boolean} [opts.verify=true]
 * @returns {Promise<{ restored: object[], failed: object[] }>}
 */
export async function restoreBackup(backupId, opts = {}) {
  const verify = opts.verify !== false;
  const backupRoot = getBackupDir();
  const backupPath = join(backupRoot, backupId);
  const manifestPath = join(backupPath, 'manifest.yaml');

  // Read manifest
  let manifestContent;
  try {
    manifestContent = await readFile(manifestPath, 'utf-8');
  } catch {
    throw new Error(`Backup not found: ${backupId}`);
  }

  const manifest = parseManifest(manifestContent);
  const restored = [];
  const failed = [];

  for (const fileEntry of manifest.files) {
    const backupFilePath = join(backupPath, fileEntry.backupPath);

    if (opts.dryRun) {
      restored.push({
        originalPath: fileEntry.originalPath,
        status: 'dry-run',
      });
      continue;
    }

    try {
      // Read backup file
      const content = await readFile(backupFilePath);

      // Verify checksum before restoring
      if (verify) {
        const hash = checksum(content);
        if (hash !== fileEntry.checksum) {
          failed.push({
            originalPath: fileEntry.originalPath,
            status: 'checksum-mismatch',
            error: `Expected ${fileEntry.checksum}, got ${hash}`,
          });
          continue;
        }
      }

      // Write to original path
      await writeFile(fileEntry.originalPath, content);

      // Verify after write
      if (verify) {
        const written = await readFile(fileEntry.originalPath);
        const writtenHash = checksum(written);
        if (writtenHash !== fileEntry.checksum) {
          failed.push({
            originalPath: fileEntry.originalPath,
            status: 'checksum-mismatch',
            error: 'Checksum mismatch after write',
          });
          continue;
        }
      }

      restored.push({
        originalPath: fileEntry.originalPath,
        status: 'restored',
|
||||||
|
});
|
||||||
|
} catch (err) {
|
||||||
|
failed.push({
|
||||||
|
originalPath: fileEntry.originalPath,
|
||||||
|
status: 'failed',
|
||||||
|
error: err.message,
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return { restored, failed };
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Delete a backup directory.
|
||||||
|
* @param {string} backupId
|
||||||
|
* @returns {Promise<{ deleted: boolean, error?: string }>}
|
||||||
|
*/
|
||||||
|
export async function deleteBackup(backupId) {
|
||||||
|
const backupRoot = getBackupDir();
|
||||||
|
const backupPath = join(backupRoot, backupId);
|
||||||
|
|
||||||
|
try {
|
||||||
|
await stat(backupPath);
|
||||||
|
} catch {
|
||||||
|
return { deleted: false, error: `Backup not found: ${backupId}` };
|
||||||
|
}
|
||||||
|
|
||||||
|
try {
|
||||||
|
await rm(backupPath, { recursive: true, force: true });
|
||||||
|
return { deleted: true };
|
||||||
|
} catch (err) {
|
||||||
|
return { deleted: false, error: err.message };
|
||||||
|
}
|
||||||
|
}
|
||||||
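The verify-after-write step in restoreBackup can be exercised in isolation. A minimal, self-contained sketch, assuming checksum() is a SHA-256 hex digest (the helper itself is not shown in this hunk); verifiedWrite is an illustrative name, not part of the plugin:

```javascript
import { createHash } from 'node:crypto';
import { readFile, writeFile } from 'node:fs/promises';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Hypothetical stand-in for the plugin's checksum() helper (assumed SHA-256 hex).
const checksum = (buf) => createHash('sha256').update(buf).digest('hex');

// Write content, re-read it, and compare digests: the same shape as
// restoreBackup's post-write verification.
async function verifiedWrite(path, content, expectedHash) {
  await writeFile(path, content);
  const written = await readFile(path);
  const hash = checksum(written);
  return hash === expectedHash
    ? { status: 'restored' }
    : { status: 'checksum-mismatch', error: `Expected ${expectedHash}, got ${hash}` };
}

const target = join(tmpdir(), `restore-demo-${process.pid}.txt`);
const content = Buffer.from('hello backup');
const result = await verifiedWrite(target, content, checksum(content));
console.log(result.status); // restored
```

Re-reading the written bytes (rather than trusting the buffer in memory) is what catches partial or corrupted writes, which is why the plugin verifies both before and after the copy.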
217 plugins/config-audit/scanners/rules-validator.mjs Normal file

@@ -0,0 +1,217 @@
/**
 * RUL Scanner — Rules Validator
 * Validates .claude/rules/ files: glob matching against real files, orphan detection, frontmatter.
 * Finding IDs: CA-RUL-NNN
 */

import { readTextFile } from './lib/file-discovery.mjs';
import { finding, scannerResult } from './lib/output.mjs';
import { SEVERITY } from './lib/severity.mjs';
import { parseFrontmatter } from './lib/yaml-parser.mjs';
import { lineCount, truncate } from './lib/string-utils.mjs';
import { readdir, stat } from 'node:fs/promises';
import { join, resolve, relative } from 'node:path';

const SCANNER = 'RUL';

/**
 * Scan .claude/rules/ directories for issues.
 * @param {string} targetPath
 * @param {{ files: import('./lib/file-discovery.mjs').ConfigFile[] }} discovery
 * @returns {Promise<object>}
 */
export async function scan(targetPath, discovery) {
  const start = Date.now();
  const ruleFiles = discovery.files.filter(f => f.type === 'rule');
  const findings = [];
  let filesScanned = 0;

  if (ruleFiles.length === 0) {
    return scannerResult(SCANNER, 'skipped', [], 0, Date.now() - start);
  }

  // Collect all real files in the project for glob matching
  const projectFiles = await collectProjectFiles(targetPath);

  for (const file of ruleFiles) {
    const content = await readTextFile(file.absPath);
    if (!content) continue;
    filesScanned++;

    const { frontmatter, body, bodyStartLine } = parseFrontmatter(content);
    const lines = lineCount(content);

    // --- Frontmatter checks ---
    if (!frontmatter) {
      // Rules without frontmatter are "always on" — not necessarily wrong, just note it
      if (lines > 5) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.info,
          title: 'Rule has no frontmatter (always active)',
          description: `${file.relPath} has no YAML frontmatter. It will be loaded for ALL files. Add paths: frontmatter to scope it.`,
          file: file.absPath,
          recommendation: 'Add frontmatter with paths: to limit when this rule applies.',
        }));
      }
    } else {
      // Check for paths/globs frontmatter
      const paths = frontmatter.paths || frontmatter.globs;

      if (frontmatter.globs && !frontmatter.paths) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.low,
          title: 'Rule uses deprecated "globs" field',
          description: `${file.relPath} uses "globs:" which is legacy. Use "paths:" instead.`,
          file: file.absPath,
          evidence: `globs: ${JSON.stringify(frontmatter.globs)}`,
          recommendation: 'Rename "globs:" to "paths:" in frontmatter.',
          autoFixable: true,
        }));
      }

      if (paths) {
        const patterns = Array.isArray(paths) ? paths : [paths];

        for (const pattern of patterns) {
          if (typeof pattern !== 'string') continue;

          // Check if pattern matches any real files
          const matchCount = countGlobMatches(pattern, projectFiles, targetPath);
          if (matchCount === 0) {
            findings.push(finding({
              scanner: SCANNER,
              severity: SEVERITY.high,
              title: 'Rule path pattern matches no files',
              description: `${file.relPath}: pattern "${pattern}" matches 0 files. This rule will never activate.`,
              file: file.absPath,
              evidence: `paths: "${pattern}"`,
              recommendation: 'Check the glob pattern. Common issues: wrong directory name, missing **, incorrect extension.',
              autoFixable: false,
            }));
          }
        }
      }
    }

    // --- Content quality checks ---
    if (lines < 2) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.low,
        title: 'Rule file is nearly empty',
        description: `${file.relPath} has only ${lines} line(s).`,
        file: file.absPath,
        recommendation: 'Add meaningful content or remove the file.',
        autoFixable: false,
      }));
    }

    // Check for overly broad rules (huge files without path scoping)
    if (!frontmatter?.paths && !frontmatter?.globs && lines > 50) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.medium,
        title: 'Large unscoped rule file',
        description: `${file.relPath} has ${lines} lines and no path scoping. It loads into context for every file interaction.`,
        file: file.absPath,
        evidence: `${lines} lines, no paths: frontmatter`,
        recommendation: 'Add paths: frontmatter to scope this rule, or split into smaller path-specific rules.',
        autoFixable: false,
      }));
    }

    // Check file extension
    if (!file.absPath.endsWith('.md')) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.medium,
        title: 'Rule file is not .md',
        description: `${file.relPath} is not a .md file. Only .md files are loaded from rules/.`,
        file: file.absPath,
        recommendation: 'Rename to .md extension.',
        autoFixable: true,
      }));
    }
  }

  return scannerResult(SCANNER, 'ok', findings, filesScanned, Date.now() - start);
}

/**
 * Collect project file paths for glob matching (limited depth).
 * @param {string} targetPath
 * @returns {Promise<string[]>}
 */
async function collectProjectFiles(targetPath, depth = 0) {
  if (depth > 4) return [];
  const SKIP = new Set(['node_modules', '.git', 'dist', 'build', 'coverage', '.next', '.nuxt', 'vendor']);
  const files = [];

  let entries;
  try {
    entries = await readdir(targetPath, { withFileTypes: true });
  } catch {
    return files;
  }

  for (const entry of entries) {
    const fullPath = join(targetPath, entry.name);
    if (entry.isFile()) {
      files.push(fullPath);
    } else if (entry.isDirectory() && !SKIP.has(entry.name) && !entry.name.startsWith('.')) {
      const subFiles = await collectProjectFiles(fullPath, depth + 1);
      files.push(...subFiles);
      if (files.length > 5000) break; // Safety limit
    }
  }

  return files;
}

/**
 * Count how many files match a simplified glob pattern.
 * Supports: *, **, specific extensions.
 * @param {string} pattern
 * @param {string[]} files
 * @param {string} basePath
 * @returns {number}
 */
function countGlobMatches(pattern, files, basePath) {
  try {
    const regex = globToRegex(pattern);
    let count = 0;
    for (const file of files) {
      const rel = relative(basePath, file);
      if (regex.test(rel)) count++;
    }
    return count;
  } catch {
    return -1; // Pattern parsing error — don't report as orphan
  }
}

/**
 * Convert a simple glob pattern to a regex.
 * Handles ** matching zero or more path segments.
 * @param {string} pattern
 * @returns {RegExp}
 */
function globToRegex(pattern) {
  let regex = pattern
    .replace(/\./g, '\\.')
    .replace(/\/\*\*\//g, '{{GLOBSTAR_SLASH}}')
    .replace(/\*\*/g, '{{GLOBSTAR}}')
    .replace(/\*/g, '[^/]*')
    .replace(/\{\{GLOBSTAR_SLASH\}\}/g, '(?:/.+/|/)') // **/ matches 0+ intermediate dirs
    .replace(/\{\{GLOBSTAR\}\}/g, '.*')
    .replace(/\?/g, '[^/]');

  // Handle leading patterns
  if (!regex.startsWith('.*') && !regex.startsWith('/')) {
    regex = '(?:^|/)' + regex;
  }

  return new RegExp(regex);
}
248 plugins/config-audit/scanners/scan-orchestrator.mjs Normal file

@@ -0,0 +1,248 @@
#!/usr/bin/env node

/**
 * Config-Audit Scan Orchestrator
 * Runs all registered scanners sequentially, collects findings, outputs JSON envelope.
 * Usage: node scan-orchestrator.mjs <target-path> [--output-file path] [--save-baseline] [--baseline path]
 * Zero external dependencies.
 */

import { resolve, sep } from 'node:path';
import { fileURLToPath } from 'node:url';
import { writeFile } from 'node:fs/promises';
import { resetCounter, envelope } from './lib/output.mjs';
import { discoverConfigFiles, discoverConfigFilesMulti, discoverFullMachinePaths } from './lib/file-discovery.mjs';
import { loadSuppressions, applySuppressions, formatSuppressionSummary } from './lib/suppression.mjs';

// Scanner registry — import order determines execution order
import { scan as scanClaudeMd } from './claude-md-linter.mjs';
import { scan as scanSettings } from './settings-validator.mjs';
import { scan as scanHooks } from './hook-validator.mjs';
import { scan as scanRules } from './rules-validator.mjs';
import { scan as scanMcp } from './mcp-config-validator.mjs';
import { scan as scanImports } from './import-resolver.mjs';
import { scan as scanConflicts } from './conflict-detector.mjs';
import { scan as scanGap } from './feature-gap-scanner.mjs';

// Directory names that identify test fixture / example directories
const FIXTURE_DIR_NAMES = ['tests', 'examples', '__tests__', 'test-fixtures'];

/**
 * Check if a finding originates from a test fixture or example directory
 * relative to the scan target. Only filters when the finding's path extends
 * beyond the target into a fixture subdirectory — if the target itself is
 * a fixture directory, findings are NOT filtered.
 * @param {object} f - Finding object
 * @param {string} targetPath - Resolved scan target path
 * @returns {boolean}
 */
function isFixturePath(f, targetPath) {
  const p = f.file || f.path || f.location || '';
  if (!p || !p.startsWith(targetPath)) return false;
  // Get the path relative to target, then check if it passes through a fixture dir
  const rel = p.slice(targetPath.length);
  return FIXTURE_DIR_NAMES.some(dir => rel.includes(sep + dir + sep));
}

// Exported for testing
export { isFixturePath, FIXTURE_DIR_NAMES };

const SCANNERS = [
  { name: 'CML', fn: scanClaudeMd, label: 'CLAUDE.md Linter' },
  { name: 'SET', fn: scanSettings, label: 'Settings Validator' },
  { name: 'HKV', fn: scanHooks, label: 'Hook Validator' },
  { name: 'RUL', fn: scanRules, label: 'Rules Validator' },
  { name: 'MCP', fn: scanMcp, label: 'MCP Config Validator' },
  { name: 'IMP', fn: scanImports, label: 'Import Resolver' },
  { name: 'CNF', fn: scanConflicts, label: 'Conflict Detector' },
  { name: 'GAP', fn: scanGap, label: 'Feature Gap Scanner' },
];

/**
 * Run all scanners against target path.
 * @param {string} targetPath
 * @param {object} [opts]
 * @param {boolean} [opts.includeGlobal=false]
 * @param {boolean} [opts.fullMachine=false] - Scan all known locations across the machine
 * @param {boolean} [opts.suppress=true] - Apply suppressions from .config-audit-ignore
 * @param {boolean} [opts.filterFixtures=true] - Exclude findings from test/example paths
 * @returns {Promise<object>} Full envelope with all results
 */
export async function runAllScanners(targetPath, opts = {}) {
  const start = Date.now();
  const resolvedPath = resolve(targetPath);

  // Shared file discovery — scanners reuse this
  let discovery;
  if (opts.fullMachine) {
    const roots = await discoverFullMachinePaths();
    discovery = await discoverConfigFilesMulti(roots);
  } else {
    discovery = await discoverConfigFiles(resolvedPath, {
      includeGlobal: opts.includeGlobal || false,
    });
  }

  const results = [];

  for (const scanner of SCANNERS) {
    resetCounter();
    const scanStart = Date.now();
    try {
      const result = await scanner.fn(resolvedPath, discovery);
      results.push(result);
      const count = result.findings.length;
      process.stderr.write(`  [${scanner.name}] ${scanner.label}: ${count} finding(s) (${Date.now() - scanStart}ms)\n`);
    } catch (err) {
      results.push({
        scanner: scanner.name,
        status: 'error',
        files_scanned: 0,
        duration_ms: Date.now() - scanStart,
        findings: [],
        counts: { critical: 0, high: 0, medium: 0, low: 0, info: 0 },
        error: err.message,
      });
      process.stderr.write(`  [${scanner.name}] ${scanner.label}: ERROR — ${err.message}\n`);
    }
  }

  // Filter findings from test fixtures / examples (unless disabled)
  const shouldFilterFixtures = opts.filterFixtures !== false;
  let fixtureFindings = [];

  if (shouldFilterFixtures) {
    for (const result of results) {
      const active = [];
      const fixture = [];
      for (const f of result.findings) {
        if (isFixturePath(f, resolvedPath)) {
          fixture.push(f);
        } else {
          active.push(f);
        }
      }
      if (fixture.length > 0) {
        fixtureFindings.push(...fixture);
        result.findings = active;
        result.counts = { critical: 0, high: 0, medium: 0, low: 0, info: 0 };
        for (const f of active) {
          if (result.counts[f.severity] !== undefined) result.counts[f.severity]++;
        }
      }
    }
    if (fixtureFindings.length > 0) {
      process.stderr.write(`  ${fixtureFindings.length} finding(s) from test fixtures excluded\n`);
    }
  }

  // Apply suppressions (unless disabled)
  const shouldSuppress = opts.suppress !== false;
  let suppressedFindings = [];

  if (shouldSuppress) {
    const { suppressions } = await loadSuppressions(resolvedPath);
    if (suppressions.length > 0) {
      for (const result of results) {
        const { active, suppressed } = applySuppressions(result.findings, suppressions);
        suppressedFindings.push(...suppressed);
        result.findings = active;
        // Recalculate counts
        result.counts = { critical: 0, high: 0, medium: 0, low: 0, info: 0 };
        for (const f of active) {
          if (result.counts[f.severity] !== undefined) result.counts[f.severity]++;
        }
      }
      if (suppressedFindings.length > 0) {
        process.stderr.write(`  ${formatSuppressionSummary(suppressedFindings)}\n`);
      }
    }
  }

  const totalMs = Date.now() - start;
  const env = envelope(resolvedPath, results, totalMs);
  if (fixtureFindings.length > 0) {
    env.fixture_findings = fixtureFindings;
  }
  if (suppressedFindings.length > 0) {
    env.suppressed_findings = suppressedFindings;
  }
  return env;
}

// --- CLI entry point ---
async function main() {
  const args = process.argv.slice(2);
  let targetPath = '.';
  let outputFile = null;
  let saveBaseline = false;
  let baselinePath = null;

  for (let i = 0; i < args.length; i++) {
    if (args[i] === '--output-file' && args[i + 1]) {
      outputFile = args[++i];
    } else if (args[i] === '--save-baseline') {
      saveBaseline = true;
    } else if (args[i] === '--baseline' && args[i + 1]) {
      baselinePath = args[++i];
    } else if (args[i] === '--global') {
      // handled below
    } else if (args[i] === '--full-machine') {
      // handled below
    } else if (args[i] === '--no-suppress') {
      // handled below
    } else if (args[i] === '--include-fixtures') {
      // handled below
    } else if (!args[i].startsWith('-')) {
      targetPath = args[i];
    }
  }

  const includeGlobal = args.includes('--global');
  const fullMachine = args.includes('--full-machine');
  const suppress = !args.includes('--no-suppress');
  const filterFixtures = !args.includes('--include-fixtures');

  process.stderr.write(`Config-Audit Scanner v2.2.0\n`);
  process.stderr.write(`Target: ${resolve(targetPath)}\n`);
  process.stderr.write(`Scope: ${fullMachine ? 'full-machine' : includeGlobal ? 'global' : 'project'}\n`);
  process.stderr.write(`Fixtures: ${filterFixtures ? 'excluded' : 'included'}\n\n`);

  const result = await runAllScanners(targetPath, { includeGlobal, fullMachine, suppress, filterFixtures });

  const json = JSON.stringify(result, null, 2);

  if (outputFile) {
    await writeFile(outputFile, json, 'utf-8');
    process.stderr.write(`\nResults written to ${outputFile}\n`);
  } else {
    process.stdout.write(json + '\n');
  }

  if (saveBaseline) {
    const bPath = baselinePath || resolve(targetPath, '.config-audit-baseline.json');
    await writeFile(bPath, json, 'utf-8');
    process.stderr.write(`Baseline saved to ${bPath}\n`);
  }

  // Summary
  const agg = result.aggregate;
  process.stderr.write(`\n--- Summary ---\n`);
  process.stderr.write(`Findings: ${agg.total_findings} (C:${agg.counts.critical} H:${agg.counts.high} M:${agg.counts.medium} L:${agg.counts.low} I:${agg.counts.info})\n`);
  process.stderr.write(`Risk: ${agg.risk_score}/100 (${agg.risk_band})\n`);
  process.stderr.write(`Verdict: ${agg.verdict}\n`);

  // Exit code
  if (agg.verdict === 'FAIL') process.exit(2);
  if (agg.verdict === 'WARNING') process.exit(1);
  process.exit(0);
}

// Only run CLI if invoked directly (fileURLToPath handles Windows drive letters
// and percent-encoding, unlike comparing against URL.pathname)
const isDirectRun = process.argv[1] && resolve(process.argv[1]) === resolve(fileURLToPath(import.meta.url));
if (isDirectRun) {
  main().catch(err => {
    process.stderr.write(`Fatal: ${err.message}\n`);
    process.exit(3);
  });
}
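The orchestrator recomputes per-severity counts twice with the same loop (after fixture filtering and again after suppression); that loop can be factored as a small pure helper. A sketch; recountBySeverity is an illustrative name, not a function in the plugin:

```javascript
// Recompute per-severity counts for a scanner result after its findings array
// has been filtered, mirroring the loop the orchestrator runs in both passes.
function recountBySeverity(findings) {
  const counts = { critical: 0, high: 0, medium: 0, low: 0, info: 0 };
  for (const f of findings) {
    // Findings with an unrecognized severity are silently ignored,
    // matching the `!== undefined` guard in the orchestrator.
    if (counts[f.severity] !== undefined) counts[f.severity]++;
  }
  return counts;
}

const counts = recountBySeverity([
  { severity: 'high' },
  { severity: 'info' },
  { severity: 'high' },
  { severity: 'bogus' }, // ignored
]);
console.log(counts); // { critical: 0, high: 2, medium: 0, low: 0, info: 1 }
```

Keeping the counts derived from the filtered array (rather than decrementing in place) makes the two filtering passes order-independent.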
178 plugins/config-audit/scanners/self-audit.mjs Normal file

@@ -0,0 +1,178 @@
#!/usr/bin/env node

/**
 * Config-Audit Self-Audit
 * Runs the plugin's own scanners on its own configuration.
 * CLI: node self-audit.mjs [--json] [--fix]
 * Exit codes: 0=PASS (no critical/high), 1=WARN (high findings), 2=FAIL (critical findings)
 * Zero external dependencies.
 */

import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { runAllScanners } from './scan-orchestrator.mjs';
import { scan as scanPluginHealth } from './plugin-health-scanner.mjs';
import { scoreByArea } from './lib/scoring.mjs';
import { gradeFromPassRate } from './lib/severity.mjs';
import { loadSuppressions, applySuppressions } from './lib/suppression.mjs';

const __dirname = dirname(fileURLToPath(import.meta.url));
const PLUGIN_ROOT = resolve(__dirname, '..');

/**
 * Run self-audit on this plugin.
 * @param {object} [opts]
 * @param {boolean} [opts.fix=false] - Run fix-engine on auto-fixable findings
 * @returns {Promise<object>} Combined result
 */
export async function runSelfAudit(opts = {}) {
  const pluginDir = PLUGIN_ROOT;

  // 1. Run all config scanners on plugin root
  // Fixture filtering is handled automatically by runAllScanners (filterFixtures defaults to true)
  const configEnvelope = await runAllScanners(pluginDir);

  // 2. Run plugin health scanner + apply suppressions
  const pluginHealthResult = await scanPluginHealth(pluginDir);
  const { suppressions } = await loadSuppressions(pluginDir);
  if (suppressions.length > 0) {
    const { active, suppressed } = applySuppressions(pluginHealthResult.findings, suppressions);
    pluginHealthResult.findings = active;
    pluginHealthResult.suppressedFindings = suppressed;
  }

  // 3. Score config quality
  const areaScores = scoreByArea(configEnvelope.scanners);
  const avgScore = areaScores.areas.length > 0
    ? Math.round(areaScores.areas.reduce((s, a) => s + a.score, 0) / areaScores.areas.length)
    : 0;
  const configGrade = gradeFromPassRate(avgScore);

  // 4. Score plugin health
  const pluginIssueCount = pluginHealthResult.findings.length;
  const pluginScore = Math.max(0, 100 - pluginIssueCount * 10);
  const pluginGrade = gradeFromPassRate(pluginScore);

  // 5. Determine overall result
  const allFindings = [
    ...configEnvelope.scanners.flatMap(s => s.findings),
    ...pluginHealthResult.findings,
  ];

  const hasCritical = allFindings.some(f => f.severity === 'critical');
  const hasHigh = allFindings.some(f => f.severity === 'high');
  let exitCode = 0;
  let verdict = 'PASS';
  if (hasCritical) { exitCode = 2; verdict = 'FAIL'; }
  else if (hasHigh) { exitCode = 1; verdict = 'WARN'; }

  // 6. Optionally fix
  let fixResult = null;
  if (opts.fix && allFindings.some(f => f.autoFixable)) {
    try {
      const { planFixes, applyFixes } = await import('./fix-engine.mjs');
      const plan = planFixes(configEnvelope);
      if (plan.length > 0) {
        fixResult = await applyFixes(plan);
      }
    } catch {
      // Fix engine unavailable or failed — non-fatal
    }
  }

  return {
    pluginDir,
    configGrade,
    configScore: avgScore,
    pluginGrade,
    pluginScore,
    configEnvelope,
    pluginHealthResult,
    allFindings,
    exitCode,
    verdict,
    fixResult,
  };
}

/**
 * Format self-audit result for terminal display.
 * @param {object} result - From runSelfAudit()
 * @returns {string}
 */
export function formatSelfAudit(result) {
  const lines = [];
  lines.push('\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501');
  lines.push('  Config-Audit Self-Audit');
  lines.push('\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501');
  lines.push('');
  lines.push(`  Plugin health:  ${result.pluginGrade} (${result.pluginScore})`);
  lines.push(`  Config quality: ${result.configGrade} (${result.configScore})`);
  lines.push('');

  // Issues summary
  const nonInfo = result.allFindings.filter(f => f.severity !== 'info');
  if (nonInfo.length > 0) {
    lines.push(`  Issues (${nonInfo.length}):`);
    for (const f of nonInfo.slice(0, 10)) {
      lines.push(`    - [${f.severity}] ${f.title}`);
    }
    if (nonInfo.length > 10) {
      lines.push(`    ...and ${nonInfo.length - 10} more`);
    }
  } else {
    lines.push('  Issues (0)');
  }

  lines.push('');

  // Fix results
  if (result.fixResult) {
    const applied = result.fixResult.filter(r => r.status === 'applied').length;
    lines.push(`  Auto-fix: ${applied} fix(es) applied`);
    lines.push('');
  }

  // Verdict
  if (result.verdict === 'PASS') {
    lines.push('  Self-audit: PASS');
    lines.push('  (No critical or high findings)');
  } else if (result.verdict === 'WARN') {
    lines.push('  Self-audit: WARN');
    lines.push('  (High-severity findings detected)');
  } else {
    lines.push('  Self-audit: FAIL');
    lines.push('  (Critical findings detected)');
  }

  lines.push('');
  lines.push('\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501');

  return lines.join('\n');
}

// --- CLI entry point ---
async function main() {
  const args = process.argv.slice(2);
  const jsonMode = args.includes('--json');
  const fixMode = args.includes('--fix');

  const result = await runSelfAudit({ fix: fixMode });

  if (jsonMode) {
    const json = JSON.stringify(result, null, 2) + '\n';
    await new Promise(resolve => process.stdout.write(json, resolve));
  } else {
    process.stderr.write('\n' + formatSelfAudit(result) + '\n');
  }

  process.exitCode = result.exitCode;
}

const isDirectRun = process.argv[1] && resolve(process.argv[1]) === resolve(fileURLToPath(import.meta.url));
if (isDirectRun) {
  main().catch(err => {
    process.stderr.write(`Fatal: ${err.message}\n`);
    process.exit(3);
  });
}
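The scoring and verdict logic in steps 4 and 5 of runSelfAudit condenses to two small pure functions; a minimal sketch (pluginScore and verdictFor are illustrative names, not exports of the plugin):

```javascript
// Mirrors self-audit's plugin-health scoring: each finding costs 10 points, floored at 0.
const pluginScore = (issueCount) => Math.max(0, 100 - issueCount * 10);

// Mirrors the verdict / exit-code mapping: any critical wins over any high.
function verdictFor(findings) {
  if (findings.some(f => f.severity === 'critical')) return { verdict: 'FAIL', exitCode: 2 };
  if (findings.some(f => f.severity === 'high')) return { verdict: 'WARN', exitCode: 1 };
  return { verdict: 'PASS', exitCode: 0 };
}

console.log(pluginScore(3));                             // 70
console.log(pluginScore(15));                            // 0 (floored)
console.log(verdictFor([{ severity: 'high' }]).verdict); // WARN
```

The floor at zero means the grade stops distinguishing plugins once they accumulate ten or more findings; the verdict, not the score, is what drives the process exit code.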
224
plugins/config-audit/scanners/settings-validator.mjs
Normal file
224
plugins/config-audit/scanners/settings-validator.mjs
Normal file
@@ -0,0 +1,224 @@
/**
 * SET Scanner — Settings.json Validator
 * Validates schema, detects unknown/deprecated keys, type mismatches.
 * Finding IDs: CA-SET-NNN
 */

import { readTextFile } from './lib/file-discovery.mjs';
import { finding, scannerResult } from './lib/output.mjs';
import { SEVERITY } from './lib/severity.mjs';
import { parseJson } from './lib/yaml-parser.mjs';
import { extractKeys } from './lib/string-utils.mjs';

const SCANNER = 'SET';

/** Known top-level settings.json keys (as of April 2026) */
const KNOWN_KEYS = new Set([
  'agent', 'allowedChannelPlugins', 'allowedHttpHookUrls', 'allowedMcpServers',
  'allowManagedHooksOnly', 'allowManagedMcpServersOnly', 'allowManagedPermissionRulesOnly',
  'alwaysThinkingEnabled', 'apiKeyHelper', 'attribution', 'autoMemoryDirectory',
  'autoMemoryEnabled', 'autoMode', 'autoUpdatesChannel', 'availableModels',
  'awsAuthRefresh', 'awsCredentialExport', 'blockedMarketplaces', 'channelsEnabled',
  'cleanupPeriodDays', 'claudeMdExcludes', 'companyAnnouncements', 'defaultShell',
  'deniedMcpServers', 'disableAllHooks', 'disableAutoMode', 'disableDeepLinkRegistration',
  'disabledMcpjsonServers', 'effortLevel', 'enableAllProjectMcpServers',
  'enabledMcpjsonServers', 'enabledPlugins', 'env', 'extraKnownMarketplaces',
  'fastModePerSessionOptIn', 'feedbackSurveyRate', 'fileSuggestion',
  'forceLoginMethod', 'forceLoginOrgUUID', 'hooks', 'httpHookAllowedEnvVars',
  'includeCoAuthoredBy', 'includeGitInstructions', 'language', 'model',
  'modelOverrides', 'otelHeadersHelper', 'outputStyle', 'permissions',
  'plansDirectory', 'pluginTrustMessage', 'prefersReducedMotion',
  'respectGitignore', 'showClearContextOnPlanAccept', 'showThinkingSummaries',
  'spinnerTipsEnabled', 'spinnerTipsOverride', 'spinnerVerbs', 'statusLine',
  'strictKnownMarketplaces', 'useAutoModeDuringPlan', 'voiceEnabled',
  'worktree', '$schema',
]);

/** Deprecated keys with migration info */
const DEPRECATED_KEYS = new Map([
  ['includeCoAuthoredBy', 'Use "attribution" instead'],
]);

/** Keys that require specific types */
const TYPE_CHECKS = new Map([
  ['alwaysThinkingEnabled', 'boolean'],
  ['autoMemoryEnabled', 'boolean'],
  ['channelsEnabled', 'boolean'],
  ['cleanupPeriodDays', 'number'],
  ['disableAllHooks', 'boolean'],
  ['effortLevel', 'string'],
  ['enableAllProjectMcpServers', 'boolean'],
  ['fastModePerSessionOptIn', 'boolean'],
  ['feedbackSurveyRate', 'number'],
  ['includeGitInstructions', 'boolean'],
  ['language', 'string'],
  ['model', 'string'],
  ['outputStyle', 'string'],
  ['prefersReducedMotion', 'boolean'],
  ['respectGitignore', 'boolean'],
  ['showThinkingSummaries', 'boolean'],
  ['spinnerTipsEnabled', 'boolean'],
  ['voiceEnabled', 'boolean'],
]);

/** Valid effortLevel values */
const VALID_EFFORT_LEVELS = new Set(['low', 'medium', 'high', 'max']);

/**
 * Scan all settings.json files discovered.
 * @param {string} targetPath
 * @param {{ files: import('./lib/file-discovery.mjs').ConfigFile[] }} discovery
 * @returns {Promise<object>}
 */
export async function scan(targetPath, discovery) {
  const start = Date.now();
  const settingsFiles = discovery.files.filter(f => f.type === 'settings-json');
  const findings = [];
  let filesScanned = 0;

  if (settingsFiles.length === 0) {
    return scannerResult(SCANNER, 'skipped', [], 0, Date.now() - start);
  }

  for (const file of settingsFiles) {
    const content = await readTextFile(file.absPath);
    if (!content) continue;
    filesScanned++;

    const parsed = parseJson(content);
    if (parsed === null) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.critical,
        title: 'Invalid JSON in settings file',
        description: `${file.relPath} contains invalid JSON and will be ignored by Claude Code.`,
        file: file.absPath,
        recommendation: 'Fix JSON syntax errors. Use a JSON validator.',
        autoFixable: false,
      }));
      continue;
    }

    // Check for unknown keys
    for (const key of Object.keys(parsed)) {
      if (!KNOWN_KEYS.has(key)) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.medium,
          title: 'Unknown settings key',
          description: `${file.relPath}: "${key}" is not a recognized settings.json key. It will be silently ignored.`,
          file: file.absPath,
          evidence: key,
          recommendation: 'Check spelling. See https://json.schemastore.org/claude-code-settings.json for valid keys.',
          autoFixable: false,
        }));
      }
    }

    // Check for deprecated keys
    for (const [key, migration] of DEPRECATED_KEYS) {
      if (parsed[key] !== undefined) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.medium,
          title: 'Deprecated settings key',
          description: `${file.relPath}: "${key}" is deprecated. ${migration}`,
          file: file.absPath,
          evidence: `${key}: ${JSON.stringify(parsed[key])}`,
          recommendation: migration,
          autoFixable: true,
        }));
      }
    }

    // Type validation
    for (const [key, expectedType] of TYPE_CHECKS) {
      if (parsed[key] !== undefined && typeof parsed[key] !== expectedType) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.high,
          title: 'Type mismatch in settings',
          description: `${file.relPath}: "${key}" should be ${expectedType}, got ${typeof parsed[key]}.`,
          file: file.absPath,
          evidence: `${key}: ${JSON.stringify(parsed[key])} (${typeof parsed[key]})`,
          recommendation: `Change "${key}" to a ${expectedType} value.`,
          autoFixable: true,
        }));
      }
    }

    // effortLevel value check
    if (parsed.effortLevel && !VALID_EFFORT_LEVELS.has(parsed.effortLevel)) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.medium,
        title: 'Invalid effortLevel value',
        description: `${file.relPath}: effortLevel "${parsed.effortLevel}" is not valid.`,
        file: file.absPath,
        evidence: `effortLevel: "${parsed.effortLevel}"`,
        recommendation: `Use one of: ${[...VALID_EFFORT_LEVELS].join(', ')}`,
        autoFixable: true,
      }));
    }

    // Missing $schema hint
    if (!parsed.$schema) {
      findings.push(finding({
        scanner: SCANNER,
        severity: SEVERITY.info,
        title: 'Missing $schema reference',
        description: `${file.relPath} lacks a $schema reference. Adding one enables autocomplete in VS Code/Cursor.`,
        file: file.absPath,
        recommendation: 'Add: "$schema": "https://json.schemastore.org/claude-code-settings.json"',
        autoFixable: true,
      }));
    }

    // Permissions checks
    if (parsed.permissions) {
      const perms = parsed.permissions;

      if (!perms.deny || (Array.isArray(perms.deny) && perms.deny.length === 0)) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.medium,
          title: 'No deny rules configured',
          description: `${file.relPath}: No permission deny rules. Claude can access all files including .env and secrets.`,
          file: file.absPath,
          recommendation: 'Add deny rules for sensitive files: "deny": ["Read(./.env)", "Read(./secrets/**)"]',
          autoFixable: false,
        }));
      }

      if (!perms.allow || (Array.isArray(perms.allow) && perms.allow.length === 0)) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.low,
          title: 'No allow rules configured',
          description: `${file.relPath}: No permission allow rules. This means frequent permission prompts for common operations.`,
          file: file.absPath,
          recommendation: 'Add allow rules for common tools: "allow": ["Bash(npm run *)", "Read(src/**)"]',
          autoFixable: false,
        }));
      }
    }

    // hooks checks (basic — detailed in hook-validator)
    if (parsed.hooks) {
      if (Array.isArray(parsed.hooks)) {
        findings.push(finding({
          scanner: SCANNER,
          severity: SEVERITY.critical,
          title: 'Hooks configured as array instead of object',
          description: `${file.relPath}: "hooks" must be an object with event keys, not an array. All hooks will be ignored.`,
          file: file.absPath,
          evidence: '"hooks": [...]',
          recommendation: 'Change to object format: "hooks": { "PreToolUse": [...] }',
          autoFixable: true,
        }));
      }
    }
  }

  return scannerResult(SCANNER, 'ok', findings, filesScanned, Date.now() - start);
}
101
plugins/config-audit/skills/config-hierarchy/SKILL.md
Normal file
@@ -0,0 +1,101 @@
---
name: config-hierarchy
description: |
  This skill should be used when the user asks about Claude Code configuration files,
  CLAUDE.md hierarchy, settings.json structure, MCP server configuration, or rules directory patterns.
  Triggers on: "CLAUDE.md hierarchy", "config file locations", "settings.json", ".mcp.json",
  "rules directory", "configuration inheritance", "where does Claude read config from".
---

# Claude Code Configuration Hierarchy

A comprehensive reference for understanding Claude Code's configuration system.

## Overview

Claude Code loads configuration from multiple sources with a defined precedence order. Understanding this hierarchy is crucial for effective configuration management.

## Configuration Sources (By Priority)

### 1. CLAUDE.md Hierarchy

From highest to lowest priority:

| Level | Location | Shared? | Purpose |
|-------|----------|---------|---------|
| **Managed** | System-level paths | All users | Enterprise/organization policies |
| **Project local** | `./CLAUDE.local.md` | No (gitignored) | Machine-specific project overrides |
| **Project shared** | `./CLAUDE.md` or `./.claude/CLAUDE.md` | Yes (git) | Team-shared project instructions |
| **Project rules** | `./.claude/rules/*.md` | Yes (git) | Modular, path-scoped rules |
| **User global** | `~/.claude/CLAUDE.md` | No | Personal defaults |

### 2. Settings.json Hierarchy

| Level | Location | Purpose |
|-------|----------|---------|
| **Managed** | System `managed-settings.json` | Enterprise policies (highest) |
| **CLI args** | Command line | Session-only overrides |
| **Local** | `.claude/settings.local.json` | Machine-specific project |
| **Project** | `.claude/settings.json` | Team-shared project |
| **User** | `~/.claude/settings.json` | Personal defaults (lowest) |

### 3. Other Configuration Files

| File | Location | Purpose |
|------|----------|---------|
| `.mcp.json` | Project root | MCP server definitions for project |
| `~/.claude.json` | Home | OAuth tokens, global MCP servers, state |
| `.claudeignore` | Project | File/directory exclusions |
| `~/.claude/agents/` | User | Custom subagent definitions |

## Managed Configuration Paths

For enterprise/organization-wide settings:

| Platform | Path |
|----------|------|
| macOS | `/Library/Application Support/ClaudeCode/CLAUDE.md` |
| Linux | `/etc/claude-code/CLAUDE.md` |
| Windows | `C:\Program Files\ClaudeCode\CLAUDE.md` |

## Key Concepts

### Inheritance

- Files are loaded from the current directory upward to the root
- Subtree files are loaded on demand when entering directories
- Lower-priority files provide defaults
- Higher-priority files override specific settings
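The upward walk described under Inheritance can be sketched as a small helper. This is an illustrative model under assumed semantics, not Claude Code's actual loader; `candidateClaudeMdPaths` and its `home` parameter are hypothetical names introduced for this example.

```javascript
// Hypothetical sketch: collect candidate CLAUDE.md paths from the current
// directory up to the filesystem root, lowest priority first, following the
// tables above. Illustrative only, not Claude Code's real implementation.
import path from 'node:path';

function candidateClaudeMdPaths(cwd, home) {
  const candidates = [path.join(home, '.claude', 'CLAUDE.md')]; // user global (lowest)
  const dirs = [];
  for (let dir = cwd; ; dir = path.dirname(dir)) {
    dirs.push(dir);
    if (dir === path.dirname(dir)) break; // reached the root
  }
  // Walk root -> cwd so deeper (more specific) files come later and win.
  for (const dir of dirs.reverse()) {
    candidates.push(path.join(dir, 'CLAUDE.md'));            // project shared
    candidates.push(path.join(dir, '.claude', 'CLAUDE.md')); // alt location
    candidates.push(path.join(dir, 'CLAUDE.local.md'));      // local overrides
  }
  return candidates;
}
```

Reading the returned list in order and letting later files override earlier ones reproduces the "lower priority provides defaults" behavior.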
### Path-Scoped Rules

In `.claude/rules/`, files can be scoped to specific paths:

```yaml
---
globs: ["src/**/*.ts", "src/**/*.tsx"]
---

# TypeScript Rules
These rules apply only to TypeScript files in src/
```

### @Imports

CLAUDE.md files can import other files:

```markdown
# Project CLAUDE.md

@./docs/api.md
@./CONTRIBUTING.md
```

## Further Reading

See the reference files for detailed schemas:
- `references/claude-md-structure.md` - CLAUDE.md sections
- `references/settings-json-schema.md` - settings.json keys
- `references/mcp-json-patterns.md` - MCP configuration
- `references/rules-directory.md` - Rules pattern
- `references/quality-criteria.md` - Quick reference (detailed rubric in `agents/analyzer-agent.md`)
@@ -0,0 +1,103 @@
# CLAUDE.md Structure Reference

## Purpose

CLAUDE.md files provide context and instructions to Claude Code for your project or globally.

## File Locations

| Location | Purpose | Shared? |
|----------|---------|---------|
| `~/.claude/CLAUDE.md` | Global defaults | No |
| `./CLAUDE.md` | Project shared | Yes |
| `./.claude/CLAUDE.md` | Alt project location | Yes |
| `./CLAUDE.local.md` | Local overrides | No |

## Common Sections

### Project Context

```markdown
# Project Name

Brief description of what this project does.

## Architecture

- Technology stack
- Key components
- Dependencies
```

### Coding Standards

```markdown
## Coding Standards

- Language preferences (TypeScript > JavaScript)
- Formatting rules
- Naming conventions
```

### Commands/Workflows

```markdown
## Available Commands

| Command | Description |
|---------|-------------|
| /build | Build the project |
| /test | Run tests |
```

### Environment Setup

```markdown
## Development Setup

1. Install dependencies: `npm install`
2. Set environment variables: see `.env.example`
3. Run dev server: `npm run dev`
```

## Frontmatter (Optional)

CLAUDE.md can have YAML frontmatter:

```yaml
---
model: sonnet
allowed-tools: Read, Write, Bash
---
```

## @Imports

Reference other files:

```markdown
# Main CLAUDE.md

@./docs/architecture.md
@./CONTRIBUTING.md
```

The imported files are loaded and included in context.

## Best Practices

1. **Keep it focused**: Don't repeat generic info
2. **Update regularly**: Keep it in sync with project changes
3. **Use imports**: Split large files into modules
4. **Be specific**: Give concrete examples, not vague guidelines
5. **Local for secrets**: Use CLAUDE.local.md for sensitive paths

## Size Recommendations

| File | Recommended Size |
|------|------------------|
| Global CLAUDE.md | 1-2 KB |
| Project CLAUDE.md | 2-5 KB |
| With imports | 5-10 KB total |

Larger files consume more context tokens.
@@ -0,0 +1,137 @@
# MCP Server Configuration Reference

## File Locations

| Location | Scope |
|----------|-------|
| `~/.claude.json` → mcpServers | Global (all projects) |
| `.mcp.json` | Project-specific |
| `.claude/settings.json` → mcpServers | Project-specific |

## Basic Structure

```json
{
  "mcpServers": {
    "server-name": {
      "command": "executable",
      "args": ["arg1", "arg2"],
      "env": {
        "KEY": "value"
      }
    }
  }
}
```

## Server Types

### stdio (Standard I/O)

The most common type; runs as a subprocess:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@anthropic/mcp-server-filesystem", "/path/to/root"],
      "env": {}
    }
  }
}
```

### SSE (Server-Sent Events)

Connects to a remote HTTP server:

```json
{
  "mcpServers": {
    "remote-service": {
      "url": "https://api.example.com/mcp",
      "headers": {
        "Authorization": "Bearer ${API_TOKEN}"
      }
    }
  }
}
```

## Common Patterns

### Filesystem Server

```json
{
  "filesystem": {
    "command": "npx",
    "args": ["-y", "@anthropic/mcp-server-filesystem", "."],
    "env": {}
  }
}
```

### Database Server

```json
{
  "database": {
    "command": "npx",
    "args": ["-y", "@anthropic/mcp-server-postgres"],
    "env": {
      "DATABASE_URL": "${DATABASE_URL}"
    }
  }
}
```

### Slack Server

```json
{
  "slack": {
    "command": "npx",
    "args": ["-y", "@anthropic/mcp-server-slack"],
    "env": {
      "SLACK_BOT_TOKEN": "${SLACK_BOT_TOKEN}",
      "SLACK_TEAM_ID": "${SLACK_TEAM_ID}"
    }
  }
}
```

## Environment Variables

**Best practice**: Use `${VAR_NAME}` syntax instead of hardcoded values:

```jsonc
{
  "env": {
    "API_KEY": "${MY_API_KEY}"       // Good
    // "API_KEY": "sk-abc123..."     // Bad - exposed secret
  }
}
```
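The substitution that `${VAR}` references imply can be sketched as follows, assuming values are pulled from the process environment at server launch. `expandEnvRefs` is a hypothetical helper written for illustration, not Claude Code's actual expansion logic.

```javascript
// Illustrative sketch: replace ${VAR} references in an MCP server's env map
// with values from the process environment. Missing variables become ''.
function expandEnvRefs(env, processEnv = process.env) {
  const expanded = {};
  for (const [key, value] of Object.entries(env)) {
    expanded[key] = value.replace(
      /\$\{([A-Z0-9_]+)\}/g,
      (_, name) => processEnv[name] ?? '',
    );
  }
  return expanded;
}
```

Because the secret lives only in the environment, the config file itself stays safe to commit.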
## Security Considerations

1. **Never hardcode secrets** in .mcp.json
2. **Use environment variable references** (`${VAR}`)
3. **.mcp.json should be gitignored** if it contains any sensitive paths
4. **Check for secrets** before committing

## Global vs Project

### When to use global (~/.claude.json)

- Servers used across all projects
- Personal tools (Slack, email)
- Utility servers (filesystem with a safe root)

### When to use project (.mcp.json)

- Project-specific databases
- Project APIs
- Specialized tools for this codebase
@@ -0,0 +1,27 @@
# CLAUDE.md Quality Criteria

> **Authoritative source:** The detailed scoring rubric, red flags, section detection patterns, and quality signals are maintained in `agents/analyzer-agent.md` under "## CLAUDE.md Quality Rubric (100 points)".

## Quick Reference

| Criterion | Points |
|-----------|--------|
| Commands/Workflows | 20 |
| Architecture Clarity | 20 |
| Non-Obvious Patterns | 15 |
| Conciseness | 15 |
| Currency | 15 |
| Actionability | 15 |

Grades: A (90-100), B (70-89), C (50-69), D (30-49), F (0-29)
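The grade bands above can be expressed as a tiny helper (illustrative only; `gradeForScore` is not part of the plugin):

```javascript
// Map a 0-100 rubric total to the letter grades listed above.
function gradeForScore(score) {
  if (score >= 90) return 'A';
  if (score >= 70) return 'B';
  if (score >= 50) return 'C';
  if (score >= 30) return 'D';
  return 'F';
}
```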
## Assessment Process

1. Read the CLAUDE.md file completely
2. Cross-reference with the actual codebase (check commands, file refs, architecture)
3. Score each criterion independently using the breakdown in analyzer-agent.md
4. Calculate the total and assign a grade
5. List specific issues found
6. Propose concrete improvements

See `agents/analyzer-agent.md` for detailed scoring breakdowns per criterion.
@@ -0,0 +1,169 @@
# .claude/rules/ Directory Reference

## Purpose

The `.claude/rules/` directory allows modular organization of instructions with optional path scoping.

## Location

```
project/
├── .claude/
│   └── rules/
│       ├── code-style.md
│       ├── testing.md
│       └── api.md
└── CLAUDE.md
```

## File Format

Each rule file is a markdown file with optional frontmatter:

```markdown
---
paths: ["src/**/*.ts", "src/**/*.tsx"]
description: TypeScript code style rules
---

# TypeScript Rules

## Formatting
- Use 2-space indentation
- Prefer single quotes

## Types
- Always use explicit types for function parameters
- Avoid `any` type
```

## Frontmatter Options

### paths (Official) / globs (Legacy)

An array of glob patterns that scope when the rule applies.

**Official field name:** `paths:` (as per Claude Code documentation)
**Legacy/alternative:** `globs:` (also supported for backwards compatibility)

Both fields behave identically; use `paths:` for new rules:

```yaml
---
paths: ["src/**/*.ts"]  # Official - only for TypeScript in src/
---
```

```yaml
---
globs: ["src/**/*.ts"]  # Legacy - still works, but prefer paths:
---
```

```yaml
---
paths: ["tests/**/*", "**/*.test.ts"]  # Test files anywhere
---
```

If no paths/globs are specified, the rule applies everywhere.

**Note:** Config-audit normalizes both to `patterns` internally and tracks which field was used via `pattern_source`.

### description

A brief description of what the rule covers:

```yaml
---
description: Code formatting and style preferences
---
```

### alwaysApply

Force the rule to always be included regardless of the current file:

```yaml
---
alwaysApply: true
---
```

## Loading Behavior

1. Rules are loaded when entering relevant directories
2. Glob patterns are matched against the current file/directory
3. Matching rules are included in context
4. Non-matching rules are not loaded
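Steps 2-4 above can be sketched as a tiny matcher. This is illustrative only, under the assumption that `*` stays within a single path segment while `**` crosses segments; `globToRegExp` and `ruleApplies` are hypothetical helpers, not Claude Code's real implementation.

```javascript
// Compile one glob pattern into an anchored RegExp (assumed semantics:
// '*' matches within a segment, '**' matches across segments).
function globToRegExp(glob) {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
    .replace(/\*\*/g, '\u0000')           // protect '**' with a placeholder
    .replace(/\*/g, '[^/]*')              // '*'  -> anything within a segment
    .replace(/\u0000/g, '.*');            // '**' -> anything, including '/'
  return new RegExp(`^${escaped}$`);
}

// A rule applies if any of its patterns match the active file; a rule with
// no paths/globs applies everywhere (step 4 never excludes it).
function ruleApplies(rule, filePath) {
  const patterns = rule.paths ?? rule.globs; // official field first, then legacy
  if (!patterns || patterns.length === 0) return true;
  return patterns.some(p => globToRegExp(p).test(filePath));
}
```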
## Example Rules

### code-style.md

```markdown
---
paths: ["src/**/*"]
description: Source code style
---

# Code Style

- TypeScript > JavaScript
- Explicit types for public API
- Document exported functions
```

### testing.md

```markdown
---
paths: ["tests/**/*", "**/*.test.ts", "**/*.spec.ts"]
description: Testing guidelines
---

# Testing

- Use Jest for unit tests
- Descriptive test names
- Arrange-Act-Assert pattern
```

### api.md

```markdown
---
paths: ["src/api/**/*", "src/routes/**/*"]
description: API development rules
---

# API Guidelines

- RESTful conventions
- Validate all inputs
- Consistent error responses
```

## Best Practices

1. **Split by concern**: One rule file per topic
2. **Use specific globs**: Avoid overly broad patterns
3. **Keep rules focused**: 200-500 words per file
4. **Document purpose**: Use the description frontmatter
5. **Review periodically**: Remove outdated rules

## Migration from CLAUDE.md

To convert from a monolithic CLAUDE.md to rules:

1. Identify distinct sections in CLAUDE.md
2. Create a rule file for each section
3. Add appropriate globs
4. Remove the sections from CLAUDE.md
5. Test that rules load correctly

## Debugging

To see which rules are loaded:
- Check the Claude Code logs
- Rules appear in context when relevant files are active
@@ -0,0 +1,138 @@
# settings.json Schema Reference

## File Locations

| Location | Precedence | Purpose |
|----------|------------|---------|
| `~/.claude/settings.json` | Lowest | User defaults |
| `.claude/settings.json` | Medium | Project shared |
| `.claude/settings.local.json` | High | Project local |
| CLI arguments | Highest | Session only |

## Schema

Annotated example (comments are for illustration; strip them in a real settings.json):

```jsonc
{
  // Default model for the project
  "model": "sonnet",

  // Permission rules
  "permissions": {
    // Tools allowed without prompting
    "allow": [
      "Read",
      "Write",
      "Bash(npm*)",
      "Bash(git*)"
    ],
    // Tools that always require approval
    "deny": [
      "Bash(rm -rf*)"
    ]
  },

  // Environment variables to set
  "env": {
    "NODE_ENV": "development"
  },

  // Hooks configuration
  "hooks": {
    "PreToolUse": [...],
    "PostToolUse": [...],
    "Stop": [...]
  },

  // MCP server configuration (can also be in .mcp.json)
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@anthropic/mcp-server-filesystem"],
      "env": {}
    }
  },

  // Custom agents path
  "agents": "./agents",

  // Plugins to load
  "plugins": [
    "~/plugins/my-plugin"
  ]
}
```

## Key Settings

### model

The default model for this project/user:

```jsonc
{
  "model": "sonnet"  // or "opus", "haiku"
}
```

### permissions

Control tool access:

```json
{
  "permissions": {
    "allow": [
      "Read",
      "Write",
      "Bash(npm *)",
      "Bash(git *)",
      "Task"
    ],
    "deny": [
      "Bash(rm -rf *)",
      "Bash(sudo *)"
    ]
  }
}
```

Patterns support wildcards:
- `*` matches any characters
- `Bash(npm*)` matches `npm install`, `npm test`, etc.
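A minimal sketch of the wildcard semantics above, assuming `*` simply expands to "any characters"; `permissionRuleMatches` is a hypothetical helper written for this example, not Claude Code's real matcher.

```javascript
// Compile a permission rule such as 'Bash(npm*)' into an anchored RegExp:
// everything is matched literally except '*', which becomes '.*'.
function permissionRuleMatches(rule, toolCall) {
  const pattern = rule
    .replace(/[.+?^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
    .replace(/\*/g, '.*');                 // '*' -> any characters
  return new RegExp(`^${pattern}$`).test(toolCall);
}
```

Under this reading, `Bash(npm*)` covers `Bash(npm install)` and `Bash(npm test)` but not `Bash(git push)`.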
### env

Environment variables:

```json
{
  "env": {
    "NODE_ENV": "development",
    "DEBUG": "true"
  }
}
```

### hooks

Event-driven automation:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "command": "echo 'About to run bash'"
      }
    ]
  }
}
```

## Merging Behavior

When multiple settings files exist:
- Objects are merged recursively
- Arrays are replaced, not merged
- Higher precedence wins for conflicts
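The documented merge semantics can be sketched as a recursive helper, assuming plain JSON values: objects merge key by key, while arrays and scalars from the higher-precedence file replace the lower. `mergeSettings` is illustrative only, not Claude Code's implementation.

```javascript
// Merge two settings objects per the rules above: objects recurse,
// arrays and scalars are replaced by the higher-precedence value.
function mergeSettings(lower, higher) {
  const out = { ...lower };
  for (const [key, value] of Object.entries(higher)) {
    const prev = out[key];
    if (value && typeof value === 'object' && !Array.isArray(value) &&
        prev && typeof prev === 'object' && !Array.isArray(prev)) {
      out[key] = mergeSettings(prev, value); // objects: recursive merge
    } else {
      out[key] = value; // arrays and scalars: higher precedence replaces
    }
  }
  return out;
}
```

For example, merging a user file's `permissions.allow` with a project file's replaces the whole array rather than concatenating it.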
124
plugins/config-audit/templates/feature-gap-report.html
Normal file
|
|
@ -0,0 +1,124 @@
|
||||||
|
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Config-Audit Feature Gap Report</title>
  <style>
    :root {
      --green: #22c55e; --yellow: #eab308; --orange: #f97316; --red: #ef4444;
      --blue: #3b82f6; --gray: #6b7280; --bg: #f8fafc; --card: #ffffff;
      --border: #e2e8f0; --text: #1e293b; --muted: #64748b;
    }
    * { margin: 0; padding: 0; box-sizing: border-box; }
    body { font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif; background: var(--bg); color: var(--text); line-height: 1.6; max-width: 900px; margin: 0 auto; padding: 2rem; }
    h1 { font-size: 1.5rem; margin-bottom: 0.5rem; }
    h2 { font-size: 1.2rem; margin: 2rem 0 1rem; border-bottom: 2px solid var(--border); padding-bottom: 0.5rem; }
    .meta { color: var(--muted); font-size: 0.9rem; margin-bottom: 2rem; }

    /* Summary cards */
    .summary { display: grid; grid-template-columns: repeat(4, 1fr); gap: 1rem; margin-bottom: 2rem; }
    .summary-card { background: var(--card); border: 1px solid var(--border); border-radius: 8px; padding: 1rem; text-align: center; }
    .summary-card .label { font-size: 0.75rem; text-transform: uppercase; color: var(--muted); letter-spacing: 0.05em; }
    .summary-card .value { font-size: 1.8rem; font-weight: 700; margin: 0.25rem 0; }
    .summary-card .sub { font-size: 0.8rem; color: var(--muted); }

    /* Grade badge */
    .grade { display: inline-block; width: 2rem; height: 2rem; line-height: 2rem; text-align: center; border-radius: 4px; font-weight: 700; font-size: 0.9rem; color: white; }
    .grade-A { background: var(--green); } .grade-B { background: #84cc16; }
    .grade-C { background: var(--yellow); color: var(--text); } .grade-D { background: var(--orange); }
    .grade-F { background: var(--red); }

    /* Area table */
    table { width: 100%; border-collapse: collapse; margin-bottom: 1.5rem; }
    th, td { padding: 0.5rem 0.75rem; text-align: left; border-bottom: 1px solid var(--border); }
    th { font-size: 0.75rem; text-transform: uppercase; color: var(--muted); letter-spacing: 0.05em; }

    /* Progress bar */
    .progress-container { width: 100%; background: #e2e8f0; border-radius: 4px; height: 8px; }
    .progress-bar { height: 8px; border-radius: 4px; transition: width 0.3s; }
    .progress-bar.green { background: var(--green); } .progress-bar.yellow { background: var(--yellow); }
    .progress-bar.orange { background: var(--orange); } .progress-bar.red { background: var(--red); }

    /* Gap cards */
    .gap-card { background: var(--card); border: 1px solid var(--border); border-radius: 8px; padding: 1rem; margin-bottom: 0.75rem; }
    .gap-header { display: flex; justify-content: space-between; align-items: center; margin-bottom: 0.5rem; }
    .gap-title { font-weight: 600; }
    .badge { display: inline-block; padding: 0.15rem 0.5rem; border-radius: 4px; font-size: 0.75rem; font-weight: 600; }
    .badge-high { background: #fee2e2; color: #991b1b; } .badge-medium { background: #fef3c7; color: #92400e; }
    .badge-low { background: #ecfdf5; color: #065f46; } .badge-info { background: #eff6ff; color: #1e40af; }

    /* Actions */
    .action-card { background: var(--card); border-left: 4px solid var(--blue); border-radius: 0 8px 8px 0; padding: 1rem; margin-bottom: 0.75rem; }
    .action-number { font-size: 0.75rem; color: var(--blue); font-weight: 700; text-transform: uppercase; }
    .action-title { font-weight: 600; margin: 0.25rem 0; }
    .action-meta { font-size: 0.85rem; color: var(--muted); }

    /* Level-up */
    .level-path { display: flex; align-items: center; gap: 0.5rem; margin: 1rem 0; }
    .level-node { padding: 0.5rem 1rem; border-radius: 8px; font-weight: 600; font-size: 0.9rem; }
    .level-current { background: var(--blue); color: white; }
    .level-next { background: var(--border); color: var(--text); border: 2px dashed var(--blue); }
    .level-arrow { color: var(--muted); font-size: 1.2rem; }

    .footer { margin-top: 3rem; padding-top: 1rem; border-top: 1px solid var(--border); color: var(--muted); font-size: 0.8rem; text-align: center; }
  </style>
</head>
<body>

  <h1>Config-Audit Feature Gap Report</h1>
  <div class="meta">{{DATE}} · Config-Audit v1.3.0</div>

  <div class="summary">
    <div class="summary-card">
      <div class="label">Overall</div>
      <div class="value"><span class="grade grade-{{OVERALL_GRADE}}">{{OVERALL_GRADE}}</span></div>
      <div class="sub">{{OVERALL_SCORE}}/100</div>
    </div>
    <div class="summary-card">
      <div class="label">Utilization</div>
      <div class="value">{{UTILIZATION_SCORE}}%</div>
      <div class="sub">{{OVERHANG_SCORE}}% overhang</div>
    </div>
    <div class="summary-card">
      <div class="label">Maturity</div>
      <div class="value">L{{MATURITY_LEVEL}}</div>
      <div class="sub">{{MATURITY_NAME}}</div>
    </div>
    <div class="summary-card">
      <div class="label">Segment</div>
      <div class="value" style="font-size: 1.2rem;">{{SEGMENT}}</div>
      <div class="sub">{{SEGMENT_DESC}}</div>
    </div>
  </div>

  <h2>Area Scores</h2>
  <table>
    <thead>
      <tr><th>Area</th><th>Grade</th><th>Score</th><th>Progress</th><th>Findings</th></tr>
    </thead>
    <tbody>
      {{AREA_ROWS}}
    </tbody>
  </table>

  <h2>Gap Analysis</h2>
  {{GAP_CARDS}}

  <h2>Next Best Actions</h2>
  {{ACTION_CARDS}}

  <h2>Level-Up Path</h2>
  <div class="level-path">
    <div class="level-node level-current">Level {{MATURITY_LEVEL}}: {{MATURITY_NAME}}</div>
    <div class="level-arrow">→</div>
    <div class="level-node level-next">Level {{NEXT_LEVEL}}: {{NEXT_LEVEL_NAME}}</div>
  </div>
  <p>{{LEVEL_UP_REQUIREMENTS}}</p>

  <div class="footer">
    Generated by Config-Audit Plugin · Claude Code Configuration Intelligence
  </div>

</body>
</html>
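The `{{PLACEHOLDER}}` tokens in the template are substituted when a report is generated; a minimal sketch of that substitution (a hypothetical helper, not necessarily the plugin's actual renderer):

```javascript
// Fill {{PLACEHOLDER}} tokens in an HTML template string.
// Unknown tokens are left intact so missing data is visible in the output.
function renderTemplate(template, values) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in values ? String(values[key]) : match
  );
}

const snippet = '<div class="sub">{{OVERALL_SCORE}}/100</div>';
const html = renderTemplate(snippet, { OVERALL_SCORE: 87 });
// html === '<div class="sub">87/100</div>'
```

Multi-valued sections such as `{{AREA_ROWS}}` and `{{GAP_CARDS}}` would be built as HTML strings first and passed in the same way.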
3 plugins/config-audit/tests/fixtures/broken-plugin/.claude-plugin/plugin.json vendored Normal file
@@ -0,0 +1,3 @@
{
  "name": "broken-plugin"
}
8 plugins/config-audit/tests/fixtures/broken-plugin/agents/bad-agent.md vendored Normal file
@@ -0,0 +1,8 @@
---
name: bad-agent
description: Missing model and tools
---

# Bad Agent

No model or tools in frontmatter.
3 plugins/config-audit/tests/fixtures/broken-plugin/commands/no-frontmatter.md vendored Normal file
@@ -0,0 +1,3 @@
# A command without frontmatter

This command has no YAML frontmatter.
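Fixtures like these give the audit something to flag; a minimal sketch of the kind of frontmatter check involved (illustrative only, not config-audit's actual validator):

```javascript
// Check a markdown agent/command file for YAML frontmatter and report
// which required keys are absent. Key extraction is naive on purpose:
// it only looks at top-level "key:" lines, which suffices for fixtures.
function auditMarkdown(source, requiredKeys) {
  const match = source.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return ["missing YAML frontmatter"];
  const present = new Set(
    match[1]
      .split("\n")
      .map((line) => line.split(":")[0].trim())
      .filter(Boolean)
  );
  return requiredKeys
    .filter((key) => !present.has(key))
    .map((key) => `missing frontmatter key: ${key}`);
}

const badAgent = "---\nname: bad-agent\ndescription: Missing model and tools\n---\n\n# Bad Agent\n";
console.log(auditMarkdown(badAgent, ["name", "description", "model", "tools"]));
// → ["missing frontmatter key: model", "missing frontmatter key: tools"]
console.log(auditMarkdown("# A command without frontmatter\n", ["name"]));
// → ["missing YAML frontmatter"]
```

Run against the fixtures above, `bad-agent.md` trips the missing-key path and `no-frontmatter.md` trips the no-frontmatter path.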
60 plugins/config-audit/tests/fixtures/broken-project/.claude/rules/big-unscoped.md vendored Normal file
@@ -0,0 +1,60 @@
Coding Standards and Best Practices

All code must be reviewed before merging to the main branch.
Every function must have a clear, single responsibility.
Variable names must be descriptive and follow camelCase convention.
Constants must be named in UPPER_SNAKE_CASE.
Avoid magic numbers; use named constants instead.
Keep line length under 120 characters.
Use four spaces for indentation, never tabs.
Files must end with a newline character.
Remove trailing whitespace from all lines.
Do not commit commented-out code.
Delete dead code instead of leaving it in place.
Write self-documenting code; comments explain why, not what.
All TODO comments must reference a ticket number.
Do not use abbreviations that are not widely understood.
Use positive variable names; prefer isActive over isNotInactive.
Avoid double negatives in conditional expressions.
Keep nesting levels to a maximum of three.
Extract complex conditions into named boolean variables.
Use early returns to reduce nesting.
Avoid else after return.
Keep functions under 40 lines of code.
Keep files under 300 lines of code.
Split large files into smaller, focused modules.
Use named exports, not default exports.
Group imports: standard library, external, internal.
Sort import groups alphabetically.
Do not use wildcard imports.
Remove unused imports before committing.
Use absolute imports for cross-module dependencies.
Use relative imports only within the same module.
Avoid circular dependencies between modules.
Use barrel files only at module boundaries.
Do not re-export from multiple barrel files.
Prefer named interfaces over inline type definitions.
Use generic types to avoid duplication.
Avoid type assertions unless absolutely necessary.
Do not use ts-ignore comments without explanation.
Enable strict mode in tsconfig.
Use unknown instead of any for unsafe types.
Prefer type narrowing over type assertions.
Use discriminated unions for complex state.
Model optional fields explicitly with undefined.
Avoid null; prefer undefined.
Use optional chaining for nullable access.
Use nullish coalescing for defaults.
Do not mix null and undefined in the same API.
Use enums for finite sets of values.
Prefer const enums for performance-sensitive code.
Do not extend enums dynamically.
Use readonly arrays and objects where mutation is unintended.
Prefer immutable data structures in shared state.
Avoid mutations in pure functions.
Use spread operators for shallow copies.
Use structuredClone for deep copies.
Do not mutate function parameters.
Return new objects from transformation functions.
Use Array methods over imperative loops where readable.
Avoid side effects in map and filter callbacks.