ktg-plugin-marketplace/plugins/config-audit/commands/feature-gap.md
Kjell Tore Guttormsen 79b6e29073 feat(humanizer): update audit/analysis command templates group A [skip-docs]
Wave 5 Step 13. Threads the humanizer vocabulary through five audit/
analysis command templates and adds a shape test that locks the
structure in place.

- commands/posture.md, tokens.md, feature-gap.md (findings-renderers):
  reference userImpactCategory/userActionLanguage/relevanceContext;
  remove hardcoded A/B/C/D/F-to-prose tables (humanizer owns the
  grade-context vocabulary now via the stderr scorecard headline).
- commands/manifest.md, whats-active.md (inventory CLIs): add --raw
  pass-through for CLI-surface consistency. --raw is a no-op in these
  CLIs, but the flag is threaded through so users get uniform behaviour.
- All five files: --raw flag parsed from $ARGUMENTS and passed verbatim
  to the underlying scanner CLI when present.

tests/commands/group-a-shape.test.mjs (new, +5 tests, 767 → 772):
  - structural: every file has a bash invocation block, Read tool
    reference, and --raw/$ARGUMENTS plumbing
  - findings-renderers only: at least one humanized field referenced;
    no hardcoded "[grade] grade is..." prose tables
2026-05-01 19:41:08 +02:00


---
name: config-audit:feature-gap
description: Context-aware feature recommendations — what could enhance your setup and why
argument-hint: "[path]"
allowed-tools: Read, Write, Edit, Glob, Grep, Bash, Agent, AskUserQuestion
model: opus
---

# Config-Audit: Feature Opportunities

Context-aware analysis of Claude Code features that could benefit your specific project — with the option to implement selected recommendations on the spot.

## What the user gets

  • Project context detection (language, size, existing configuration)
  • Numbered recommendations grouped by impact (high / worth considering / explore)
  • Each recommendation backed by evidence (Anthropic docs, proven issues)
  • Interactive selection: "Which would you like to implement?"
  • Direct implementation with backup for selected items

## Implementation

### Step 1: Determine target and flags

Split $ARGUMENTS into a path and flags. Path is the first non-flag argument (default: current working directory). Recognized flags:

  • --raw — passed through to the scanner; produces the verbatim v5.0.0 envelope (bypasses the humanizer). When --raw is set, render with the v5.0.0 finding-field shape only — humanizer fields are absent in raw output.
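The split can be sketched in Node (the plugin's scanner runtime). The `ARGUMENTS` value below is a hypothetical example, not anything this command receives verbatim:

```javascript
// Hypothetical raw $ARGUMENTS string, for illustration only
const ARGUMENTS = "./myproj --raw";

const parts = ARGUMENTS.split(/\s+/).filter(Boolean);
const flags = parts.filter((p) => p.startsWith("--"));
const nonFlags = parts.filter((p) => !p.startsWith("--"));

const target = nonFlags[0] ?? ".";                       // first non-flag argument; default: cwd
const rawFlag = flags.includes("--raw") ? "--raw" : "";  // only recognized flag is passed on

console.log(target, rawFlag);
```

Unknown `--` flags simply fall out of `nonFlags` and are ignored rather than treated as the path.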

Tell the user:

## Feature Opportunities

Analyzing which Claude Code features could benefit your workflow...

### Step 2: Create session and run posture

Generate session ID (YYYYMMDD_HHmmss) if no active session exists.

```bash
mkdir -p ~/.claude/config-audit/sessions/{session-id}/findings 2>/dev/null
RAW_FLAG=""
if echo "$ARGUMENTS" | grep -q -- "--raw"; then RAW_FLAG="--raw"; fi
node ${CLAUDE_PLUGIN_ROOT}/scanners/posture.mjs <target-path> --output-file ~/.claude/config-audit/sessions/{session-id}/posture.json $RAW_FLAG 2>/dev/null; echo $?
```

If exit code is non-zero: "Assessment couldn't run. Check that the path exists and contains configuration files."

### Step 3: Read posture data and detect project context

Read ~/.claude/config-audit/sessions/{session-id}/posture.json using the Read tool.

Extract GAP findings from scannerEnvelope.scanners (find scanner with scanner === 'GAP').
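A minimal sketch of the extraction, using a hand-built stand-in for posture.json. The envelope shape is assumed from the description above, not taken from the scanner's schema:

```javascript
// Stand-in for the parsed posture.json; only the fields this step reads are shown
const posture = {
  scannerEnvelope: {
    scanners: [
      { scanner: "SEC", findings: [] },
      { scanner: "GAP", findings: [{ title: "No permissions.deny configured" }] },
    ],
  },
};

// Find the GAP scanner entry; tolerate its absence
const gap = posture.scannerEnvelope.scanners.find((s) => s.scanner === "GAP");
const gapFindings = gap ? gap.findings : [];
console.log(gapFindings.length);
```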

Detect project context:

```bash
test -f <target-path>/package.json && echo "has_package_json" || echo "no_package_json"
ls <target-path>/*.py <target-path>/requirements.txt <target-path>/pyproject.toml 2>/dev/null | head -3
```

### Step 4: Build numbered recommendations

Read ${CLAUDE_PLUGIN_ROOT}/knowledge/gap-closure-templates.md for implementation templates.

Group GAP findings by their humanized fields rather than re-deriving tier-to-prose mappings. In default mode (no --raw) each finding carries:

  • userImpactCategory (e.g., "Missed opportunity") — the impact bucket
  • userActionLanguage (e.g., "Fix soon", "Fix when convenient", "Optional cleanup", "FYI") — the urgency phrasing the rest of the toolchain uses
  • relevanceContext ("affects-everyone" / "affects-this-machine-only" / "test-fixture-no-impact") — the scope so the user knows whether the change touches shared config or just their own machine

Group findings into three sections by userActionLanguage: "Fix this now" + "Fix soon" → High Impact, "Fix when convenient" → Worth Considering, "Optional cleanup" + "FYI" → Explore When Ready. Number sequentially across sections. Skip findings whose relevanceContext === "test-fixture-no-impact" unless the user explicitly asked to include fixtures.
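The grouping, numbering, and fixture-skipping rules above can be sketched as follows. The sample findings are invented; only the field names and bucket labels come from this file:

```javascript
// Invented sample findings carrying the humanizer fields described above
const findings = [
  { title: "A", userActionLanguage: "Fix soon", relevanceContext: "affects-everyone" },
  { title: "B", userActionLanguage: "Fix when convenient", relevanceContext: "affects-this-machine-only" },
  { title: "C", userActionLanguage: "FYI", relevanceContext: "test-fixture-no-impact" },
];

// userActionLanguage value -> report section
const SECTIONS = {
  "Fix this now": "High Impact",
  "Fix soon": "High Impact",
  "Fix when convenient": "Worth Considering",
  "Optional cleanup": "Explore When Ready",
  "FYI": "Explore When Ready",
};

const includeFixtures = false; // user did not explicitly ask for fixtures
const grouped = {};
let n = 0;
for (const f of findings) {
  if (!includeFixtures && f.relevanceContext === "test-fixture-no-impact") continue;
  const section = SECTIONS[f.userActionLanguage];
  (grouped[section] ??= []).push({ n: ++n, ...f }); // numbers run across sections
}
console.log(grouped);
```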

The humanizer has already replaced jargon-heavy strings with plain-language equivalents in title, description, and recommendation — render those verbatim. Do not paraphrase. Do not introduce inline tier-to-prose tables ("Tier 1 means…"); the categories are pre-translated.

If --raw was passed, the v5.0.0 envelope is in effect — humanizer fields are absent. Fall back to grouping by category ("t1"/"t2"/"t3"/"t4") and render title + recommendation directly.

Render shape (default mode):

### High Impact

{For each finding where userActionLanguage is "Fix this now" or "Fix soon":}

**{N}.** {title}
  → {description}
  → {recommendation}
  → Effort: {from gap-closure-templates.md}

### Worth Considering

{For each finding where userActionLanguage is "Fix when convenient":}

**{N}.** {title}
  → {description}
  → {recommendation}

### Explore When Ready

{For each finding where userActionLanguage is "Optional cleanup" or "FYI":}

**{N}.** {title}
  → {recommendation}

Each recommendation MUST have:

  • A number
  • The humanizer-provided title
  • The humanizer-provided description (where shown)
  • An effort estimate looked up from the templates

### Step 5: Ask what to implement

AskUserQuestion:
  question: "Which would you like to implement? I'll create a backup first."
  options:
    - "All high impact (1-2)"
    - "Pick specific: e.g. 1,3,5"
    - "None — just wanted to see the recommendations"

If "None": show the full report location and exit.

If the user picks numbers: parse the selection and proceed to Step 6.
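Parsing a "Pick specific" reply can be sketched as below; the reply string is hypothetical, and stray tokens are dropped rather than treated as errors:

```javascript
// Hypothetical user reply from AskUserQuestion
const reply = "1, 3, 5";

// Keep only well-formed positive integers; de-duplicate and sort
const selected = [...new Set(
  reply
    .split(",")
    .map((s) => s.trim())
    .filter((s) => /^[0-9]+$/.test(s))
    .map(Number),
)].sort((a, b) => a - b);

console.log(selected);
```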

### Step 6: Implement selected recommendations

For each selected recommendation:

  1. Create backup of any files that will be modified:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/fix-cli.mjs <target-path> --json 2>/dev/null
```

Or create manual backup:

```bash
mkdir -p ~/.claude/config-audit/backups/$(date +%Y%m%d_%H%M%S)/files/ 2>/dev/null
```

Copy each file that will be touched.

  2. Apply the template from gap-closure-templates.md. Use the Write or Edit tool to create or modify the relevant configuration file.

  3. Show progress as each item is done:

Implementing 3 recommendations...

✓ 1. permissions.deny — added to .claude/settings.json
✓ 3. Modular CLAUDE.md — created .claude/rules/testing.md, added @import
✓ 5. Keybindings — created ~/.claude/keybindings.json
  4. Verify by re-running posture:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/posture.mjs <target-path> --json --output-file /tmp/config-audit-verify-$$.json 2>/dev/null
```
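The grade-delta line shown in Step 7 can be computed from the before/after posture data like this; the `health` field shape is an assumption for illustration, not the scanner's documented schema:

```javascript
// Invented before/after posture summaries with an assumed health field shape
const before = { health: { grade: "C", score: 62 } };
const after = { health: { grade: "B", score: 78 } };

const delta = after.health.score - before.health.score;
const line = delta !== 0
  ? `Health: ${before.health.grade} -> ${after.health.grade} (+${delta} points)`
  : "Health unchanged";
console.log(line);
```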

### Step 7: Show results

### Done

**{N} recommendations implemented** | Backup created

{If health grade changed:}
Health: {old_grade} → {new_grade} (+{delta} points)

{Show remaining opportunities if any:}
{remaining} more opportunities available — run `/config-audit feature-gap` again anytime.

**Rollback:** If anything looks wrong, run `/config-audit rollback` to restore.

## Implementation Guidelines

When implementing recommendations, be smart about context:

  • permissions.deny: Look at the project for common sensitive paths (.env, secrets/, .git/config, *.pem). Don't just copy a template blindly — check what actually exists.
  • hooks: Start with a simple, useful hook. Don't scaffold 5 hooks at once.
  • path-scoped rules: Look at the project's file structure to determine meaningful scopes (e.g., tests/**/*.ts vs src/**/*.ts).
  • CLAUDE.md modularization: Only suggest splitting if the file is over 100 lines. Read it first to find natural section boundaries.
  • MCP setup: Only relevant if the user actually has external tools to connect. Ask before creating.
  • Custom plugin: Too complex for inline implementation — suggest /config-audit plan instead.

For items that genuinely need user input (e.g., "which MCP servers do you use?"), ask briefly during implementation rather than skipping them.

## Safety

  • Backup mandatory — always create before modifying
  • Show what's changing — the user sees each change as it happens
  • Rollback available — run `/config-audit rollback` at any time
  • Non-destructive — only create new files or add to existing; never delete content