ktg-plugin-marketplace/plugins/config-audit/commands/posture.md
Kjell Tore Guttormsen 79b6e29073 feat(humanizer): update audit/analysis command templates group A [skip-docs]
Wave 5 Step 13. Threads the humanizer vocabulary through five audit/
analysis command templates and adds a shape test that locks the
structure in place.

- commands/posture.md, tokens.md, feature-gap.md (findings-renderers):
  reference userImpactCategory/userActionLanguage/relevanceContext;
  remove hardcoded A/B/C/D/F-to-prose tables (humanizer owns the
  grade-context vocabulary now via the stderr scorecard headline).
- commands/manifest.md, whats-active.md (inventory CLIs): add --raw
  pass-through for CLI-surface consistency. --raw is a no-op in these
  CLIs, but the flag is threaded through so users get uniform behaviour.
- All five files: --raw flag parsed from $ARGUMENTS and passed verbatim
  to the underlying scanner CLI when present.

tests/commands/group-a-shape.test.mjs (new, +5 tests, 767 → 772):
  - structural: every file has a bash invocation block, Read tool
    reference, and --raw/$ARGUMENTS plumbing
  - findings-renderers only: at least one humanized field referenced;
    no hardcoded "[grade] grade is..." prose tables
2026-05-01 19:41:08 +02:00


---
name: config-audit:posture
description: Quick configuration health assessment — scorecard with A-F grades
argument-hint: "[path] [--raw] [--drift] [--plugin-health]"
allowed-tools: Read, Write, Glob, Grep, Bash
model: sonnet
---

# Config-Audit: Health Assessment

Quick, deterministic configuration health scorecard. No agents needed — runs all scanners + scoring in one pass.

## What the user gets

- Health grade (A-F) with plain-language explanation
- Per-area breakdown for 10 quality areas (incl. Token Efficiency, Plugin Hygiene) with grades and actionable notes
- Opportunity count — how many features could enhance their setup (not a grade)
- Grade-appropriate next steps

## Implementation

### Step 1: Determine target and flags

Split $ARGUMENTS into a path and flags. Path is the first non-flag argument (default: current working directory). Resolve relative paths. Recognized flags:

- `--raw` — pass-through to the scanner; produces v5.0.0 verbatim output (bypasses the humanizer). Power-user mode for byte-stable diffs and machine consumption.
- `--drift` — append a "Configuration Drift" section (see Step 5).
- `--plugin-health` — append a "Plugin Health" section (see Step 5).
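The split described above can be sketched as a loop over the whitespace-separated arguments. This is a minimal sketch, not part of the template contract: the sample `$ARGUMENTS` value and the `TARGET`/`RAW_FLAG`/`DRIFT`/`PLUGIN_HEALTH` variable names are illustrative.

```bash
# Illustrative value -- in the real command $ARGUMENTS arrives from the slash-command invocation.
ARGUMENTS="./my-project --drift --raw"

TARGET=""
RAW_FLAG=""
DRIFT=0
PLUGIN_HEALTH=0
for arg in $ARGUMENTS; do            # word-splitting on whitespace is intended here
  case "$arg" in
    --raw)           RAW_FLAG="--raw" ;;
    --drift)         DRIFT=1 ;;
    --plugin-health) PLUGIN_HEALTH=1 ;;
    *) if [ -z "$TARGET" ]; then TARGET="$arg"; fi ;;  # first non-flag token is the path
  esac
done
TARGET="${TARGET:-$PWD}"             # default: current working directory
```

Unrecognized flags fall into the `*` branch; only the first non-flag token is taken as the path, matching the rule above.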

Tell the user:

## Configuration Health

Running quick assessment{if path != cwd: " on `{path}`"}...

### Step 2: Run posture scanner

Run silently — JSON goes to a file, the humanized scorecard prints to stderr (default mode). The humanized stderr scorecard already includes the grade headline and area-score lines in plain language, so render those directly rather than re-deriving prose tables.

```bash
RAW_FLAG=""
if echo "$ARGUMENTS" | grep -q -- "--raw"; then RAW_FLAG="--raw"; fi
node ${CLAUDE_PLUGIN_ROOT}/scanners/posture.mjs <target-path> --output-file /tmp/config-audit-posture-$$.json $RAW_FLAG 2>/tmp/config-audit-posture-stderr-$$.txt; echo $?
```

If exit code is non-zero, tell the user: "Assessment couldn't complete. Check that the path exists and contains Claude Code configuration files."

If --raw was passed, treat the captured stderr as v5.0.0-shape verbatim text and present it as-is in a code block; skip the humanized rendering steps below.

### Step 3: Read and interpret results

Read the JSON output file using the Read tool. Extract:

- `overallGrade`, `opportunityCount`
- `areas[]` — each with `name`, `grade`, `score`, `findingCount`
- `scannerEnvelope.scanners[].findings[]` — when surfacing individual findings, prefer the humanizer-provided fields: `userImpactCategory` (e.g., "Configuration mistake", "Wasted tokens"), `userActionLanguage` (e.g., "Fix this now", "Fix soon", "Optional cleanup"), and `relevanceContext` ("affects-everyone", "affects-this-machine-only", "test-fixture-no-impact"). These let you group and prioritize without hardcoded severity-to-prose mappings.

Also Read the captured stderr file — its body is the humanized scorecard (grade headline, area-score block, opportunity hint). You can present it verbatim or interleave its lines with the JSON-driven table.
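The extraction can be exercised against a stub JSON file. This is a sketch only: the stub's shape is inferred from the field list above (not authoritative), the `/tmp` path is illustrative, and `node` is used for the JSON plumbing since the command already depends on it.

```bash
# Stub posture output with the fields listed above (shape assumed, not authoritative).
cat > /tmp/config-audit-posture-demo.json <<'EOF'
{
  "overallGrade": "B",
  "opportunityCount": 3,
  "areas": [
    { "name": "Token Efficiency", "grade": "A", "score": 97, "findingCount": 0 },
    { "name": "Plugin Hygiene",   "grade": "C", "score": 71, "findingCount": 4 }
  ]
}
EOF

# Pull the headline grade, then render one markdown table row per area.
GRADE=$(node -e 'console.log(require("/tmp/config-audit-posture-demo.json").overallGrade)')
ROWS=$(node -e '
  const d = require("/tmp/config-audit-posture-demo.json");
  for (const a of d.areas) console.log(`| ${a.name} | ${a.grade} | ${a.score}/100 | ${a.findingCount} |`);
')
echo "$ROWS"
```

The same pattern applies to the real `/tmp/config-audit-posture-$$.json` file produced in Step 2.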

### Step 4: Present the scorecard

**Health: {overallGrade}** | {qualityAreaCount} areas scanned

{Use the headline line from the humanized stderr scorecard — it carries grade-context prose already (e.g., " Health: A (97/100) — Healthy setup, only minor polish needed"). Do not re-derive an A/B/C/D prose table here; the humanizer owns that vocabulary.}

### Area Scores

| Area | Grade | Score | Findings | Note |
|------|-------|-------|----------|------|
{for each area EXCEPT Feature Coverage:}
| {name} | {grade} | {score}/100 | {findingCount} | {plain-language note: A="Excellent", B="Good", C="Needs work", D/F="Issues found"} |

{if opportunityCount > 0:}
{opportunityCount} feature opportunities available — run `/config-audit feature-gap` for context-aware recommendations.

### What's next

Group "what's next" suggestions by userActionLanguage from the humanized findings:

- Findings tagged "Fix this now" / "Fix soon" → suggest `/config-audit fix` first, then `/config-audit plan`.
- Findings tagged "Fix when convenient" / "Optional cleanup" → suggest `/config-audit feature-gap` and routine maintenance.
- No high-urgency findings → suggest `/config-audit feature-gap` for opportunities and re-running posture after major config changes.

Avoid hardcoded grade-to-prose ladders here — the humanized scorecard headline already supplies grade context, and userActionLanguage supplies finding-level urgency.
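The urgency routing can be sketched with a couple of stub findings. Hypothetical data only: the finding objects and titles are invented for illustration, only `userActionLanguage` drives the bucketing, and the two suggestion strings mirror the list above.

```bash
# Stub findings -- real ones come from scannerEnvelope.scanners[].findings[] in Step 3.
NEXT=$(node -e '
  const findings = [
    { userActionLanguage: "Fix this now",     title: "broken hook path" },
    { userActionLanguage: "Optional cleanup", title: "stale permission" },
  ];
  const urgent = findings.filter(f => /^Fix (this now|soon)$/.test(f.userActionLanguage));
  console.log(urgent.length > 0 ? "suggest /config-audit fix" : "suggest /config-audit feature-gap");
')
echo "$NEXT"
```

Note there is no grade anywhere in this pass — the routing keys entirely off finding-level urgency, as the paragraph above requires.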

### Step 5: Optional sections

If --drift flag is present:

Run drift comparison silently:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/drift-cli.mjs <target-path> 2>/dev/null
```

Read stdout output and append a "Configuration Drift" section showing what changed since the last baseline.

If --plugin-health flag is present:

Run plugin health scanner silently:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/plugin-health-scanner.mjs <target-path> 2>/dev/null
```

Read stdout output and append a "Plugin Health" section.

If both flags are present, use `scanners/lib/report-generator.mjs` to produce a unified markdown report.

### Step 6: Save to session (if active)

If a config-audit session exists, save results:

```bash
node ${CLAUDE_PLUGIN_ROOT}/scanners/posture.mjs <target-path> --json --output-file ~/.claude/config-audit/sessions/<session-id>/posture.json 2>/dev/null
```
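The "if active" guard can be sketched as a directory check. Assumptions are labeled: the session layout and the `SESSION_ROOT`/`session-001` names are illustrative guesses based on the path above, a temp directory stands in for `~/.claude/config-audit/sessions`, and the scanner invocation is replaced by an `echo` so the sketch stands alone.

```bash
# Demo session root -- the real one would be ~/.claude/config-audit/sessions (assumption).
SESSION_ROOT="${TMPDIR:-/tmp}/config-audit-demo-sessions"
mkdir -p "$SESSION_ROOT/session-001"

# Most recently modified session directory, if any; empty when no session exists.
SESSION_DIR=$(ls -1dt "$SESSION_ROOT"/*/ 2>/dev/null | head -n 1)
if [ -n "$SESSION_DIR" ]; then
  OUT_FILE="${SESSION_DIR%/}/posture.json"
  echo "would save posture JSON to $OUT_FILE"   # real command: the posture.mjs invocation above
fi
```

When no session directory exists, `SESSION_DIR` is empty and the save is skipped silently, matching the "if active" condition.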