Wave 5 Step 13. Threads the humanizer vocabulary through five audit/
analysis command templates and adds a shape test that locks the
structure in place.
- commands/posture.md, tokens.md, feature-gap.md (findings-renderers):
reference userImpactCategory/userActionLanguage/relevanceContext;
remove hardcoded A/B/C/D/F-to-prose tables (humanizer owns the
grade-context vocabulary now via the stderr scorecard headline).
- commands/manifest.md, whats-active.md (inventory CLIs): add --raw
pass-through for CLI-surface consistency. --raw is a no-op in these
CLIs, but the flag is threaded through so users get uniform behaviour.
- All five files: --raw flag parsed from $ARGUMENTS and passed verbatim
to the underlying scanner CLI when present.
tests/commands/group-a-shape.test.mjs (new, +5 tests, 767 → 772):
- structural: every file has a bash invocation block, Read tool
reference, and --raw/$ARGUMENTS plumbing
- findings-renderers only: at least one humanized field referenced;
no hardcoded "[grade] grade is..." prose tables
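A shell sketch of what those assertions check; the real test is the mjs file above, and the grep patterns here are illustrative proxies for its structural checks:
```bash
# Sketch only: mirrors the shape test's assertions with grep.
# File lists come from the bullets above; FAIL messages are illustrative.
RENDERERS="commands/posture.md commands/tokens.md commands/feature-gap.md"
ALL="$RENDERERS commands/manifest.md commands/whats-active.md"
for f in $ALL; do
  # CLAUDE_PLUGIN_ROOT is a proxy for "has a bash invocation block"
  grep -q 'CLAUDE_PLUGIN_ROOT' "$f" || echo "FAIL: $f: no bash invocation block"
  grep -q 'Read tool' "$f"          || echo "FAIL: $f: no Read tool reference"
  grep -q -- '--raw' "$f"           || echo "FAIL: $f: no --raw plumbing"
  grep -q '\$ARGUMENTS' "$f"        || echo "FAIL: $f: no \$ARGUMENTS plumbing"
done
for f in $RENDERERS; do
  grep -Eq 'userImpactCategory|userActionLanguage|relevanceContext' "$f" \
    || echo "FAIL: $f: no humanized field referenced"
  grep -q 'grade is' "$f" && echo "FAIL: $f: hardcoded grade prose remains"
done
```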
| name | description | argument-hint | allowed-tools | model |
|---|---|---|---|---|
| config-audit:manifest | Show ranked token-source manifest — every CLAUDE.md, plugin, skill, MCP server, and hook ordered DESC by estimated tokens | [path] [--json] | Read, Bash | sonnet |
# Config-Audit: Manifest
Produce a ranked, single-table view of every token source loaded for a given repo path. Where whats-active shows separate tables per category, manifest collapses everything into one ordered list — making it easy to see what's costing the most regardless of category.
## UX Rules (MANDATORY — from .claude/rules/ux-rules.md)
- Never show raw JSON or stderr output. Always use `--output-file` + `2>/dev/null`.
- Narrate before acting. Tell the user what you're about to do.
- Read, don't dump. Read the JSON file and render a formatted table.
- End with context-sensitive next steps.
## Implementation
### Step 1: Parse $ARGUMENTS
First non-flag argument is the path (default .). Recognized flags:
- `--json`: emit raw JSON instead of the rendered table.
- `--raw`: pass-through to the scanner, accepted for CLI-surface consistency with the other config-audit commands. The manifest CLI is data-table only (no findings prose), so `--raw` is a no-op here, but the flag is still threaded through so users get uniform behaviour across commands.
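A fuller sketch of that parse in shell (Step 2 below uses a one-line grep for `--raw`; this just spells out the whole loop, and `TARGET_PATH`/`JSON_MODE`/`RAW_FLAG` are illustrative names, not part of the spec):
```bash
# Sketch: split $ARGUMENTS into a target path plus the recognized flags.
TARGET_PATH="."
JSON_MODE=0
RAW_FLAG=""
for arg in $ARGUMENTS; do
  case "$arg" in
    --json) JSON_MODE=1 ;;
    --raw)  RAW_FLAG="--raw" ;;
    -*)     ;;  # ignore unrecognized flags
    *)      [ "$TARGET_PATH" = "." ] && TARGET_PATH="$arg" ;;  # first non-flag wins
  esac
done
```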
### Step 2: Run the CLI silently
Tell the user: "Building token-source manifest for <path>..."
```bash
TMPFILE="/tmp/ca-manifest-$$.json"
RAW_FLAG=""
if echo "$ARGUMENTS" | grep -q -- "--raw"; then RAW_FLAG="--raw"; fi
node ${CLAUDE_PLUGIN_ROOT}/scanners/manifest.mjs <path> --output-file "$TMPFILE" $RAW_FLAG 2>/dev/null; echo $?
```
Exit code handling:
- `0` → continue
- `3` → tell the user: "Couldn't read configuration. Check that the path exists and is a directory." Stop.
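A sketch of that branch, capturing the code directly instead of echoing it; treating codes other than 0 and 3 as fatal is an assumption, not part of the contract:
```bash
# Sketch: capture the scanner's exit code right after the node invocation.
node ${CLAUDE_PLUGIN_ROOT}/scanners/manifest.mjs <path> --output-file "$TMPFILE" $RAW_FLAG 2>/dev/null
STATUS=$?
if [ "$STATUS" -eq 3 ]; then
  echo "Couldn't read configuration. Check that the path exists and is a directory."
  exit 1
elif [ "$STATUS" -ne 0 ]; then
  # Assumption: codes other than 0 and 3 are undefined here; treat them as fatal too.
  echo "Scanner failed with unexpected exit code $STATUS."
  exit 1
fi
```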
### Step 3: If --json was requested, cat the file and stop
```bash
cat "$TMPFILE"
```
Do NOT render the table in JSON mode.
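As a sketch, using the illustrative `JSON_MODE` variable from the Step 1 parse:
```bash
# Sketch: in --json mode, emit the raw file verbatim and stop before rendering.
if [ "$JSON_MODE" -eq 1 ]; then
  cat "$TMPFILE"
  exit 0
fi
```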
### Step 4: Read JSON and render
Use the Read tool on `$TMPFILE`. Extract `meta.repoPath`, `total`, and `sources[]`. Render the top 20 sources (or fewer if the manifest is shorter):
**Token-source manifest for `<repoPath>`** — ~{total} tokens at startup
| Rank | Kind | Name | Source | Tokens |
|------|------|------|--------|--------|
| 1 | {kind} | `<name>` | {source} | ~{estimated_tokens} |
| ... | ... | ... | ... | ... |
_Estimates assume ~4 chars/token (Claude ballpark). Real token count varies ±15%._
If sources.length > 20, follow the table with: "Showing top 20 of {N} sources. Run with --json to see the full list."
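For reference, the fields consumed above imply the payload shape below; the jq lines are only a sketch of the extraction, since the command itself reads the file with the Read tool per the UX rules. Field names are taken from the table template above:
```bash
# Sketch: the same extraction expressed with jq, to document the JSON shape.
jq -r '.meta.repoPath' "$TMPFILE"   # repo path for the headline
jq -r '.total' "$TMPFILE"           # total estimated startup tokens
jq -r '.sources[:20][] | [.kind, .name, .source, .estimated_tokens] | @tsv' "$TMPFILE"
```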
### Step 5: Suggest next steps
**Next steps:**
- `/config-audit tokens` — Opus-4.7 token-hotspot patterns (cache-breaking, redundant perms, deep imports, MCP budget)
- `/config-audit whats-active` — same data grouped by category, with disable suggestions
- `/config-audit feature-gap` — what *could* improve here, grouped by impact
**Tone:**
- High total (>50k), empathetic: "That's a heavy startup cost; those tokens crowd out anything you'd otherwise spend on the actual conversation."
- Moderate (10–50k), neutral: "Reasonable. Skim the top 5 to see if anything is unexpectedly large."
- Low (<10k), encouraging: "Tight setup. The model has plenty of room for the actual work."
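The thresholds reduce to a simple guard chain (a sketch; `TOTAL` stands in for the `total` field read in Step 4):
```bash
# Sketch: pick the closing tone from the total token estimate.
if [ "$TOTAL" -gt 50000 ]; then
  TONE="empathetic"   # >50k: heavy startup cost
elif [ "$TOTAL" -ge 10000 ]; then
  TONE="neutral"      # 10-50k: reasonable
else
  TONE="encouraging"  # <10k: tight setup
fi
```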