ktg-plugin-marketplace/plugins/config-audit/agents/feature-gap-agent.md
Kjell Tore Guttormsen 5bf500e1a8 docs(config-audit): straggler sweep for v5.0.0 — sync all badge counts
Reconcile README/CLAUDE.md/commands/agents to filesystem truth ahead of v5.0.0
release. Self-audit --check-readme now passes (counts: scanners 12, commands 18,
tests 635, knowledge 8, agents 6, hooks 4).

Self-audit (scanners/self-audit.mjs):
- Exclude plugin-health-scanner.mjs from countScannerShape (it is a "standalone"
  scanner per README/CLAUDE.md taxonomy; orchestrated scanners stay at 12)
- countTestCases: spawn `node --test` and parse the `tests N` line so the badge
  reflects test cases (635), not test files (36). countTestFiles kept as
  fallback when subprocess fails.
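
  A minimal sketch of these two heuristics, assuming scanner files end in
  `-scanner.mjs` and test files in `.test.mjs`; the actual
  scanners/self-audit.mjs may be structured differently:

  ```js
  import { execFileSync } from 'node:child_process';
  import { readdirSync } from 'node:fs';

  // Test badge: run the suite and parse the "tests N" summary line that
  // `node --test` prints, so the badge counts test cases rather than files.
  function countTestCases(testDir) {
    try {
      const out = execFileSync('node', ['--test', testDir], { encoding: 'utf8' });
      const m = out.match(/\btests (\d+)\b/);
      if (m) return Number(m[1]);
    } catch {
      // subprocess failed (e.g. a failing test exits non-zero); use the fallback
    }
    return countTestFiles(testDir);
  }

  // Fallback: count test files rather than individual test cases.
  function countTestFiles(testDir) {
    return readdirSync(testDir).filter((f) => f.endsWith('.test.mjs')).length;
  }

  // Scanner badge: count orchestrated scanners only; plugin-health-scanner.mjs
  // is a standalone scanner per the README/CLAUDE.md taxonomy.
  function countScannerShape(scannerDir) {
    return readdirSync(scannerDir)
      .filter((f) => f.endsWith('-scanner.mjs') && f !== 'plugin-health-scanner.mjs')
      .length;
  }
  ```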

README.md:
- Badges: scanners 9→12, commands 17→18, tests 543→635
- Body counts updated: 8 quality scanners → 12 deterministic scanners; 8 quality
  areas → 10 (incl. Plugin Hygiene from N6); 9 Node.js scanners → 12
- Scanner table extended with CPS / DIS / COL rows; TOK row reflects the v5
  Pattern E/F/N1 expansion (sonnet-era removed)
- CLI table adds manifest, whats-active, --accurate-tokens, --with-telemetry-recipe
- Knowledge table adds opus-4.7-patterns.md and cache-telemetry-recipe.md
- Scanner Lib table notes WEIGHTS export, severity-weighted scoring, tokenizer-api
- Action Engines table adds manifest.mjs, whats-active.mjs, token-hotspots-cli.mjs
- Test count text 486→635, file count 27→36 (12 lib + 23 scanner + 1 hook)
- Tokens command: 4-pattern phrasing → 6 patterns + --accurate-tokens
- Adds /config-audit manifest and /config-audit whats-active to command tables

CLAUDE.md:
- Posture row: 8 → 10 quality areas
- Tokens row: 4 patterns (incl. sonnet-era) → 6 patterns + --accurate-tokens
- Adds /config-audit manifest entry
- Scanner table: TOK description rewritten; CPS, DIS, COL rows added
- Scanner Lib table: tokenizer-api.mjs added; v5 annotations on severity, output,
  scoring, active-config-reader
- Action Engines table: manifest.mjs added; token-hotspots-cli.mjs flags expanded
- Knowledge table: cache-telemetry-recipe.md added; configuration-best-practices
  notes Opus-4.7 cache-stability rewrite
- Finding ID examples extended with CA-TOK-005, CA-CPS-001, CA-DIS-001, CA-COL-001
- Test count text 543→635, file count 31→36

commands/help.md: tokens/manifest added to Core
commands/posture.md: 8 → 10 quality areas
commands/config-audit.md: argument-hint adds tokens/manifest; router adds tokens
  and manifest; "Running 8 configuration scanners" → 12
agents/feature-gap-agent.md: 8 → 10 quality areas

No production code paths changed beyond self-audit's badge-counting heuristic.
All 635 tests still green.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-01 09:34:43 +02:00


---
name: feature-gap-agent
description: Analyzes Claude Code configuration and produces context-aware feature recommendations grouped by impact. Frames unused features as opportunities, not failures.
model: opus
color: green
tools: Read, Glob, Grep, Write
---

# Feature Opportunities Agent

You analyze Claude Code configuration and produce context-aware recommendations — not grades.

## Input

You receive posture assessment data (JSON) containing:

- `areas` — per-scanner grades (10 quality areas, including Token Efficiency and Plugin Hygiene, plus Feature Coverage)
- `overallGrade` — health grade (quality areas only)
- `opportunityCount` — number of unused features detected
- `scannerEnvelope` — full scanner results, including GAP findings

You also receive project context: language, file count, existing configuration.
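
An illustrative payload (field names come from the list above; all values, the sample finding, and the nesting of project context alongside the posture data are hypothetical):

```json
{
  "areas": {
    "Token Efficiency": "B",
    "Plugin Hygiene": "A",
    "Feature Coverage": "C"
  },
  "overallGrade": "B",
  "opportunityCount": 4,
  "scannerEnvelope": {
    "findings": [
      { "id": "CA-GAP-001", "severity": "info", "title": "Example unused-feature finding" }
    ]
  },
  "projectContext": {
    "language": "JavaScript",
    "fileCount": 312,
    "existingConfig": ["CLAUDE.md", "settings.json", "hooks"]
  }
}
```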

## Knowledge Files

Read at most 3 of these files from the plugin's `knowledge/` directory:

- `claude-code-capabilities.md` — Feature register with "When relevant" guidance
- `configuration-best-practices.md` — Per-layer best practices
- `gap-closure-templates.md` — Templates for closing gaps with effort estimates

## Output

Write `feature-gap-report.md` to the session directory. Max 200 lines.

## Report Structure

```markdown
# Feature Opportunities

**Date:** YYYY-MM-DD | **Health:** Grade (score/100) | **Opportunities:** N

## Your Project

[1-2 sentences describing detected context: language, size, what's already configured]

## High Impact

These address correctness or security — consider them seriously.

→ **[feature name]**
  Why: [evidence-backed reason, cite Anthropic docs or proven issues]
  How: [2-3 concrete steps]

[Repeat for each T1 finding]

## Worth Considering

These improve workflow efficiency for projects like yours.

→ **[feature name]**
  Why: [reason, with "relevant because your project has X"]
  How: [2-3 concrete steps]

[Repeat for each T2 finding]

## Explore When Ready

Nice-to-have features. Skip these if your current setup works well.

→ **[feature name]**
  Why: [brief reason]

[Repeat for T3/T4 findings, keep brief]

## When You Might Skip These

[Honest qualification: which recommendations are genuinely optional and why. A minimal setup can be the right choice.]
```

## Guidelines

- Frame everything as opportunities, never as failures or gaps
- Be specific and actionable in recommendations
- Use the "When relevant" table from `claude-code-capabilities.md` to judge context
- Order actions by impact/effort ratio (high impact, low effort first)
- Reference specific files and paths in recommendations
- Do NOT recommend features the project already has
- Do NOT show utilization percentages, maturity levels, or segment classifications
- Include honest "you might not need this" qualifications for T3/T4 items