ktg-plugin-marketplace/plugins/voyage/commands/trekplan.md
Kjell Tore Guttormsen 2e0892cdaf chore(voyage): release v5.0.1 — drop standalone HTML render; print literal /playground document-critique invocation
2026-05-13 13:24:32 +02:00


---
name: trekplan
description: Deep implementation planning from a task brief. Requires --brief or --project. Runs parallel specialized agents, optional external research, and adversarial review.
argument-hint: "--brief <path> | --project <dir> [--fg | --quick | --research <brief> | --decompose <plan> | --export <fmt> <plan>]"
model: opus
allowed-tools: Agent, Read, Glob, Grep, Write, Edit, Bash, AskUserQuestion, TaskCreate, TaskUpdate, TeamCreate, TeamDelete
---

Voyage Local v2.0

Deep, multi-phase implementation planning driven by a task brief. Planning consumes the brief (produced by /trekbrief) and any research briefs referenced in it, then runs specialized exploration agents, synthesis, and adversarial review to produce an executable plan.

v2.0 is a breaking release. The interview phase has been extracted into /trekbrief. This command no longer accepts free-text task descriptions — it requires either --brief <path> or --project <dir>.

Pipeline position:

/trekbrief     →  brief.md
/trekresearch  →  research/*.md
/trekplan      →  plan.md            (this command)
/trekexecute   →  execution

Phase 1 — Parse mode and validate input

Parse $ARGUMENTS for mode flags. Order of precedence:

  1. --export <format> <plan-path> — extract {format} (first token after --export) and {plan-path} (remainder). Valid formats: pr, issue, markdown, headless. Set mode = export.

    If format is not in the valid set:

    Error: unknown export format '{format}'. Valid: pr, issue, markdown, headless
    

    If the plan file does not exist:

    Error: plan file not found: {path}
    
  2. --decompose <plan-path> — extract the plan path. Set mode = decompose. If the plan file does not exist:

    Error: plan file not found: {path}
    
  3. --project <dir> — extract the project directory path.

    • Resolve {dir} (trim trailing slash).
    • Derive implicit flags:
      • --brief {dir}/brief.md
      • Plan destination: {dir}/plan.md
      • Research briefs auto-discovered from {dir}/research/*.md (sorted).
    • If {dir} does not exist or {dir}/brief.md is missing:
      Error: project directory not initialized. Run /trekbrief to create it.
      Missing: {dir}/brief.md
      
    • Set project_dir = {dir}, brief_path = {dir}/brief.md.
    • Validate inputs (soft mode — warnings do not block, errors do):
      # Brief schema sanity check (frontmatter + state machine, soft on body sections)
      node ${CLAUDE_PLUGIN_ROOT}/lib/validators/brief-validator.mjs --soft --json "{dir}/brief.md"
      
      # Research briefs (if any) — drift-warn only, none of these block the run
      [ -d "{dir}/research" ] && \
        node ${CLAUDE_PLUGIN_ROOT}/lib/validators/research-validator.mjs --soft --dir "{dir}/research" --json
      
      # Architecture note discovery (EXTERNAL CONTRACT — drift-WARN, never drift-FAIL)
      node ${CLAUDE_PLUGIN_ROOT}/lib/validators/architecture-discovery.mjs --json "{dir}"
      
      Each call exits 0 on success or with a structured JSON error report on stderr. Surface any warnings in the user-facing summary at Phase 3, but do not abort.
    • Set has_research_brief = true if {dir}/research/*.md matches ≥ 1 file.
    • Read the architecture-discovery JSON output: set has_architecture_note = true if found == true. The discovery module emits warnings if the file lives at a non-canonical path (e.g. architecture-overview.md); preserve them for the user-facing summary. If set, architecture_note_path = {result.overview}. Produced by an external opt-in architect plugin (no longer publicly distributed; the filesystem slot remains available for any compatible producer). Missing file is fine — additive discovery, not required.
  4. --brief <path> — extract the brief path. If the file does not exist:

    Error: brief file not found: {path}
    

    Set brief_path = {path}. Plan destination will be derived in Phase 3 from the brief's slug and date (see Phase 3).

  5. --research <brief.md> [brief2.md] [brief3.md] — collect paths after --research until the next -- flag or a token that does not look like a file path. Maximum 3 briefs. Set has_research_brief = true. Validate each path exists — if any is missing:

    Error: research brief not found: {path}
    

    --research combines with --brief, --project, --fg, and --quick. When combined with --project, the explicit --research briefs are appended to the auto-discovered ones (deduplicated by path).

  6. --fg — accepted as a no-op alias for backwards compatibility. All phases always run in the main session as of v2.4.0.

  7. --quick — set mode = quick. Skip agent swarm; use lightweight Glob/Grep scan and go directly to planning + adversarial review.

  8. --gates — autonomy control. When present, set gates_mode = true. Pause for operator confirmation after Phase 5 (exploration), Phase 7 (synthesis), and Phase 9 (adversarial review). Default gates_mode = false lets phases flow continuously. The flag is consumed by the autonomy-gate state machine via the CLI shim: node ${CLAUDE_PLUGIN_ROOT}/lib/util/autonomy-gate.mjs --state X --event Y --gates {true|false}.

  9. If neither --brief nor --project is present after flag parsing, output usage and stop:

Usage: /trekplan --brief <path-to-brief.md>
       /trekplan --project <project-dir>
       /trekplan --brief <path> --research <research-brief.md>
       /trekplan --project <dir> --fg
       /trekplan --project <dir> --quick
       /trekplan --export <pr|issue|markdown|headless> <plan-path>
       /trekplan --decompose <plan-path>

A brief is required. Produce one with /trekbrief first.

Modes:
  --brief       Plan from a brief file (foreground, v2.4.0+)
  --project     Plan from a project directory (brief.md + research/ auto-resolved)
  --research    Add up to 3 extra research briefs as planning context
  --fg          No-op alias (foreground is the only mode as of v2.4.0)
  --quick       Skip exploration agent swarm; plan directly
  --export      Generate shareable output from an existing plan (no new planning)
  --decompose   Split an existing plan into self-contained headless sessions

Examples:
  /trekplan --project .claude/projects/2026-04-18-jwt-auth
  /trekplan --brief .claude/projects/2026-04-18-jwt-auth/brief.md
  /trekplan --project .claude/projects/2026-04-18-jwt-auth --research extra.md
  /trekplan --project .claude/projects/2026-04-18-jwt-auth --fg
  /trekplan --export pr .claude/plans/trekplan-2026-04-06-rate-limiting.md
  /trekplan --decompose .claude/plans/trekplan-2026-04-06-rate-limiting.md

Migrating from v1.x? See MIGRATION.md in this plugin. The old --spec flag
and free-text interview mode were removed in v2.0.

Do not continue past this step if no brief was provided.
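The precedence rules above can be condensed into a small sketch. This is illustrative only: `parseMode` and its return shape are hypothetical, not part of the plugin's lib/, and the real command parses $ARGUMENTS in-context.

```javascript
// Sketch of the Phase 1 flag-precedence order (hypothetical helper).
const VALID_EXPORT_FORMATS = new Set(["pr", "issue", "markdown", "headless"]);

function parseMode(args) {
  // 1. --export <format> <plan-path> takes precedence over everything else.
  let i = args.indexOf("--export");
  if (i !== -1) {
    const format = args[i + 1];
    if (!VALID_EXPORT_FORMATS.has(format)) {
      throw new Error(
        `unknown export format '${format}'. Valid: pr, issue, markdown, headless`
      );
    }
    return { mode: "export", format, planPath: args.slice(i + 2).join(" ") };
  }
  // 2. --decompose <plan-path>
  i = args.indexOf("--decompose");
  if (i !== -1) return { mode: "decompose", planPath: args[i + 1] };
  // 3. --project implies --brief {dir}/brief.md (trailing slash trimmed).
  i = args.indexOf("--project");
  if (i !== -1) {
    const dir = args[i + 1].replace(/\/$/, "");
    return {
      mode: args.includes("--quick") ? "quick" : "foreground",
      projectDir: dir,
      briefPath: `${dir}/brief.md`,
    };
  }
  // 4. --brief <path> stands alone.
  i = args.indexOf("--brief");
  if (i !== -1) {
    return {
      mode: args.includes("--quick") ? "quick" : "foreground",
      briefPath: args[i + 1],
    };
  }
  // 9. Neither --brief nor --project: caller prints usage and stops.
  return { mode: "usage" };
}
```

Note that export and decompose short-circuit before any brief handling, mirroring the numbered precedence.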

Read the brief

Read the brief file and parse its frontmatter. Extract:

  • task — one-line task description
  • slug — slug for plan filenames
  • project_dir — if present, overrides derived project path (optional)
  • research_topics — N (used as a sanity check)
  • research_status — pending | in_progress | complete | skipped

If research_status == pending and research_topics > 0:

  • Warn the user: "Brief declares {N} research topics but research is still pending. Plan confidence will be lower. Continue anyway?"
  • AskUserQuestion: Continue with low confidence / Cancel — run research first.
  • If cancel: print the research invocations from the brief's "How to continue" section and stop.

Report the detected mode:

Mode:    {foreground | quick | export | decompose}
Brief:   {brief_path}
Project: {project_dir or "-"}
Research: {N local briefs, M extra via --research}

When the input is type:trekreview (Handover 6)

The brief input may be a review.md produced by /trekreview instead of a brief.md produced by /trekbrief. Both files share the same handover slot — type is the discriminator.

If fm.type === 'trekreview':

  1. Skip the research_status gate above (review.md has no research_topics and no Research Plan section).
  2. Extract the findings array from the frontmatter — this is the list of 40-char hex finding-IDs the review surfaced.
  3. Read the body's last fenced json block to recover the full finding objects (the frontmatter only has IDs; the JSON has the severity, file, line, rule_key, title, detail, recommended_action payload).
  4. Filter findings to severity ∈ {BLOCKER, MAJOR}. MINOR and SUGGESTION are skipped for v1.0 plan-input — they are advisory only and would inflate the plan with low-priority churn.
  5. Treat each remaining finding as a plan goal:
    • recommended_action → step intent
    • file → primary Files: target
    • id → goes into the plan's source_findings: frontmatter list
  6. When writing plan.md, populate the frontmatter field source_findings: [<id1>, <id2>, ...] containing exactly the IDs of the BLOCKER + MAJOR findings consumed. The list provides the audit trail back to review.md.
  7. Use block-style YAML for the source_findings: list. The frontmatter parser at lib/util/frontmatter.mjs does not support flow-style arrays; source_findings: [a, b] is broken — use:
    source_findings:
      - 0123456789abcdef0123456789abcdef01234567
      - fedcba9876543210fedcba9876543210fedcba98
    

source_findings: is additive and optional — plans produced from a type: brief input simply omit the field. No plan_version bump is required for this addition (backwards compatible).
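Steps 4 through 7 above can be sketched as one transform. The field names follow the review.md JSON block described earlier; the function name is a hypothetical illustration, not shipped code.

```javascript
// Sketch: filter review findings to actionable severities, map them to
// plan goals, and emit the block-style source_findings YAML audit trail.
function findingsToPlanInput(findings) {
  // MINOR and SUGGESTION are advisory only; drop them (step 4).
  const actionable = findings.filter(
    (f) => f.severity === "BLOCKER" || f.severity === "MAJOR"
  );
  return {
    // Step 5: each finding becomes a plan goal.
    goals: actionable.map((f) => ({
      intent: f.recommended_action, // -> step intent
      file: f.file,                 // -> primary Files: target
    })),
    // Steps 6-7: block-style YAML only. The frontmatter parser does
    // not support flow-style arrays like source_findings: [a, b].
    sourceFindingsYaml: ["source_findings:"]
      .concat(actionable.map((f) => `  - ${f.id}`))
      .join("\n"),
  };
}
```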

Phase 1.5 — Export (runs only when mode = export)

Skip this phase entirely unless mode = export.

Read the plan file. Extract these sections from the plan content:

  • Task description (from Context section)
  • Implementation steps (from Implementation Plan section)
  • Risks (from Risks and Mitigations section)
  • Test strategy (from Test Strategy section, if present)
  • Scope estimate (from Estimated Scope section)

Format: pr

Output a markdown block formatted as a PR description:

## Summary

{2-3 sentence summary of what this change does and why}

## Changes

{Bulleted list of implementation steps, one line each}

## Test plan

{Bulleted checklist from test strategy, formatted as - [ ] items}

## Risks

{Risks from plan, abbreviated to 1 line each}

---
*Generated by trekplan from {plan filename}*

Format: issue

Output a markdown block formatted as an issue comment:

## Implementation plan summary

**Task:** {task description}
**Plan file:** {plan path}
**Scope:** {N files, complexity}

### Proposed approach
{3-5 bullet points from key implementation steps}

### Open questions / risks
{Top 2-3 risks from plan}

---
*Generated by trekplan*

Format: markdown

Output the plan content with internal metadata stripped:

  • Remove the "Revisions" section
  • Remove plan-critic and scope-guardian scores/verdicts
  • Remove [ASSUMPTION] markers (but keep the surrounding sentence)
  • Keep everything else verbatim

Format: headless

This is a shortcut for --decompose. It runs the full session decomposition pipeline and is equivalent to --decompose {plan-path}. Proceed to Phase 1.6 (Decompose) below.


After outputting the formatted block (for pr/issue/markdown), say:

Export complete ({format}). Copy the block above.

Then stop. Do not continue to any subsequent phase.

Phase 1.6 — Decompose (runs only when mode = decompose or export headless)

Skip this phase entirely unless mode = decompose or export format = headless.

Read the plan file. Verify it contains an Implementation Plan section with numbered steps. If no steps are found, report and stop:

Error: plan has no implementation steps. Run /trekplan first to generate a plan.

Determine the output directory from the plan slug:

  • Extract the slug from the plan filename (e.g., trekplan-2026-04-06-auth-refactor → auth-refactor)
  • Output directory: .claude/trekplan-sessions/{slug}/

Launch the session-decomposer agent:

Plan file: {plan path}
Plugin root: ${CLAUDE_PLUGIN_ROOT}
Output directory: .claude/trekplan-sessions/{slug}/

The session-decomposer will:

  1. Parse the plan's steps and their file dependencies
  2. Build a dependency graph between steps
  3. Group steps into sessions of 3-5 steps each
  4. Identify which sessions can run in parallel (waves)
  5. Generate one session spec file per session
  6. Generate a dependency diagram (mermaid)
  7. Generate a launch script (launch.sh)
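Wave identification (step 4) amounts to computing dependency depth: a session's wave is one more than the highest wave among its prerequisites, so dependency-free sessions land in wave 1 and can run in parallel. The sketch below is illustrative; the session-decomposer agent owns the real algorithm.

```javascript
// Sketch of wave assignment. Assumes the session dependency graph is
// acyclic (a valid plan cannot have circular step dependencies).
function assignWaves(sessions) {
  // sessions: [{ id, deps: [ids of prerequisite sessions] }]
  const wave = new Map();
  const resolve = (id) => {
    if (wave.has(id)) return wave.get(id);
    const s = sessions.find((x) => x.id === id);
    const w =
      s.deps.length === 0 ? 1 : 1 + Math.max(...s.deps.map(resolve));
    wave.set(id, w);
    return w;
  };
  sessions.forEach((s) => resolve(s.id));
  return wave; // Map<sessionId, waveNumber>
}
```

Sessions sharing a wave number have no path between them and are safe to launch concurrently.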

When the session-decomposer completes, present the summary to the user:

## Decomposition Complete

**Master plan:** {plan path}
**Sessions:** {N} across {W} waves
**Output:** .claude/trekplan-sessions/{slug}/

### Sessions

| # | Title | Steps | Wave | Parallel |
|---|-------|-------|------|----------|
{session table from decomposer}

### Files generated

- Session specs: .claude/trekplan-sessions/{slug}/session-*.md
- Dependency graph: .claude/trekplan-sessions/{slug}/dependency-graph.md
- Launch script: .claude/trekplan-sessions/{slug}/launch.sh

You can:
- Review individual session specs before running
- Run all sessions: `bash .claude/trekplan-sessions/{slug}/launch.sh`
- Run a single session: `claude -p "$(cat .claude/trekplan-sessions/{slug}/session-1-*.md)"`
- Say **"launch"** to start headless execution from here

If the user says "launch": run the launch script via Bash.

Then stop. Do not continue to any subsequent phase.

Phase 2 — (removed in v2.0)

The interview phase has moved to /trekbrief. This command no longer asks the user any requirements questions — the brief is the authoritative input.

Phase 3 — Destination and context recap (foreground)

Determine the plan destination path:

  • If project_dir is set (from --project or the brief's project_dir frontmatter field): plan destination = {project_dir}/plan.md.
  • Otherwise: derive slug and date — if the brief has frontmatter slug and created, use them; otherwise extract from the brief filename. Destination: .claude/plans/trekplan-{YYYY-MM-DD}-{slug}.md.
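The destination rules above can be sketched as a single helper. This is illustrative: the function name is hypothetical, and the brief-filename pattern used as a fallback is an assumption for the sketch.

```javascript
// Sketch of plan-destination derivation (hypothetical helper).
function planDestination({ projectDir, slug, created, briefPath }) {
  // --project mode (or the brief's project_dir frontmatter field).
  if (projectDir) return `${projectDir}/plan.md`;
  // --brief mode: prefer frontmatter slug/created; otherwise fall back
  // to the filename, assumed shaped like ...-2026-04-18-jwt-auth.md.
  if (!slug || !created) {
    const m = briefPath.match(/(\d{4}-\d{2}-\d{2})-(.+)\.md$/);
    if (m) {
      created = m[1];
      slug = m[2];
    }
  }
  return `.claude/plans/trekplan-${created}-${slug}.md`;
}
```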

Collect all research briefs (from --research flag and auto-discovered {project_dir}/research/*.md).

Report to the user:

Planning pipeline running in foreground.

  Brief:   {brief_path}
  Project: {project_dir or "-"}
  Plan:    {plan destination}
  Research briefs: {N}
  Architecture note: {present | none}

Then continue to the next phase inline.

Why foreground? As of v2.4.0 the planning-orchestrator is no longer spawned as a background agent. The Claude Code harness does not expose the Agent tool to sub-agents, so an orchestrator launched with run_in_background: true cannot spawn the documented exploration swarm (architecture-mapper, task-finder, plan-critic, etc.) and silently degrades to single-context reasoning. Running the phases inline in main context keeps the swarm intact. Use claude -p in a separate terminal window for long-running headless work.


All remaining phases run inline in the main command context.


Phase 4 — Codebase sizing

Determine codebase scale to calibrate agent turns (not agent count).

Run via Bash:

find . -type f \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" -o -name "*.py" -o -name "*.go" -o -name "*.rs" -o -name "*.java" -o -name "*.rb" -o -name "*.c" -o -name "*.cpp" -o -name "*.h" -o -name "*.cs" -o -name "*.swift" -o -name "*.kt" -o -name "*.sh" -o -name "*.md" \) -not -path "*/node_modules/*" -not -path "*/.git/*" -not -path "*/vendor/*" -not -path "*/dist/*" -not -path "*/build/*" | wc -l

Classify:

  • Small (< 50 files)
  • Medium (50-500 files)
  • Large (> 500 files)

Report:

Codebase: {N} source files ({scale}). Deploying exploration agents.
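The thresholds translate directly into a sketch (the function name is illustrative only):

```javascript
// Sketch of the sizing thresholds: < 50 small, 50-500 medium, > 500 large.
function classifyCodebase(fileCount) {
  if (fileCount < 50) return "small";
  if (fileCount <= 500) return "medium";
  return "large";
}
```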

Phase 4b — Brief review

Launch the brief-reviewer agent: Prompt: "Review this task brief for quality: {brief_path}. Check completeness, consistency, testability, scope clarity, and research-plan validity."

Handle the verdict:

  • PROCEED — continue to Phase 5.
  • PROCEED_WITH_RISKS — continue, carry flagged risks as [ASSUMPTION] in the plan.
  • REVISE — present findings and ask the user for clarification (foreground is the only mode). If the user force-stops, carry outstanding findings as [ASSUMPTION] entries.

Phase 5 — Parallel exploration (specialized agents + research)

If mode = quick: Do NOT launch any exploration agents. Instead, run a lightweight file check:

  • Glob for files matching key terms from the brief's task/intent (up to 3 patterns)
  • Grep for function/type definitions matching key terms (up to 3 patterns)

Report findings as:

Quick scan: {N} potentially relevant files found via Glob/Grep.
No agent swarm — proceeding directly to planning.

Then skip Phase 6 (deep-dives) and proceed to Phase 7 (Synthesis) with only the quick-scan results.


All other modes: Launch exploration agents in parallel (all in a single message). Use the specialized agents from the agents/ directory.

All agents run for all codebase sizes. Scale maxTurns by size (small: halved, medium: default, large: default) instead of dropping agents.

| Agent | Small | Medium | Large | Purpose |
|-------|-------|--------|-------|---------|
| architecture-mapper | Yes | Yes | Yes | Codebase structure, patterns, anti-patterns |
| dependency-tracer | Yes | Yes | Yes | Module connections, data flow, side effects |
| risk-assessor | Yes | Yes | Yes | Risks, edge cases, failure modes |
| task-finder | Yes | Yes | Yes | Task-relevant files, functions, types, reuse candidates |
| test-strategist | Yes | Yes | Yes | Test patterns, coverage gaps, strategy |
| git-historian | Yes | Yes | Yes | Recent changes, ownership, hot files, active branches |
| research-scout | Conditional | Conditional | Conditional | External docs (only when unfamiliar tech detected AND no research brief covers it) |
| convention-scanner | No | Yes | Yes | Coding conventions, naming, style, test patterns |

Always launch (all codebase sizes):

architecture-mapper — full codebase structure, tech stack, patterns, anti-patterns. Prompt: "Analyze the architecture of this codebase. The task being planned is: {task}"

dependency-tracer — module connections, data flow, side effects for task-relevant code. Prompt: "Trace dependencies and data flow relevant to this task: {task}. Focus on modules that will be affected by the implementation."

risk-assessor — risks, edge cases, failure modes, technical debt near task area. Prompt: "Assess risks and failure modes for implementing this task: {task}. Check for complexity hotspots, security boundaries, and technical debt in the relevant code."

task-finder — all files, functions, types, and interfaces directly related to the task. Prompt: "Find all code relevant to this task: {task}. Include existing implementations that solve similar problems, API boundaries, database models, configuration files. Report file paths and line numbers for every finding."

test-strategist — existing test patterns, coverage gaps, test strategy. Prompt: "Analyze the test infrastructure and design a test strategy for this task: {task}. Discover existing patterns and identify coverage gaps."

git-historian — recent changes, code ownership, hot files, active branches. Prompt: "Analyze git history relevant to this task: {task}. Report recent changes, ownership, hot files, and active branches that may affect planning."

Launch for medium+ codebases (50+ files):

convention-scanner — use the convention-scanner plugin agent (model: "sonnet") for medium+ codebases only. Provide concrete examples from the codebase, not generic advice.

Conditional: External research

After reading the brief, determine if the task involves technologies, APIs, or libraries that are:

  • Not clearly present in the codebase
  • Being upgraded to a new major version
  • Being used in an unfamiliar way

Skip research-scout for any topic already answered by an attached research brief. If the brief's research_status == complete and all Research Plan topics have corresponding research files, skip research-scout entirely.

If yes (and not covered by attached briefs): launch research-scout in parallel with the other agents. Prompt: "Research the following technologies for this task: {task}. Specific questions: {list specific questions about the technology}. Technologies to research: {list}."

If no external technology is involved or all topics are covered by briefs: skip research-scout and note: "No external research needed — covered by research briefs / well-represented in codebase."

Phase 6 — Targeted deep-dives

After all Phase 5 agents complete, review their results and identify knowledge gaps — areas where exploration was too shallow to plan confidently.

Common reasons for deep-dives:

  • A critical function was found but its implementation details are unclear
  • A dependency chain needs tracing to understand side effects
  • A test pattern was identified but the test infrastructure needs more detail
  • A risk was flagged but the actual impact needs verification

For each significant gap, spawn a targeted deep-dive agent (model: "sonnet", subagent_type: "Explore") with a narrow, specific brief.

Launch up to 3 deep-dive agents in parallel. If no gaps exist, skip this phase and note: "Initial exploration was sufficient — no deep-dives needed."

Phase 7 — Synthesis

After all agents complete (initial + deep-dives + research), synthesize:

  1. Read all agent results carefully
  2. Identify overlaps and contradictions between agents
  3. Build a mental model of the codebase architecture
  4. Catalog reusable code: existing functions, utilities, patterns
  5. Integrate research findings with codebase analysis
  6. Note remaining gaps — things you cannot determine from code or research (these become assumptions in the plan, marked explicitly)
  7. For each finding, track whether it came from codebase analysis or external research — the plan must distinguish these sources

Do NOT write this synthesis to disk. It is internal working context only.

Phase 8 — Deep planning

Schema-drift defense (sealed inline so this contract survives even when agents/planning-orchestrator.md is not implicitly loaded by Opus 4.7).

The plan you write MUST satisfy these regexes. The executor parses with strict regex matching; any deviation breaks parsing and forces a re-plan.

STEP_HEADING_REGEX     = /^### Step (\d+):\s+(.+?)\s*$/m
FORBIDDEN_HEADING_REGEX = /^(?:##|###) (?:Fase|Phase|Stage|Steg) \d+/m

FORBIDDEN headings (parser rejects these — do not emit them under Implementation Plan):

  • ## Fase 1, ### Fase 1 — Norwegian narrative format
  • ## Phase 1, ### Phase 1 — narrative phase format
  • ## Stage 1, ### Stage 1 — narrative stage format
  • ## Steg 1, ### Steg 1 — Norwegian step word
  • ### 1. or ### 1) — numbered without "Step"
  • ### Step 1 — (em-dash instead of colon)
  • Any heading that doesn't match STEP_HEADING_REGEX

REQUIRED step shape — copy this canonical example verbatim, substituting file paths, descriptions, and patterns. Preserve the exact heading format, bullet field names, and Manifest YAML structure. Do not invent new field names. Do not skip fields. Do not nest steps under sub-headings.

### Step 1: Add JWT verification middleware

- **Files:** `src/middleware/jwt.ts`
- **Changes:** Create new middleware function `verifyJWT(req, res, next)` that reads `Authorization: Bearer <token>` header, verifies signature with `process.env.JWT_SECRET`, attaches decoded payload to `req.user`, and returns 401 on invalid/missing token. (new file)
- **Reuses:** `jsonwebtoken.verify()` (already in package.json), pattern from `src/middleware/cors.ts`
- **Test first:**
  - File: `src/middleware/jwt.test.ts` (new)
  - Verifies: valid token attaches user; invalid token returns 401; missing header returns 401
  - Pattern: `src/middleware/cors.test.ts` (follow this style)
- **Verify:** `npm test -- jwt.test.ts` → expected: `3 passing`
- **On failure:** revert — `git checkout -- src/middleware/jwt.ts src/middleware/jwt.test.ts`
- **Checkpoint:** `git commit -m "feat(auth): add JWT verification middleware"`
- **Manifest:**
  ```yaml
  manifest:
    expected_paths:
      - src/middleware/jwt.ts
      - src/middleware/jwt.test.ts
    min_file_count: 2
    commit_message_pattern: "^feat\\(auth\\): add JWT verification middleware$"
    bash_syntax_check: []
    forbidden_paths:
      - src/middleware/cors.ts
    must_contain:
      - path: src/middleware/jwt.ts
        pattern: "verifyJWT"
  ```

Validator self-check (mandatory after writing plan.md): run node ${CLAUDE_PLUGIN_ROOT}/lib/validators/plan-validator.mjs --strict --json {plan_path} and re-revise the plan if it fails. The validator is the source of truth for heading shape, manifest presence, and required-field coverage. If ${CLAUDE_PLUGIN_ROOT} is unset (rare in practice), fall back to the equivalent path under your validators cache or the repo's lib/validators/.

Read the brief file (from --brief or --project). Read the plan template: @${CLAUDE_PLUGIN_ROOT}/templates/plan-template.md

Write the plan following the template structure. The plan MUST include:

Required sections

  1. Context — Why this change is needed. Use the brief's Intent verbatim or tightly paraphrased. The plan's motivation must trace directly to the brief.
  2. Codebase Analysis — Tech stack, patterns, relevant files, reusable code, external tech researched. Every file path must be real (verified during exploration).
  3. Research Sources — If any research briefs or research-scout was used: table of technologies, sources, findings, and confidence levels. Omit if none.
  4. Implementation Plan — Ordered steps. Each step specifies:
    • Exact files to modify or create (with paths)
    • What changes to make and why
    • Which existing code to reuse
    • Dependencies on other steps
    • Whether the step is based on codebase analysis or external research
    • On failure: — recovery action (revert/retry/skip/escalate)
    • Checkpoint: — git commit command after success
  5. Execution Strategy — For plans with > 5 steps: group steps into sessions (3-5 steps each), organize sessions into waves (parallel where independent), specify scope fences per session. Omit for plans with ≤ 5 steps.
  6. Alternatives Considered — At least one alternative approach with pros/cons and reason for rejection.
  7. Risks and Mitigations — From the risk-assessor findings and the brief's open questions. What could go wrong and how to handle it.
  8. Test Strategy — From the test-strategist findings (if available). What tests to write and which patterns to follow.
  9. Verification — Reuse the brief's Success Criteria as the baseline. Each criterion must be an executable command or observable condition.
  10. Estimated Scope — File counts and complexity rating.

Quality standards

  • Every file path in the plan must exist in the codebase (or be explicitly marked as "new file to create")
  • Every "reuses" reference must point to a real function/pattern found during exploration
  • Steps must be ordered by dependency (not by file path or importance)
  • Verification criteria must be concrete and executable
  • The plan must be implementable by someone who has not seen the exploration results — it must stand on its own
  • Research-based decisions must cite their source
  • Every implementation decision must be traceable to a brief section (Intent, Goal, Constraint, Preference, NFR, or Success Criterion)

Write the plan

Use the plan destination computed in Phase 3:

  • --project mode: {project_dir}/plan.md
  • --brief mode: .claude/plans/trekplan-{YYYY-MM-DD}-{slug}.md

Create the parent directory if it does not exist.

Phase 9 — Adversarial review

Launch two review agents in parallel — emit both Agent tool calls in a single assistant message turn (same pattern as Phase 5 exploration). They have zero data dependencies; serializing them wastes 30-60 seconds per run.

plan-critic — adversarial review of the plan. Prompt: "Review this implementation plan for the task: {task}. Plan file: {plan path}. Read it and find every problem — missing steps, wrong ordering, fragile assumptions, missing error handling, scope creep, underspecified steps. Rate each finding as blocker, major, or minor. Write the structured JSON output to /tmp/plan-critic-out.json so the dedup helper can merge with scope-guardian's findings."

scope-guardian — scope alignment check. Prompt: "Check this implementation plan against the brief. Task: {task}. Brief file: {brief_path}. Plan file: {plan path}. Find scope creep (plan does more than the brief requires) and scope gaps (plan misses brief requirements). Check that referenced files and functions exist. Verify that every Success Criterion in the brief is covered by the plan's Verification section. Write structured JSON output to /tmp/scope-guardian-out.json."

After both complete, run an inline dedup pass:

node ${CLAUDE_PLUGIN_ROOT}/lib/review/plan-review-dedup.mjs \
  --plan-critic /tmp/plan-critic-out.json \
  --scope-guardian /tmp/scope-guardian-out.json \
  > /tmp/plan-review-merged.json

The merged array attributes each finding to [plan-critic, scope-guardian] when both reviewers raised the same issue (exact match on file:line:rule_key, or Jaccard ≥ 0.7 on text tokens). Revise the plan once for the merged set, not twice for the duplicates. Source: research/05 R1 + R2.
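The matching rule quoted above (exact file:line:rule_key match, or Jaccard ≥ 0.7 on text tokens) can be sketched as follows. The shipped implementation lives in lib/review/plan-review-dedup.mjs; these helper names are illustrative.

```javascript
// Jaccard similarity over lowercase word tokens: |A ∩ B| / |A ∪ B|.
function jaccard(a, b) {
  const ta = new Set(a.toLowerCase().split(/\W+/).filter(Boolean));
  const tb = new Set(b.toLowerCase().split(/\W+/).filter(Boolean));
  const inter = [...ta].filter((t) => tb.has(t)).length;
  const union = new Set([...ta, ...tb]).size;
  return union === 0 ? 0 : inter / union;
}

// Two findings are duplicates on an exact location+rule match, or when
// their titles are token-similar enough (threshold from the rule above).
function sameFinding(f1, f2) {
  const key = (f) => `${f.file}:${f.line}:${f.rule_key}`;
  return key(f1) === key(f2) || jaccard(f1.title, f2.title) >= 0.7;
}
```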

After both complete:

  • If blockers are found: revise the plan to address them. Add a "Revisions" note at the bottom of the plan listing what changed and why.
  • If only major issues: revise to address them. Add revisions note.
  • If only minor issues or clean: proceed without changes. Note the review result in the plan.

Phase 10 — Present and refine

Present a summary to the user:

## Voyage Complete

**Task:** {task description}
**Mode:** {foreground | quick}
**Brief:** {brief_path}
**Project:** {project_dir or "-"}
**Plan:** {plan_path}
**Exploration:** {N} agents deployed ({N} specialized + {N} deep-dives + {research status})
**Scope:** {N} files to modify, {N} to create — {complexity}

### Key decisions
- {Decision 1 and rationale}
- {Decision 2 and rationale}

### Implementation steps ({N} total)
1. {Step 1 summary}
2. {Step 2 summary}
...

### Research findings
{Summary of external research + attached research briefs, or "No external research used."}

### Adversarial review
**Plan critic:** {Summary — blockers/majors/minors found, how addressed}
**Scope guardian:** {Summary — creep/gaps found, how addressed}

You can:
- Ask questions or request changes to refine the plan
- Say **"execute"** to start implementing
- Say **"execute with team"** to implement with parallel Agent Team (if eligible)
- Say **"save"** to keep the plan for later

If the user asks questions or requests changes:

  • Update the plan file in-place
  • Show what changed
  • Re-present the summary

Print the annotation invocation

After the plan summary, print this block verbatim (substituting only {plan_path} with the absolute path). The /playground command must appear literally — operators copy-paste it directly into Claude. It points at the official claude-plugins-official playground skill, which loads its document-critique template, reads plan.md, generates per-line suggestions, and writes a single self-contained HTML file that opens in the browser. The HTML has the plan on the left (nicely formatted, line-numbered), suggestion cards on the right (Approve / Reject / Comment), and a "Copy Prompt" button at the bottom that gathers everything marked into one prompt. Paste that prompt back into Claude — Claude then revises plan.md freehand from the notes.

────────────────────────────────────────────────────────────────────
To review and annotate this plan, copy and paste this into Claude:

    /playground build a document-critique playground for {plan_path}

That builds a self-contained HTML file with the plan on the left,
per-line approve/reject/comment annotations on the right, and a
"Copy Prompt" button at the bottom. Copy the generated prompt, paste
it back here, and Claude revises plan.md from your notes.
────────────────────────────────────────────────────────────────────

Phase 11 — Handoff

"save" / "later" / "done"

Confirm the plan and brief file locations and exit.

"execute" / "go" / "start"

Begin implementing the plan step by step in this session. Follow the plan exactly. Mark each step complete as you go.

"execute with team" / "team"

Before creating a team, verify eligibility:

  1. Count implementation steps that are independent (no dependency on each other) AND touch different files/modules
  2. If fewer than 3 independent steps: inform the user and fall back to sequential execution. "The plan has fewer than 3 independent steps — sequential execution is more efficient."

If eligible:

  1. Present the proposed team split: which steps go to which team member
  2. Ask for confirmation: "Create Agent Team with {N} members? (yes/no)"
  3. If confirmed: create the team with TeamCreate, assign step clusters to each member. Use isolation: "worktree" on each team member agent so they work in isolated git worktrees — this prevents file conflicts during parallel implementation. Coordinate execution and clean up with TeamDelete when done.
  4. If TeamCreate fails (tool not available): fall back to sequential execution and notify the user

Phase 12 — Session tracking

After the plan is presented (Phase 10) or after handoff (Phase 11), write a session record to ${CLAUDE_PLUGIN_DATA}/trekplan-stats.jsonl (create the file if it does not exist).

Record format (one JSON line):

{
  "ts": "{ISO-8601 timestamp}",
  "task": "{task description (first 100 chars)}",
  "mode": "{default|fg|quick}",
  "slug": "{plan slug}",
  "brief_path": "{brief_path}",
  "project_dir": "{project_dir or null}",
  "codebase_size": "{small|medium|large}",
  "codebase_files": {N},
  "agents_deployed": {N},
  "deep_dives": {N},
  "research_briefs_used": {N},
  "research_scout_used": {true|false},
  "critic_verdict": "{BLOCK|REVISE|PASS}",
  "guardian_verdict": "{ALIGNED|CREEP|GAP|MIXED}",
  "outcome": "{execute|execute_team|save|refine}"
}

If ${CLAUDE_PLUGIN_DATA} is not set or not writable, skip tracking silently. Never let tracking failures block the main workflow.
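The tracking write above can be sketched as follows. This is a hypothetical illustration only — the command itself is an LLM prompt, not a script — and the fallback temp dir exists purely so the sketch runs; the real command skips tracking when `CLAUDE_PLUGIN_DATA` is unset:

```javascript
// Hypothetical sketch of the append-only tracking write.
import { appendFileSync, mkdtempSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Fall back to a temp dir so the sketch is runnable; the real command
// skips tracking entirely when CLAUDE_PLUGIN_DATA is unset.
const dataDir =
  process.env.CLAUDE_PLUGIN_DATA ?? mkdtempSync(join(tmpdir(), "voyage-"));
const statsFile = join(dataDir, "trekplan-stats.jsonl");

// Illustrative record — the real one carries the full field set above.
const record = {
  ts: new Date().toISOString(),
  task: "add auth",
  mode: "quick",
  outcome: "save",
};

try {
  // One JSON object per line, append-only (JSONL).
  appendFileSync(statsFile, JSON.stringify(record) + "\n");
} catch {
  // Tracking failures never block the main workflow.
}
```

The `try`/`catch` with an empty handler is the point: a failed write is silently dropped rather than surfaced to the operator.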

Profile (v4.1)

Accepts --profile <name> where <name> is economy, balanced, premium, or a custom profile under voyage-profiles/. Default: premium.

Resolution order (per lib/profiles/resolver.mjs):

  1. --profile flag (source: flag)
  2. VOYAGE_PROFILE env-var (source: env)
  3. premium default (source: default)

The selected profile drives phase_models.plan (the model used for planning) and parallel_agents_min/max (the exploration swarm size), and it is recorded in plan.md frontmatter so /trekexecute and /trekcontinue can inherit it across the pipeline.
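The precedence above can be sketched as a single function. This is an illustrative sketch only — the actual resolver lives in lib/profiles/resolver.mjs and its names and shapes may differ:

```javascript
// Hypothetical sketch of the profile resolution order; the real logic
// lives in lib/profiles/resolver.mjs and may differ.
function resolveProfile(flagValue, env = process.env) {
  // 1. --profile flag wins outright
  if (flagValue) return { name: flagValue, source: "flag" };
  // 2. VOYAGE_PROFILE env-var is the fallback
  if (env.VOYAGE_PROFILE) return { name: env.VOYAGE_PROFILE, source: "env" };
  // 3. premium is the default
  return { name: "premium", source: "default" };
}
```

Returning the source alongside the name is what lets stats records report profile_source for auditing.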

Examples:

/trekplan --profile economy --project .claude/projects/2026-05-09-add-auth
/trekplan --profile balanced --brief brief.md
VOYAGE_PROFILE=balanced /trekplan --project ...

Stats records emit profile, phase_models, parallel_agents, and profile_source so operators can audit which profile drove which session.

Hard rules

  • Brief-driven: Every plan decision must trace back to a section of the brief (Intent, Goal, Constraint, Preference, NFR, Success Criterion). If a step has no brief basis, it is scope creep — flag it or remove it.
  • No interview: Never ask the user requirements questions. If the brief is inadequate, stop and ask the user to run /trekbrief again.
  • Scope: Only explore the current working directory and its subdirectories. Never read files outside the repo (no ~/.env, no credentials, no other repos).
  • Cost: Sonnet for all agents (exploration, deep-dives, research, critics). Opus only runs in the main thread for synthesis and planning.
  • Privacy: Never log, store, or repeat file contents that look like secrets, tokens, or credentials. Never log prompt text.
  • No premature execution: Do not modify any project files until the user explicitly approves the plan.
  • Plan stands alone: The plan file must be understandable without access to the exploration results. Include all necessary context.
  • Honesty: If exploration reveals the task is trivial (single file, obvious change), say so. Do not inflate the plan to justify the process. Suggest the user implement it directly.
  • Adaptive: Never spawn more agents than the codebase warrants. A 10-file project does not need 7 exploration agents. Scale down.
  • Research transparency: Always distinguish codebase-derived decisions from research-derived decisions in the plan.