---
name: trekbrief
description: Interactive interview that produces a task brief with explicit research plan. Feeds /trekresearch and /trekplan. Optionally orchestrates the full pipeline end-to-end.
argument-hint: "[--quick] <task description>"
model: opus
allowed-tools: Agent, Read, Glob, Grep, Write, Edit, Bash, AskUserQuestion
---

Ultrabrief Local v2.1

Interactive requirements-gathering command. Produces a task brief — a structured markdown file that declares intent, goal, constraints, and an explicit research plan with copy-paste-ready /trekresearch commands.

Pipeline position:

/trekbrief  →  brief.md                         (this command)
/trekresearch --project <dir>  →  research/*.md
/trekplan --project <dir>  →  plan.md
/trekexecute --project <dir>  →  execution

The brief is the contract between the user's intent and /trekplan. Every decision the plan makes must trace back to content in the brief.

This command is always interactive. There is no background mode — the interview requires user input. After the brief is written, the command optionally orchestrates the rest of the pipeline (research + plan) in foreground if the user opts in.

Phase 1 — Parse mode and validate input

Parse $ARGUMENTS:

  1. If arguments start with --quick: set mode = quick. The interview starts more compactly (fewer opening probes per section) but still escalates automatically if quality gates fail. There is no hard cap on question count — quality drives the loop, not a counter. Strip the flag; remainder is the task description.

  2. Otherwise: mode = default. Interview probes each section until the completeness gate (Phase 3) and brief-review gate (Phase 4) both pass.

  3. --gates flag (autonomy control, may combine with any mode): when present, set gates_mode = true. This re-enables approval pauses at every phase boundary in the downstream pipeline (research, plan, execute) and at every wave in the executor. Default gates_mode = false means auto mode runs continuously until the main-merge gate (which is the one boundary that ALWAYS pauses, regardless of gates_mode). Strip the flag from $ARGUMENTS before further parsing. The flag is consumed by the autonomy-gate state machine via the CLI shim: node ${CLAUDE_PLUGIN_ROOT}/lib/util/autonomy-gate.mjs --state X --event Y --gates {true|false}.
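A minimal sketch of this flag handling, written in the same Node/ESM style as the plugin's .mjs helpers; the function name and the exact strip regex are illustrative assumptions, not the command's actual implementation:

```js
// Hypothetical sketch: parse mode, gates flag, and task description
// from the raw $ARGUMENTS string.
function parseTrekbriefArgs(rawArguments) {
  let args = rawArguments.trim();
  let gatesMode = false;
  let mode = "default";

  // --gates may combine with any mode; strip it before mode detection.
  if (/(^|\s)--gates(?=\s|$)/.test(args)) {
    gatesMode = true;
    args = args.replace(/(^|\s)--gates(?=\s|$)/g, " ").trim();
  }

  // --quick must lead the remaining argument string.
  if (args.startsWith("--quick")) {
    mode = "quick";
    args = args.slice("--quick".length).trim();
  }

  return { mode, gatesMode, taskDescription: args };
}
```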

If no task description is provided, output usage and stop:

Usage: /trekbrief <task description>
       /trekbrief --quick <task description>

Modes:
  default       Dynamic interview until quality gates pass — brief with research plan
  --quick       Compact start; still escalates on weak sections — brief with research plan

Examples:
  /trekbrief Add user authentication with JWT tokens
  /trekbrief --quick Add rate limiting to the API
  /trekbrief Migrate from Express to Fastify

Report:

Mode: {default | quick}
Task: {task description}

Phase 2 — Generate slug and create project directory

Generate a slug from the task description: first 3-4 meaningful words, lowercase, hyphens. Example: "Migrate from Express to Fastify" → fastify-migration.
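For illustration, a slug helper in the plugin's Node style. It is a literal word-extraction sketch; the stop-word list is an assumption, and the documented example ("fastify-migration") shows the slug may also be reworded rather than extracted verbatim:

```js
// Hypothetical slug sketch: first few meaningful words, lowercased, hyphenated.
const STOP_WORDS = new Set(["a", "an", "the", "from", "to", "with", "for", "of", "and", "in", "on"]);

function makeSlug(task) {
  return task
    .toLowerCase()
    .split(/[^a-z0-9]+/)
    .filter((word) => word && !STOP_WORDS.has(word))
    .slice(0, 4)
    .join("-");
}

// makeSlug("Migrate from Express to Fastify") -> "migrate-express-fastify"
```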

Set today's date as YYYY-MM-DD (UTC).

Create the project directory:

PROJECT_DIR=".claude/projects/{YYYY-MM-DD}-{slug}"
mkdir -p "$PROJECT_DIR/research"

Report:

Project directory: .claude/projects/{YYYY-MM-DD}-{slug}/

If the directory already exists and is non-empty, warn and ask:

"Directory {path} exists. Overwrite, reuse (keep existing files), or pick new slug?"

Use AskUserQuestion with three options. If "pick new slug", ask for a new slug and restart Phase 2.

Phase 3 — Completeness loop

Phase 3 is a section-driven completeness loop. Instead of a numbered question list, maintain an internal state of brief sections and keep asking until every required section has substantive content. Quality drives the loop — there is no hard cap on question count.

Use AskUserQuestion for every question. Ask one question at a time. Never dump multiple questions.

Internal state

Track this structure in memory as the loop runs:

state = {
  intent:           { content: "", probes: 0 },           # required
  goal:             { content: "", probes: 0 },           # required
  success_criteria: { content: [], probes: 0 },           # required
  research_plan:    { topics: [], probes: 0 },            # required
  non_goals:        { content: [], probes: 0 },           # optional
  constraints:      { content: [], probes: 0 },           # optional
  preferences:      { content: [], probes: 0 },           # optional
  nfrs:             { content: [], probes: 0 },           # optional
  prior_attempts:   { content: "", probes: 0 },           # optional
  question_history: []                                    # list of questions asked
}

content is raw user answers merged; probes is how many times this section has been asked; question_history prevents re-asking the same variant twice.

Required sections (initial-signal gate)

Four sections MUST have substantive content before exiting Phase 3:

  1. Intent — full sentence or paragraph (not a single word or phrase)
  2. Goal — full sentence or paragraph
  3. Success Criteria — at least one concrete, testable item
  4. Research Plan — either ≥ 1 topic probed, OR the user has explicitly confirmed "no external research needed"

"Substantive" means: non-empty, not a trivial one-word reply, not "I don't know" without a recorded assumption. The strict falsifiability check happens in Phase 4 (brief-review gate); Phase 3 is just the initial-signal bar.

Optional sections (Non-Goals, Constraints, Preferences, NFRs, Prior Attempts) do not gate exit. If they remain empty after the required sections pass, they will be recorded as "Not discussed — no constraints assumed" in Phase 4's draft.
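A sketch of this gate as a predicate, assuming the state shape above; optional sections deliberately do not appear in it, and the confirmed_none field for the "no external research" confirmation is an assumed addition to that state:

```js
// Hypothetical initial-signal gate. The real "substantive" judgment is
// conversational; this only mirrors the mechanical floor described above.
function isSubstantive(text) {
  const t = (text || "").trim();
  if (!t) return false;                                // non-empty
  if (t.split(/\s+/).length < 2) return false;         // not a one-word reply
  if (/^i don'?t know\.?$/i.test(t)) return false;     // needs a recorded assumption
  return true;
}

function initialSignalGatePasses(state) {
  return (
    isSubstantive(state.intent.content) &&
    isSubstantive(state.goal.content) &&
    state.success_criteria.content.length >= 1 &&
    (state.research_plan.topics.length >= 1 ||
      state.research_plan.confirmed_none === true)     // explicit "no research"
  );
}
```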

Question bank (per section)

Pick the next question from the section's bank based on content and probes. Wording must stay conversational — only the selection is section-driven, not the tone.

Intent (required):

  • Anchor (probes=0, content empty): "Why are we doing this? What is the motivation, the user need, or the strategic context behind the task?"
  • Follow-up (probes≥1, content present but shallow): "What happens if we do nothing? Who is affected?"
  • Sharpen (user mentioned a symptom): "You mentioned {X}. Is {X} the symptom or the underlying cause?"

Goal (required):

  • Anchor: "Describe the end state in 13 sentences — specific enough to disagree with."
  • Follow-up: "How would you recognize this is done when looking at the UI / API / codebase?"

Success Criteria (required):

  • Anchor: "How do we verify it is actually done? List 24 concrete, testable conditions — commands to run, observations, or metrics."
  • Sharpen (criterion is vague): "'{quoted criterion}' is subjective. Which command, observation, or metric would prove this is met?"
  • Quantify (performance/quality claim): "You mentioned it should be {fast/reliable/secure}. What number or threshold counts as success?"

Research Plan (required, strictest):

  • Anchor (no topics yet): "Are there technologies, libraries, or decisions in this task you do not have solid current knowledge of? Examples might be library choice, a protocol, or a security pattern."
  • Per-topic sharpen (topic exists but incomplete): "For topic '{title}': which parts of the plan depend on the answer? What confidence level do you need — high, medium, or low?"
  • Scope question: "Is '{topic}' answerable from the existing codebase, from external docs, or both?"
  • Confirm none (user refuses all topics): "Confirming: no external research needed — you already know everything the plan will depend on?"

Non-Goals (optional):

  • Anchor: "What is explicitly NOT in scope? This prevents scope-guardian from flagging gaps for things we deliberately don't do."

Constraints (optional):

  • Anchor: "Technical, time, or resource constraints the plan must respect? Dependencies, compatibility, deadlines, or budget."
  • Sharpen: "You mentioned {deadline / budget / compatibility}. Is it hard or guidance?"

Preferences (optional):

  • Anchor: "Preferences for libraries, patterns, or architectural style?"

NFRs (optional):

  • Anchor: "Performance, security, accessibility, or scalability targets? Quantified wherever possible."

Prior Attempts (optional):

  • Anchor: "Has this been attempted before? What worked or failed?"

Selection rule

On each loop iteration:

  1. Compute the next section to probe:
    • If any required section is below the initial-signal gate → pick the weakest required section in this priority order: Intent → Goal → Success Criteria → Research Plan.
    • Else if an optional section is clearly missing and likely material to scope (heuristic: the task description hints at constraints or NFRs) → probe it at most once.
    • Else: exit Phase 3.
  2. Within the chosen section, pick the question variant:
    • If probes == 0 and content is empty → Anchor.
    • If content exists but is shallow → Follow-up or Sharpen.
    • If the section is Research Plan and topics exist → iterate per-topic sharpen across incomplete topics.
  3. Ensure the exact question is NOT already in question_history. If it is, pick the next variant or skip to the next weakest section.
  4. Ask via AskUserQuestion. Append question to history. Increment probes.
  5. Record the answer into content. Never overwrite — merge.
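To make the rule concrete, a sketch of one loop iteration, reusing the gate helpers sketched earlier. The bank shape ({section: {anchor: [...], sharpen: [...]}}) is an assumption:

```js
// Hypothetical selection sketch for one Phase 3 iteration.
const REQUIRED_ORDER = ["intent", "goal", "success_criteria", "research_plan"];

function selectNextQuestion(state, bank, sectionMeetsGate) {
  // 1. Weakest required section in fixed priority order; null means exit Phase 3.
  const section = REQUIRED_ORDER.find((key) => !sectionMeetsGate(key, state));
  if (!section) return null;

  // 2. Anchor on an untouched section, otherwise a follow-up/sharpen variant.
  const variant = state[section].probes === 0 ? "anchor" : "sharpen";

  // 3. Never re-ask: fall back to unused variants before giving up.
  const unused = [...bank[section][variant], ...bank[section].anchor]
    .filter((q) => !state.question_history.includes(q));
  return unused.length ? { section, question: unused[0] } : null;
}
```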

Research topic identification

As the user answers Intent, Goal, or Success Criteria, listen for:

  • Unfamiliar technologies — libraries, frameworks, protocols not clearly present in the codebase
  • Version upgrades — migrating to a new major version
  • Security-sensitive decisions — auth, crypto, data handling
  • Architectural choices — pattern X vs Y, library A vs B
  • Unknown integrations — third-party APIs, external services
  • Compliance / legal — GDPR, accessibility, industry regulations

When you hear one, add a candidate topic to research_plan.topics with only a title and why-it-matters. Probe it on the next Research Plan iteration using the per-topic sharpen question to fill in:

  • Research question (must end in ?)
  • Required for plan steps
  • Scope (local / external / both)
  • Confidence needed (high / medium / low)
  • Estimated cost (quick / standard / deep)

If the user says "I know this" to a candidate topic, remove it from the list. Trust the user. If no topics emerge after probing, the user confirms "no external research needed" → research_plan gate passes with 0 topics.

Quick mode adjustments

If mode = quick:

  • For optional sections, cap probes at 1 each. Do not revisit optional sections during Phase 3.
  • Required sections still have no probe cap — quality gate still applies.
  • Prefer Anchor variants over Sharpen on the first pass.

Force-stop path

If the user says "skip", "stop asking", "just proceed", "enough", or similar, break the loop immediately:

  • Mark any required sections still below the initial-signal gate as { incomplete_forced_stop: true } in state.
  • Proceed to Phase 4 with a note that the brief will carry a reduced confidence flag.

Exit condition

Exit Phase 3 when:

  • All four required sections meet the initial-signal gate, OR
  • The user has force-stopped.

Report:

Phase 3 complete: {N} questions asked across {M} sections.
Proceeding to draft and review.

Phase 4 — Draft, review, and revise

Phase 4 runs a draft → brief-reviewer → revise loop. The draft is not written to disk until the brief-review quality gate passes (or the iteration cap is hit). This ensures the brief that reaches /trekplan has already survived a critical review.

Read the brief template first: @${CLAUDE_PLUGIN_ROOT}/templates/trekbrief-template.md

Loop bound

Maximum 3 review iterations. This bounds cost in the worst case while leaving room for two rounds of targeted follow-ups.

Iteration step-by-step

Step 4a — Draft in memory

Build the brief text from Phase 3 state by filling the template:

  • Frontmatter: populate task, slug, project_dir, research_topics (count of topics), research_status: pending, auto_research: false (will update in Phase 5 if user opts in), interview_turns (total questions asked across Phase 3 + Phase 4), source: interview.
  • Intent: expand the user's motivation into 3-5 sentences. Load-bearing.
  • Goal: concrete end state.
  • Non-Goals: from state, or "- None explicitly stated" bullet if empty.
  • Constraints / Preferences / NFRs: from state, or "Not discussed — no constraints assumed" note if empty.
  • Success Criteria: falsifiable commands/observations from state.
  • Research Plan: one ### Topic N: {title} section per topic with the full structure from the template. If 0 topics: write the "No external research needed — user confirmed solid knowledge of all plan dependencies" note.
  • Open Questions / Assumptions: from any "I don't know" answers recorded during Phase 3, plus implicit gaps.
  • Prior Attempts: from state, or "None — fresh task."

Step 4b — Write draft to disk

Write the draft to {PROJECT_DIR}/brief.md.draft (not brief.md — the final file is only written after the gate passes).

Step 4c — Launch brief-reviewer

Launch the brief-reviewer agent (foreground, blocking) with the prompt:

"Review this task brief for quality: {PROJECT_DIR}/brief.md.draft. Check completeness, consistency, testability, scope clarity, and research-plan validity. Report findings, verdict, and the required machine-readable JSON block."

Step 4d — Parse JSON scores

Parse the agent's output. Locate the last fenced json block. Extract per-dimension scores:

review = {
  completeness:   { score, gaps },
  consistency:    { score, issues },
  testability:    { score, weak_criteria },
  scope_clarity:  { score, unclear_sections },
  research_plan:  { score, invalid_topics },
  verdict:        "PROCEED | PROCEED_WITH_RISKS | REVISE"
}

JSON fallback: if the JSON block is missing, invalid, or a dimension is missing, treat all dimensions as score: 3 and set the verdict from the prose verdict if present, otherwise PROCEED_WITH_RISKS. Emit an internal note that the reviewer output was degraded. This ensures the loop never deadlocks on a parser error.
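A sketch of this parse-with-fallback, assuming the reviewer's JSON arrives in a standard fenced json block:

```js
// Hypothetical Step 4d parser: last fenced json block wins; degraded
// output falls back to mid scores so the loop never deadlocks.
const DIMENSIONS = ["completeness", "consistency", "testability", "scope_clarity", "research_plan"];

function parseReview(agentOutput) {
  const fences = [...agentOutput.matchAll(/```json\s*([\s\S]*?)```/g)];
  let review = null;
  try {
    review = JSON.parse(fences.at(-1)[1]); // throws if no fence or bad JSON
  } catch {
    review = null;
  }
  if (!review || DIMENSIONS.some((d) => review[d]?.score === undefined)) {
    review = Object.fromEntries(DIMENSIONS.map((d) => [d, { score: 3 }]));
    review.verdict = "PROCEED_WITH_RISKS"; // or the prose verdict, if present
    review.degraded = true;                // internal "reviewer output degraded" note
  }
  return review;
}
```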

Step 4e — Gate evaluation

The gate passes when all of the following are true:

  • completeness.score ≥ 4
  • consistency.score ≥ 4
  • testability.score ≥ 4
  • scope_clarity.score ≥ 4
  • research_plan.score == 5

(Research Plan requires a perfect score because its format is checked mechanically: ends in ?, Required for plan steps filled, scope is one of local | external | both, confidence is high | medium | low. Anything less means at least one topic is malformed and planning will stumble.)
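Both checks sketched as predicates; the topic field names are assumptions drawn from the per-topic structure listed in Phase 3:

```js
// Hypothetical Step 4e gate predicate.
function gatePasses(review) {
  return (
    review.completeness.score >= 4 &&
    review.consistency.score >= 4 &&
    review.testability.score >= 4 &&
    review.scope_clarity.score >= 4 &&
    review.research_plan.score === 5    // perfect score: format is mechanical
  );
}

// Hypothetical mechanical topic-format check behind the perfect-score rule.
function topicWellFormed(topic) {
  return (
    topic.question.trim().endsWith("?") &&
    topic.required_for_plan_steps.length > 0 &&
    ["local", "external", "both"].includes(topic.scope) &&
    ["high", "medium", "low"].includes(topic.confidence)
  );
}
```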

If gate passes:

  1. Move brief.md.draft → brief.md (atomic rename).
  2. If an atomic rename is not possible on the OS, write brief.md fresh and delete the draft file.
  3. Break the loop and proceed to Step 4g.
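A minimal Node sketch of that pass path; on POSIX the rename is atomic within one filesystem, and the copy-then-delete branch covers platforms where it is not. The helper name is illustrative:

```js
import { renameSync, copyFileSync, unlinkSync } from "node:fs";

// Hypothetical finalize helper: atomic rename with a documented fallback.
function finalizeBrief(draftPath, finalPath) {
  try {
    renameSync(draftPath, finalPath);   // atomic on the same filesystem
  } catch {
    copyFileSync(draftPath, finalPath); // write brief.md fresh...
    unlinkSync(draftPath);              // ...then delete the draft
  }
}
```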

If gate fails AND iteration count < 3:

  1. Identify the weakest dimension (lowest score; tie broken by priority: research_plan > testability > completeness > consistency > scope_clarity).
  2. Generate a targeted follow-up question from the dimension's detail field (gaps / issues / weak_criteria / unclear_sections / invalid_topics). Example generators:
    • completeness.gaps: ["Non-Goals empty, unclear if deliberate"] → "You did not specify anything out-of-scope. Is that deliberate, or are there things we should explicitly exclude?"
    • testability.weak_criteria: ["'system should be fast'"] → "'System should be fast' is not falsifiable. Which metric or threshold proves this is met — e.g., p95 < 200ms, or throughput ≥ X requests/sec?"
    • research_plan.invalid_topics: [{"topic":"JWT","issue":"Required for plan steps empty"}] → "For research topic 'JWT': which plan steps depend on the answer? Give one or two concrete kinds of step (e.g., 'library selection', 'threat model', 'migration strategy')."
  3. Ask via AskUserQuestion. Record the answer into Phase 3 state.
  4. Return to Step 4a with incremented iteration count. The reviewer sees an updated draft, so you MUST re-read the brief and regenerate the review each iteration — do not reuse stale scores.
  5. When launching the reviewer on iteration 2 or 3, include prior questions in the prompt so it does not produce circular follow-ups:

    "Questions already asked during this interview: {list from question_history}. Focus on issues that remain after those answers — do not re-raise gaps that have already been addressed."

If gate fails AND iteration count == 3 (loop exhausted):

  1. Move brief.md.draft → brief.md.
  2. Add brief_quality: partial to the frontmatter (edit the file post-rename — insert the key above the closing ---).
  3. Add a ## Brief Quality section near the end with the failing dimensions and their detail arrays from the final review, formatted:
    ## Brief Quality
    
    Review loop exhausted after 3 iterations. The following dimensions
    did not reach the pass threshold:
    
    - **Research Plan (score 2/5):** Topic 'JWT library' missing
      Required-for-plan-steps field.
    - **Testability (score 3/5):** Success criterion "works correctly"
      is not falsifiable.
    
    Downstream planning will treat these as reduced-confidence areas.
    
  4. Break the loop and proceed to Step 4g.
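The frontmatter edit in step 2 can be sketched in a few lines of Node; the helper name is illustrative:

```js
import { readFileSync, writeFileSync } from "node:fs";

// Hypothetical sketch: insert brief_quality: partial above the closing ---.
function markBriefPartial(briefPath) {
  const lines = readFileSync(briefPath, "utf8").split("\n");
  const close = lines.indexOf("---", 1); // closing delimiter of the YAML block
  if (close === -1) return;              // no frontmatter found; leave file alone
  lines.splice(close, 0, "brief_quality: partial");
  writeFileSync(briefPath, lines.join("\n"));
}
```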

Step 4f — Force-stop handling

If during any AskUserQuestion in Step 4e the user says "stop", "skip", "enough", "just write it", or similar, do NOT exit the loop immediately. Instead, surface the current review findings in plain text:

Brief-reviewer would flag these issues:
- Research Plan (score 2/5): Topic 'JWT library choice' missing Required-for-plan-steps field.
- Testability (score 3/5): Success criterion "works correctly" is not falsifiable.

Continue anyway? The plan will have lower confidence in these areas.

Then ask via AskUserQuestion:

| Option | Action |
|--------|--------|
| Answer one more follow-up | Return to Step 4e with the current weakest-dimension question. |
| Stop now (accept partial brief) | Finalize brief with brief_quality: partial and the ## Brief Quality section (same path as iteration-cap exhaustion). Break loop. |

The force-stop path is distinct from a silent iteration cap: the user sees exactly which dimensions are weak and makes an informed choice.

Step 4g — Finalize

After the loop exits (pass, cap, or force-stop), ensure:

  • brief.md exists at {PROJECT_DIR}/brief.md.
  • brief.md.draft no longer exists.
  • If the loop ended without a clean pass, frontmatter contains brief_quality: partial and a ## Brief Quality section exists.
  • If the loop ended with a clean pass, brief_quality is either absent or set to complete.

Populate the "How to continue" footer with the actual project path and topic questions.

Schema sanity check (since v3.1.0): before reporting, run the brief validator. This catches frontmatter typos and state-machine inconsistencies the brief-reviewer rubric does not check (e.g. research_status: skipped with research_topics: 3 and no brief_quality: partial).

node ${CLAUDE_PLUGIN_ROOT}/lib/validators/brief-validator.mjs --json "{PROJECT_DIR}/brief.md"

If the validator returns errors, report them to the user and offer to re-enter Phase 4 with the validator's hints in scope. If only warnings, note them in the final report.

Render to HTML + link (annotation via /playground): after brief.md is final, render it to a self-contained HTML view in the same directory:

node ${CLAUDE_PLUGIN_ROOT}/scripts/render-artifact.mjs "{PROJECT_DIR}/brief.md"

This writes {PROJECT_DIR}/brief.html — zero-network, design-system-styled (frontmatter folded into a <details> block). If it exits non-zero, surface a one-line warning and continue — the rendered view is a convenience, not a gate.

Report:

Brief written: {PROJECT_DIR}/brief.md
Brief rendered: file://{abs path to brief.html}
Review iterations: {1..3}
Final quality: {complete | partial}
Validator: {PASS | warnings(N)}
Research topics identified: {N}

To annotate: open brief.html, then run the `/playground` plugin
(document-critique template) on brief.md and paste the generated
prompt back here. Claude revises brief.md freehand from your notes.

Phase 5 — Auto-orchestration opt-in (if research_topics > 0)

Skip this phase if research_topics = 0. With no topics there is nothing to auto-orchestrate, so proceed directly to Phase 7 (stats tracking).

Ask the user via AskUserQuestion:

Question: "You have {N} research topic(s). How do you want to proceed?"

| Option | Description |
|--------|-------------|
| Manual (default) | Print the commands. You run /trekresearch and /trekplan yourself, choosing depth per topic. |
| Auto (managed by Claude Code) | I run all {N} research topics sequentially in foreground, then automatically trigger /trekplan when research completes. This session blocks until the plan is ready. |

Manual path (default)

Output:

## Brief complete

Project: {PROJECT_DIR}/
Brief:   {PROJECT_DIR}/brief.md
Research topics: {N}

Next steps (run in order or parallel):

{For each topic:}
  /trekresearch --project {PROJECT_DIR} --external "{topic question}"

Then:
  /trekplan --project {PROJECT_DIR}

Then:
  /trekexecute --project {PROJECT_DIR}

Stop. Do not continue to Phase 6; Phase 7 stats tracking still applies, with auto_result: manual.

Auto path

Set auto_research: true in the brief's frontmatter (edit the file).

Emit the brief-approved lifecycle event so downstream observability sees the pipeline kick off (consumed by lib/stats/event-emit.mjs):

node ${CLAUDE_PLUGIN_ROOT}/lib/stats/event-emit.mjs \
  --event brief-approved \
  --payload "{\"project\":\"${PROJECT_DIR}\"}"

If gates_mode == true: pause here via AskUserQuestion — "Auto-mode confirmed. Proceed to research now? (yes/no)". If the user answers no, fall back to the manual path output and stop. Otherwise proceed to Phase 6.

If gates_mode == false (default in auto): proceed directly to Phase 6. The chain stops only at the main-merge gate (see commands/trekexecute.md Phase 8).

Proceed to Phase 6.

Phase 6 — Auto research dispatch (auto path only)

Runs only when user opted into auto mode.

Step 6a — Confirm proceed

Tell the user auto mode will run in foreground and block the session, then confirm via AskUserQuestion:

Question: "Auto mode runs {N} research topic(s) sequentially and then the plan — all in foreground. This session blocks until the plan is ready. Continue?"

| Option | Action |
|--------|--------|
| Continue — auto | Proceed. |
| Cancel — do manual | Revert to manual path (print commands, stop). |

If cancelled → fall back to manual path output and stop.

Step 6b — Run research topics sequentially (inline)

Set research_status: in_progress in the brief's frontmatter.

For each research topic (index i = 1 .. N), invoke /trekresearch inline in this main-context session:

/trekresearch --project {PROJECT_DIR} {--external | --local | (none)} "{topic i question}"

Pass the scope flag that matches the topic's scope hint. Wait for each invocation to finish writing the research brief at {PROJECT_DIR}/research/{NN}-{topic-slug}.md before moving to the next topic.

Why sequential inline instead of parallel background? Background orchestrator-agents cannot spawn the research swarm — the Claude Code harness does not expose the Agent tool to sub-agents, so a background run silently degrades to single-context reasoning without WebSearch / Tavily / WebFetch / Gemini (see v2.4.0 release notes). Running each research pass inline in main context keeps the swarm intact. For true parallel execution, use claude -p invocations in separate terminal windows.

Step 6c — Verify all briefs landed

After the last topic completes, verify each research brief file exists:

ls -1 {PROJECT_DIR}/research/*.md | wc -l

Expected count: N. If any are missing, report and ask the user how to proceed (retry, skip missing topic, cancel).

Update brief frontmatter: research_status: complete.

Step 6d — Auto-trigger planning (inline foreground)

Invoke the planning command inline in this session:

/trekplan --project {PROJECT_DIR}

The planning pipeline runs all phases (exploration, synthesis, review) in main context. Wait for the plan to be written to {PROJECT_DIR}/plan.md before continuing.

Step 6e — Report completion

When the planning-orchestrator finishes, present:

## Ultrabrief + Ultraresearch + Voyage Complete (auto mode)

**Project:** {PROJECT_DIR}/
**Brief:** {PROJECT_DIR}/brief.md
**Research briefs:** {N} in {PROJECT_DIR}/research/
**Plan:** {PROJECT_DIR}/plan.md

### Pipeline summary

| Step | Status |
|------|--------|
| Brief | Complete ({interview_turns} interview turns) |
| Research | Complete ({N} topics, sequential foreground) |
| Plan | Complete ({steps} steps, critic: {verdict}) |

Next:
  /trekexecute --project {PROJECT_DIR}

Or:
  /trekexecute --dry-run --project {PROJECT_DIR}   # preview
  /trekexecute --validate --project {PROJECT_DIR}  # schema check

Phase 7 — Stats tracking

Append one record to ${CLAUDE_PLUGIN_DATA}/trekbrief-stats.jsonl:

{
  "ts": "{ISO-8601}",
  "task": "{task description (first 100 chars)}",
  "slug": "{slug}",
  "mode": "{default | quick}",
  "interview_turns": {N},
  "review_iterations": {1..3},
  "brief_quality": "{complete | partial}",
  "research_topics": {N},
  "auto_research": {true | false},
  "auto_result": "{completed | cancelled | failed | manual}",
  "project_dir": "{path}"
}

If ${CLAUDE_PLUGIN_DATA} is not set or not writable, skip silently. Never let stats failures block the workflow.
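A sketch honoring that never-block rule; the helper name is illustrative:

```js
import { appendFileSync } from "node:fs";
import { join } from "node:path";

// Hypothetical Phase 7 append: any failure is swallowed silently.
function appendStats(record) {
  const dataDir = process.env.CLAUDE_PLUGIN_DATA;
  if (!dataDir) return;                  // env var not set -> skip
  try {
    appendFileSync(join(dataDir, "trekbrief-stats.jsonl"), JSON.stringify(record) + "\n");
  } catch {
    // not writable -> never let stats failures block the workflow
  }
}
```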

Profile (v4.1)

Accepts --profile <name> where <name> is one of economy, balanced, premium, or a custom profile under voyage-profiles/. Default: premium.

Resolution order (per lib/profiles/resolver.mjs):

  1. --profile flag (source: flag)
  2. VOYAGE_PROFILE environment variable (source: env)
  3. premium default (source: default)
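A sketch mirroring only this precedence; the real logic lives in lib/profiles/resolver.mjs:

```js
// Hypothetical precedence sketch: flag > env > default.
function resolveProfile(flagValue) {
  if (flagValue) return { profile: flagValue, source: "flag" };
  if (process.env.VOYAGE_PROFILE) {
    return { profile: process.env.VOYAGE_PROFILE, source: "env" };
  }
  return { profile: "premium", source: "default" };
}
```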

Examples:

/trekbrief --profile economy
VOYAGE_PROFILE=balanced /trekbrief

Stats records emit profile, phase_models, and profile_source so operators can audit which profile drove which session.

Hard rules

  1. Interactive only. This command requires user input. There is no --fg or background mode — the interview cannot run headless.
  2. Brief is the contract. Every section must have substantive content or an explicit "Not discussed" note. No empty sections.
  3. Intent is load-bearing. Do not accept a one-line intent. Expand with the user until motivation is clear — the plan and every review agent will trace decisions back to this.
  4. Research topics must be answerable. Each topic's research question must be phrased so /trekresearch can answer it. If a topic is too vague, split or reformulate before writing.
  5. Never invent research topics the user did not agree to. Topics come from the interview. If the user says "I know this", respect it.
  6. Project dir is the single source of truth. Every artifact (brief, research briefs, plan, progress) lives in one project directory. Never scatter files across .claude/research/, .claude/plans/, etc.
  7. Auto mode blocks foreground. If the user opts into auto, this session waits for research + planning to complete. Document this in the opt-in question.
  8. Quality gates, not question counts. Phase 3 and Phase 4 are quality-gated loops; do not enforce a hard cap on interview questions. The brief-review gate (Phase 4) caps at 3 review iterations to bound cost, but Phase 3 has no cap — required-section content drives exit.
  9. Never write brief.md while the review gate is still pending. Draft lives in brief.md.draft until the loop terminates. A caller that sees brief.md must be able to trust that Phase 4 finished.
  10. Privacy: never log prompt text, secrets, or credentials.