| name | description | argument-hint | model | allowed-tools |
|---|---|---|---|---|
| trekbrief | Interactive interview that produces a task brief with explicit research plan. Feeds /trekresearch and /trekplan. Optionally orchestrates the full pipeline end-to-end. | [--quick] <task description> | opus | Agent, Read, Glob, Grep, Write, Edit, Bash, AskUserQuestion |
Ultrabrief Local v2.1
Interactive requirements-gathering command. Produces a task brief — a
structured markdown file that declares intent, goal, constraints, and an
explicit research plan with copy-paste-ready /trekresearch commands.
Pipeline position:
/trekbrief → brief.md (this command)
/trekresearch --project <dir> → research/*.md
/trekplan --project <dir> → plan.md
/trekexecute --project <dir> → execution
The brief is the contract between the user's intent and /trekplan.
Every decision the plan makes must trace back to content in the brief.
This command is always interactive. There is no background mode — the interview requires user input. After the brief is written, the command optionally orchestrates the rest of the pipeline (research + plan) in foreground if the user opts in.
Phase 1 — Parse mode and validate input
Parse $ARGUMENTS:
- If arguments start with `--quick`: set mode = quick. The interview starts more compactly (fewer opening probes per section) but still escalates automatically if quality gates fail. There is no hard cap on question count — quality drives the loop, not a counter. Strip the flag; the remainder is the task description.
- Otherwise: mode = default. The interview probes each section until the completeness gate (Phase 3) and brief-review gate (Phase 4) both pass.
- `--gates` flag (autonomy control, may combine with any mode): when present, set `gates_mode = true`. This re-enables approval pauses at every phase boundary in the downstream pipeline (research, plan, execute) and at every wave in the executor. The default `gates_mode = false` means auto mode runs continuously until the main-merge gate (the one boundary that ALWAYS pauses, regardless of `gates_mode`). Strip the flag from `$ARGUMENTS` before further parsing. The flag is consumed by the autonomy-gate state machine via the CLI shim: `node ${CLAUDE_PLUGIN_ROOT}/lib/util/autonomy-gate.mjs --state X --event Y --gates {true|false}`.
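The flag handling above can be sketched as a small parser. This is a hypothetical illustration — the command strips flags from `$ARGUMENTS` itself, and the order-insensitive loop is a simplification of the "arguments start with `--quick`" rule:

```javascript
// Hypothetical sketch of Phase 1 flag parsing. Order-insensitive
// simplification — the spec requires --quick at the start.
function parseArgs(argv) {
  let mode = "default";
  let gatesMode = false;
  const rest = [];
  for (const a of argv) {
    if (a === "--quick") mode = "quick";        // compact interview start
    else if (a === "--gates") gatesMode = true; // re-enable approval pauses
    else rest.push(a);                          // everything else is the task
  }
  return { mode, gates_mode: gatesMode, task: rest.join(" ") };
}
```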
If no task description is provided, output usage and stop:
Usage: /trekbrief <task description>
/trekbrief --quick <task description>
Modes:
default Dynamic interview until quality gates pass — brief with research plan
--quick Compact start; still escalates on weak sections — brief with research plan
Examples:
/trekbrief Add user authentication with JWT tokens
/trekbrief --quick Add rate limiting to the API
/trekbrief Migrate from Express to Fastify
Report:
Mode: {default | quick}
Task: {task description}
Phase 2 — Generate slug and create project directory
Generate a slug from the task description: first 3-4 meaningful words,
lowercase, hyphens. Example: "Migrate from Express to Fastify" → fastify-migration.
Set today's date as YYYY-MM-DD (UTC).
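A mechanical approximation of the two steps above. The real slug uses judgment ("Migrate from Express to Fastify" → `fastify-migration` reorders the words semantically), so the stop-word list and the 4-word window below are assumptions, not spec:

```javascript
// Mechanical approximation of the slug + project-dir derivation.
// Stop-word list and 4-word window are assumptions; the command
// picks "meaningful words" by judgment.
const STOP_WORDS = new Set(["a", "an", "the", "from", "to", "of", "for", "with"]);

function slugify(task) {
  return task
    .toLowerCase()
    .split(/[^a-z0-9]+/)
    .filter((w) => w && !STOP_WORDS.has(w))
    .slice(0, 4)
    .join("-");
}

function projectDir(task, now = new Date()) {
  const ymd = now.toISOString().slice(0, 10); // YYYY-MM-DD, UTC
  return `.claude/projects/${ymd}-${slugify(task)}`;
}
```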
Create the project directory:
PROJECT_DIR=".claude/projects/{YYYY-MM-DD}-{slug}"
mkdir -p "$PROJECT_DIR/research"
Report:
Project directory: .claude/projects/{YYYY-MM-DD}-{slug}/
If the directory already exists and is non-empty, warn and ask:
"Directory {path} exists. Overwrite, reuse (keep existing files), or pick new slug?"
Use AskUserQuestion with three options. If "pick new slug", ask for a
new slug and restart Phase 2.
Phase 3 — Completeness loop
Phase 3 is a section-driven completeness loop. Instead of a numbered question list, maintain an internal state of brief sections and keep asking until every required section has substantive content. Quality drives the loop — there is no hard cap on question count.
Use AskUserQuestion for every question. Ask one question at a time.
Never dump multiple questions.
Internal state
Track this structure in memory as the loop runs:
state = {
intent: { content: "", probes: 0 }, # required
goal: { content: "", probes: 0 }, # required
success_criteria: { content: [], probes: 0 }, # required
research_plan: { topics: [], probes: 0 }, # required
non_goals: { content: [], probes: 0 }, # optional
constraints: { content: [], probes: 0 }, # optional
preferences: { content: [], probes: 0 }, # optional
nfrs: { content: [], probes: 0 }, # optional
prior_attempts: { content: "", probes: 0 }, # optional
question_history: [] # list of questions asked
}
`content` is the raw user answers merged; `probes` is how many times this section has been asked; `question_history` prevents re-asking the same variant twice.
Required sections (initial-signal gate)
Four sections MUST have substantive content before exiting Phase 3:
- Intent — full sentence or paragraph (not a single word or phrase)
- Goal — full sentence or paragraph
- Success Criteria — at least one concrete, testable item
- Research Plan — either ≥ 1 topic probed, OR the user has explicitly confirmed "no external research needed"
"Substantive" means: non-empty, not a trivial one-word reply, not "I don't know" without a recorded assumption. The strict falsifiability check happens in Phase 4 (brief-review gate); Phase 3 is just the initial-signal bar.
Optional sections (Non-Goals, Constraints, Preferences, NFRs, Prior Attempts) do not gate exit. If they remain empty after the required sections pass, they will be recorded as "Not discussed — no constraints assumed" in Phase 4's draft.
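A minimal sketch of the initial-signal gate. The "substantive" heuristic and the `research_plan.confirmed_none` field (recording the explicit "no external research needed" confirmation) are assumptions:

```javascript
// Sketch of the Phase 3 initial-signal gate. The substantive() heuristic
// and the confirmed_none field name are assumptions.
function substantive(text) {
  if (!text) return false;
  const t = text.trim();
  if (t.split(/\s+/).length < 2) return false; // trivial one-word reply
  return !/^i don'?t know\.?$/i.test(t);       // bare "I don't know"
}

function initialSignalGatePasses(state) {
  return (
    substantive(state.intent.content) &&
    substantive(state.goal.content) &&
    state.success_criteria.content.length >= 1 &&
    (state.research_plan.topics.length >= 1 ||
      state.research_plan.confirmed_none === true)
  );
}
```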
Question bank (per section)
Pick the next question from the section's bank based on content and
probes. Wording must stay conversational — only the selection is
section-driven, not the tone.
Intent (required):
- Anchor (probes=0, content empty): "Why are we doing this? What is the motivation, the user need, or the strategic context behind the task?"
- Follow-up (probes≥1, content present but shallow): "What happens if we do nothing? Who is affected?"
- Sharpen (user mentioned a symptom): "You mentioned {X}. Is {X} the symptom or the underlying cause?"
Goal (required):
- Anchor: "Describe the end state in 1–3 sentences — specific enough to disagree with."
- Follow-up: "How would you recognize this is done when looking at the UI / API / codebase?"
Success Criteria (required):
- Anchor: "How do we verify it is actually done? List 2–4 concrete, testable conditions — commands to run, observations, or metrics."
- Sharpen (criterion is vague): "'{quoted criterion}' is subjective. Which command, observation, or metric would prove this is met?"
- Quantify (performance/quality claim): "You mentioned it should be {fast/reliable/secure}. What number or threshold counts as success?"
Research Plan (required, strictest):
- Anchor (no topics yet): "Are there technologies, libraries, or decisions in this task you do not have solid current knowledge of? Examples might be library choice, a protocol, or a security pattern."
- Per-topic sharpen (topic exists but incomplete): "For topic '{title}': which parts of the plan depend on the answer? What confidence level do you need — high, medium, or low?"
- Scope question: "Is '{topic}' answerable from the existing codebase, from external docs, or both?"
- Confirm none (user refuses all topics): "Confirming: no external research needed — you already know everything the plan will depend on?"
Non-Goals (optional):
- Anchor: "What is explicitly NOT in scope? This prevents scope-guardian from flagging gaps for things we deliberately don't do."
Constraints (optional):
- Anchor: "Technical, time, or resource constraints the plan must respect? Dependencies, compatibility, deadlines, or budget."
- Sharpen: "You mentioned {deadline / budget / compatibility}. Is it hard or guidance?"
Preferences (optional):
- Anchor: "Preferences for libraries, patterns, or architectural style?"
NFRs (optional):
- Anchor: "Performance, security, accessibility, or scalability targets? Quantified wherever possible."
Prior Attempts (optional):
- Anchor: "Has this been attempted before? What worked or failed?"
Selection rule
On each loop iteration:
- Compute the next section to probe:
- If any required section is below the initial-signal gate → pick the weakest required section in this priority order: Intent → Goal → Success Criteria → Research Plan.
- Else if an optional section is clearly missing and likely material to scope (heuristic: the task description hints at constraints or NFRs) → probe it at most once.
- Else: exit Phase 3.
- Within the chosen section, pick the question variant:
  - If `probes == 0` and content is empty → Anchor.
  - If content exists but is shallow → Follow-up or Sharpen.
  - If the section is Research Plan and topics exist → iterate per-topic sharpen across incomplete topics.
- Ensure the exact question is NOT already in `question_history`. If it is, pick the next variant or skip to the next weakest section.
- Ask via `AskUserQuestion`. Append the question to history. Increment `probes`.
- Record the answer into `content`. Never overwrite — merge.
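The ask-and-record bookkeeping can be sketched as follows. The field handling is an assumption — string sections merge with newlines, list sections append:

```javascript
// Sketch of the ask → record bookkeeping: dedupe against history,
// bump probes, and merge (never overwrite) the answer.
function askOnce(state, section, question, askFn) {
  if (state.question_history.includes(question)) return null; // never re-ask
  state.question_history.push(question);
  state[section].probes += 1;
  const answer = askFn(question); // stands in for AskUserQuestion
  const cur = state[section].content;
  if (Array.isArray(cur)) cur.push(answer); // list sections append
  else state[section].content = [cur, answer].filter(Boolean).join("\n"); // merge
  return answer;
}
```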
Research topic identification
As the user answers Intent, Goal, or Success Criteria, listen for:
- Unfamiliar technologies — libraries, frameworks, protocols not clearly present in the codebase
- Version upgrades — migrating to a new major version
- Security-sensitive decisions — auth, crypto, data handling
- Architectural choices — pattern X vs Y, library A vs B
- Unknown integrations — third-party APIs, external services
- Compliance / legal — GDPR, accessibility, industry regulations
When you hear one, add a candidate topic to research_plan.topics with
only a title and why-it-matters. Probe it on the next Research Plan
iteration using the per-topic sharpen question to fill in:
- Research question (must end in `?`)
- Required for plan steps
- Scope (local / external / both)
- Confidence needed (high / medium / low)
- Estimated cost (quick / standard / deep)
If the user says "I know this" to a candidate topic, remove it from the
list. Trust the user. If no topics emerge after probing, the user confirms
"no external research needed" → research_plan gate passes with 0 topics.
Quick mode adjustments
If mode = quick:
- For optional sections, cap probes at 1 each. Do not revisit optional sections during Phase 3.
- Required sections still have no probe cap — quality gate still applies.
- Prefer Anchor variants over Sharpen on the first pass.
Force-stop path
If the user says "skip", "stop asking", "just proceed", "enough", or similar, break the loop immediately:
- Mark any required sections still below the initial-signal gate as `{ incomplete_forced_stop: true }` in state.
- Proceed to Phase 4 with a note that the brief will carry a reduced-confidence flag.
Exit condition
Exit Phase 3 when:
- All four required sections meet the initial-signal gate, OR
- The user has force-stopped.
Report:
Phase 3 complete: {N} questions asked across {M} sections.
Proceeding to draft and review.
Phase 3.5 — Per-phase effort dialog
Phase 3.5 is the v5.1 entry-point for adaptive-depth execution. After
Phase 3 has gathered intent / goal / success criteria / research plan, the
operator commits an effort level and (optional) model per downstream phase
(research, plan, execute, review). The committed signals are written
to brief frontmatter as a phase_signals: list that the four downstream
commands read via their ## Composition rule (v5.1) section.
State requirements
Before entering Phase 3.5 the following must be populated:
- `state.intent` — the Phase 3 Intent answer (1+ paragraph)
- `state.goal` — the Phase 3 Goal answer
- `state.success_criteria` — at least one falsifiable SC
- `state.research_plan.topics` — list (may be empty)
If any are absent: skip Phase 3.5 entirely and write phase_signals_partial: true to the draft frontmatter. Do not block.
--quick mode
If the operator launched with --quick: skip Phase 3.5 entirely and
auto-write phase_signals_partial: true to draft frontmatter. The brief
will satisfy the v5.1 sequencing gate without going through the dialog.
Default-derivation heuristic (LLM judgment, not algorithmic)
Before each phase question, propose a default tier marked (default). Use
these signals — they are weak heuristics, not rules:
- `research_topics_count` → high count ⇒ `high`, low count ⇒ `low`, absent ⇒ `low`
- `sc_count` (count of falsifiable SCs) → ≥ 6 ⇒ `high`, ≤ 2 ⇒ `low`
- Goal complexity: keywords like "rewrite", "migration", "refactor across", "new platform" ⇒ `high`; "typo", "small bugfix", "docs touch-up" ⇒ `low`
- Otherwise: `standard`
Mix these into one proposed default per phase. Document the proposed tier in the question body so the operator sees why it was picked.
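One way to mix the signals into a single proposed tier. The concrete thresholds (e.g. three or more research topics ⇒ high) are assumptions — the prose explicitly calls these weak heuristics to be blended by judgment, not rules:

```javascript
// Sketch of the default-derivation heuristic. Thresholds are assumptions;
// the actual proposal is LLM judgment, not an algorithm.
function proposeTier({ researchTopics, scCount, goalText }) {
  const heavy = /rewrite|migration|refactor across|new platform/i.test(goalText);
  const light = /typo|small bugfix|docs touch-up/i.test(goalText);
  if (heavy || scCount >= 6 || researchTopics >= 3) return "high";
  if (light || (researchTopics === 0 && scCount <= 2)) return "low";
  return "standard";
}
```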
The loop — 4 tier-coupled AskUserQuestion calls
Loop over [research, plan, execute, review] in order. For each phase,
issue one AskUserQuestion with 3 options:
| Option | Maps to phase_signals entry |
|---|---|
| Low effort | {phase: <name>, effort: low, model: sonnet} |
| Standard (default) | {phase: <name>, effort: standard} (model omitted — composition falls through to profile) |
| High effort | {phase: <name>, effort: high, model: opus} |
The proposed tier per phase (from the default-derivation heuristic) MUST be
labelled (default) in the option list so the operator can one-click
accept. Commit the chosen tier immediately to an in-memory effort_state
dict — no bulk summary-before-commit. The loop is interruptible.
The mapping table is canonical:
- low → `{effort: low, model: sonnet}` (force sonnet for the low-cost path)
- standard → `{effort: standard}` (model omitted; composition rule resolves via profile)
- high → `{effort: high, model: opus}` (force opus for the high-confidence path)
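As a direct transcription of the canonical mapping table:

```javascript
// Direct transcription of the canonical tier → phase_signals mapping.
function signalEntry(phase, tier) {
  if (tier === "low") return { phase, effort: "low", model: "sonnet" };
  if (tier === "high") return { phase, effort: "high", model: "opus" };
  return { phase, effort: "standard" }; // model omitted → profile resolver
}
```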
Force-stop handling
If during any of the four AskUserQuestion calls the operator says "stop",
"skip", "enough", "just write it", or similar, do NOT exit silently — apply
the Phase 4f force-stop pattern verbatim:
You stopped before committing per-phase signals. Remaining phases:
- {list of phases not yet answered}
The brief will still be valid (v5.1 supports `phase_signals_partial: true`
as a force-stop record). Downstream commands will fall back to the profile
resolver for the un-committed phases.
Continue anyway?
Then AskUserQuestion:
| Option | Action |
|---|---|
| Answer one more phase | Return to the next un-answered phase question. |
| Stop now (record partial) | Drop any in-progress effort_state and set phase_signals_partial: true in draft frontmatter. Mutually exclusive with phase_signals. Break Phase 3.5. |
This pattern matches Step 4f (line 436-458) so the force-stop UX is identical across both surfaces.
Hand-off to Phase 4a
If effort_state is fully populated (4 commits, no force-stop): write a
phase_signals: block to draft frontmatter — one entry per phase,
preserving the canonical-mapping form above. Omit model: for standard
tier (composition falls through to profile).
If phase_signals_partial: true was set: write that single line to draft
frontmatter and skip the phase_signals: block (mutually exclusive per
validator).
Phase 4a (Step 4a — Draft in memory) reads from effort_state /
phase_signals_partial and incorporates the appropriate frontmatter block
into the draft brief.
Sequencing gate (downstream)
brief_version: 2.1 activates the validator's sequencing gate. If the
final brief reaches /trekplan, /trekresearch, /trekexecute, or
/trekreview WITHOUT phase_signals and WITHOUT phase_signals_partial: true, the validator emits BRIEF_V51_MISSING_SIGNALS and the command
halts with a friendly hint pointing back to /trekbrief.
Phase 4 — Draft, review, and revise
Phase 4 runs a draft → brief-reviewer → revise loop. The draft is
not written to disk until the brief-review quality gate passes (or the
iteration cap is hit). This ensures the brief that reaches /trekplan
has already survived a critical review.
Read the brief template first:
@${CLAUDE_PLUGIN_ROOT}/templates/trekbrief-template.md
Loop bound
Maximum 3 review iterations. This bounds cost in the worst case while leaving room for two rounds of targeted follow-ups.
Iteration step-by-step
Step 4a — Draft in memory
Build the brief text from Phase 3 state by filling the template:
- Frontmatter: populate `task`, `slug`, `project_dir`, `research_topics` (count of topics), `research_status: pending`, `auto_research: false` (will update in Phase 5 if the user opts in), `interview_turns` (total questions asked across Phase 3 + Phase 4), `source: interview`.
- Intent: expand the user's motivation into 3–5 sentences. Load-bearing.
- Goal: concrete end state.
- Non-Goals: from state, or a "- None explicitly stated" bullet if empty.
- Constraints / Preferences / NFRs: from state, or a "Not discussed — no constraints assumed" note if empty.
- Success Criteria: falsifiable commands/observations from state.
- Research Plan: one `### Topic N: {title}` section per topic with the full structure from the template. If 0 topics: write the "No external research needed — user confirmed solid knowledge of all plan dependencies" note.
- Open Questions / Assumptions: from any "I don't know" answers recorded during Phase 3, plus implicit gaps.
- Prior Attempts: from state, or "None — fresh task."
Step 4b — Write draft to disk
Write the draft to {PROJECT_DIR}/brief.md.draft (not brief.md — the
final file is only written after the gate passes).
Step 4c — Launch brief-reviewer
Launch the brief-reviewer agent (foreground, blocking) with the prompt:
"Review this task brief for quality:
{PROJECT_DIR}/brief.md.draft. Check completeness, consistency, testability, scope clarity, and research-plan validity. Report findings, verdict, and the required machine-readable JSON block."
Step 4d — Parse JSON scores
Parse the agent's output. Locate the last fenced json block.
Extract per-dimension scores:
review = {
completeness: { score, gaps },
consistency: { score, issues },
testability: { score, weak_criteria },
scope_clarity: { score, unclear_sections },
research_plan: { score, invalid_topics },
verdict: "PROCEED | PROCEED_WITH_RISKS | REVISE"
}
JSON fallback: if the JSON block is missing, invalid, or a dimension
is missing, treat all dimensions as score: 3 and set the verdict from
the prose verdict if present, otherwise PROCEED_WITH_RISKS. Emit an
internal note that the reviewer output was degraded. This ensures the
loop never deadlocks on a parser error.
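The last-fenced-block extraction with the degraded fallback can be sketched as:

```javascript
// Sketch of Step 4d parsing: take the LAST fenced json block; on any
// failure return null so the caller applies the score-3 fallback.
const FENCE = "`".repeat(3); // triple backtick, built to avoid fence nesting
const JSON_BLOCK = new RegExp(FENCE + "json\\s*([\\s\\S]*?)" + FENCE, "g");

function parseReviewJson(agentOutput) {
  const blocks = [...agentOutput.matchAll(JSON_BLOCK)];
  const last = blocks[blocks.length - 1];
  if (!last) return null;
  try {
    return JSON.parse(last[1]);
  } catch {
    return null; // degraded reviewer output — never deadlock the loop
  }
}
```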
Step 4e — Gate evaluation
The gate passes when all of the following are true:
- `completeness.score ≥ 4`
- `consistency.score ≥ 4`
- `testability.score ≥ 4`
- `scope_clarity.score ≥ 4`
- `research_plan.score == 5`
(Research Plan requires a perfect score because its format is checked
mechanically: ends in ?, Required for plan steps filled, scope is
one of local | external | both, confidence is high | medium | low.
Anything less means at least one topic is malformed and planning will
stumble.)
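The gate, plus the mechanical topic check it relies on, transcribe directly. The field names on the topic object mirror the template and are assumptions:

```javascript
// Transcription of the Step 4e gate and the mechanical topic-format check
// behind the research_plan == 5 requirement. Topic field names assumed.
function gatePasses(review) {
  return (
    review.completeness.score >= 4 &&
    review.consistency.score >= 4 &&
    review.testability.score >= 4 &&
    review.scope_clarity.score >= 4 &&
    review.research_plan.score === 5
  );
}

function topicWellFormed(topic) {
  return (
    typeof topic.question === "string" &&
    topic.question.trim().endsWith("?") &&
    Boolean(topic.required_for_plan_steps) &&
    ["local", "external", "both"].includes(topic.scope) &&
    ["high", "medium", "low"].includes(topic.confidence)
  );
}
```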
If gate passes:
- Move `brief.md.draft` → `brief.md` (atomic rename). If rename is not possible on the OS, delete the draft file and write `brief.md` fresh.
- Break the loop and proceed to Step 4g.
If gate fails AND iteration count < 3:
- Identify the weakest dimension (lowest score; tie broken by priority: research_plan > testability > completeness > consistency > scope_clarity).
- Generate a targeted follow-up question from the dimension's detail
field (gaps / issues / weak_criteria / unclear_sections / invalid_topics).
Example generators:
- `completeness.gaps: ["Non-Goals empty, unclear if deliberate"]` → "You did not specify anything out-of-scope. Is that deliberate, or are there things we should explicitly exclude?"
- `testability.weak_criteria: ["'system should be fast'"]` → "'System should be fast' is not falsifiable. Which metric or threshold proves this is met — e.g., p95 < 200ms, or throughput ≥ X requests/sec?"
- `research_plan.invalid_topics: [{"topic":"JWT","issue":"Required for plan steps empty"}]` → "For research topic 'JWT': which plan steps depend on the answer? Give one or two concrete kinds of step (e.g., 'library selection', 'threat model', 'migration strategy')."
- Ask via `AskUserQuestion`. Record the answer into Phase 3 state.
- Return to Step 4a with an incremented iteration count. The reviewer sees an updated draft, so you MUST re-read the brief and regenerate the review each iteration — do not reuse stale scores.
- When launching the reviewer on iteration 2 or 3, include prior questions in the prompt so it does not produce circular follow-ups: "Questions already asked during this interview: {list from question_history}. Focus on issues that remain after those answers — do not re-raise gaps that have already been addressed."
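The weakest-dimension pick with its tie-break can be sketched as:

```javascript
// Sketch of the weakest-dimension pick: lowest score, ties broken by
// research_plan > testability > completeness > consistency > scope_clarity.
const TIE_ORDER = ["research_plan", "testability", "completeness", "consistency", "scope_clarity"];

function weakestDimension(review) {
  let weakest = TIE_ORDER[0];
  for (const dim of TIE_ORDER) {
    if (review[dim].score < review[weakest].score) weakest = dim; // strict < keeps tie order
  }
  return weakest;
}
```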
If gate fails AND iteration count == 3 (loop exhausted):
- Move `brief.md.draft` → `brief.md`.
- Add `brief_quality: partial` to the frontmatter (edit the file post-rename — insert the key above the closing `---`).
- Add a `## Brief Quality` section near the end with the failing dimensions and their `detail` arrays from the final review, formatted:

  ## Brief Quality

  Review loop exhausted after 3 iterations. The following dimensions did not reach the pass threshold:

  - **Research Plan (score 2/5):** Topic 'JWT library' missing Required-for-plan-steps field.
  - **Testability (score 3/5):** Success criterion "works correctly" is not falsifiable.

  Downstream planning will treat these as reduced-confidence areas.

- Break the loop and proceed to Step 4g.
Step 4f — Force-stop handling
If during any AskUserQuestion in Step 4e the user says "stop", "skip",
"enough", "just write it", or similar, do NOT exit the loop immediately.
Instead, surface the current review findings in plain text:
Brief-reviewer would flag these issues:
- Research Plan (score 2/5): Topic 'JWT library choice' missing Required-for-plan-steps field.
- Testability (score 3/5): Success criterion "works correctly" is not falsifiable.
Continue anyway? The plan will have lower confidence in these areas.
Then ask via AskUserQuestion:
| Option | Action |
|---|---|
| Answer one more follow-up | Return to Step 4e with the current weakest-dimension question. |
| Stop now (accept partial brief) | Finalize brief with brief_quality: partial and the ## Brief Quality section (same path as iteration-cap exhaustion). Break loop. |
The force-stop path is distinct from a silent iteration cap: the user sees exactly which dimensions are weak and chooses informed.
Step 4g — Finalize
After the loop exits (pass, cap, or force-stop), ensure:
- `brief.md` exists at `{PROJECT_DIR}/brief.md`.
- `brief.md.draft` no longer exists.
- If the loop ended without a clean pass, frontmatter contains `brief_quality: partial` and a `## Brief Quality` section exists.
- If the loop ended with a clean pass, `brief_quality` is either absent or set to `complete`.
Populate the "How to continue" footer with the actual project path and topic questions.
Schema sanity check (since v3.1.0): before reporting, run the brief
validator. This catches frontmatter typos and state-machine inconsistencies
the brief-reviewer rubric does not check (e.g. research_status: skipped
with research_topics: 3 and no brief_quality: partial).
node ${CLAUDE_PLUGIN_ROOT}/lib/validators/brief-validator.mjs --json "{PROJECT_DIR}/brief.md"
If the validator returns errors, report them to the user and offer to re-enter Phase 4 with the validator's hints in scope. If only warnings, note them in the final report.
Build the operator-annotation HTML, then print the report. After the
brief is validated, run scripts/annotate.mjs to produce a self-contained
HTML file the operator opens in their browser. The HTML renders the brief
with line numbers, lets the operator click any line to attach their own
note (not Claude-generated suggestions — the operator drives every
annotation), keeps a sidebar of all notes, persists state in localStorage,
and exposes a "Copy Prompt" button that generates a single structured
prompt with every note. The operator copies that prompt and pastes it
back into Claude; Claude revises brief.md freehand from the notes.
ANNOT_HTML=$(node ${CLAUDE_PLUGIN_ROOT}/scripts/annotate.mjs "{PROJECT_DIR}/brief.md" 2>&1)
# stdout is the absolute path to the .html on success.
If annotate.mjs exits non-zero, surface a one-line warning and continue
— the annotation HTML is a convenience, not a gate. The report below
still mentions the (failed) path so the operator can debug.
Then print this block verbatim (substitute {PROJECT_DIR} and
$ANNOT_HTML):
Brief written: {PROJECT_DIR}/brief.md
Annotation HTML: file://{$ANNOT_HTML}
Review iterations: {1..3}
Final quality: {complete | partial}
Validator: {PASS | warnings(N)}
Research topics identified: {N}
────────────────────────────────────────────────────────────────────
To review and annotate this brief, open the HTML above in a browser:
open file://{$ANNOT_HTML}
Click any line to add YOUR OWN note. The sidebar collects every note,
the "Copy Prompt" button gathers them into one structured prompt.
Paste that prompt back into this chat and Claude revises brief.md
from your notes. Annotations persist in your browser if you close
the tab and reopen the same file.
────────────────────────────────────────────────────────────────────
Phase 5 — Auto-orchestration opt-in (if research_topics > 0)
Skip this phase if research_topics = 0. Proceed directly to Phase 6.
Ask the user via AskUserQuestion:
Question: "You have {N} research topic(s). How do you want to proceed?"
| Option | Description |
|---|---|
| Manual (default) | Print the commands. You run /trekresearch and /trekplan yourself, choosing depth per topic. |
| Auto (managed by Claude Code) | I run all {N} research topics sequentially in foreground, then automatically trigger /trekplan when research completes. This session blocks until the plan is ready. |
Manual path (default)
Output:
## Brief complete
Project: {PROJECT_DIR}/
Brief: {PROJECT_DIR}/brief.md
Research topics: {N}
Next steps (run in order or parallel):
{For each topic:}
/trekresearch --project {PROJECT_DIR} --external "{topic question}"
Then:
/trekplan --project {PROJECT_DIR}
Then:
/trekexecute --project {PROJECT_DIR}
Stop. Do not continue to Phase 6.
Auto path
Set auto_research: true in the brief's frontmatter (edit the file).
Emit the brief-approved lifecycle event so downstream observability sees
the pipeline kick off (consumed by lib/stats/event-emit.mjs):
node ${CLAUDE_PLUGIN_ROOT}/lib/stats/event-emit.mjs \
--event brief-approved \
--payload "{\"project\":\"${PROJECT_DIR}\"}"
If gates_mode == true: pause here via AskUserQuestion —
"Auto-mode confirmed. Proceed to research now? (yes/no)". If the user
answers no, fall back to the manual path output and stop. Otherwise
proceed to Phase 6.
If gates_mode == false (default in auto): proceed directly to Phase 6.
The chain stops only at the main-merge gate (see commands/trekexecute.md
Phase 8).
Proceed to Phase 6.
Phase 6 — Auto research dispatch (auto path only)
Runs only when user opted into auto mode.
Step 6a — Confirm proceed
Tell the user auto mode will run in foreground and block the session, then
confirm via AskUserQuestion:
Question: "Auto mode runs {N} research topic(s) sequentially and then the plan — all in foreground. This session blocks until the plan is ready. Continue?"
| Option | Action |
|---|---|
| Continue — auto | Proceed. |
| Cancel — do manual | Revert to manual path (print commands, stop). |
If cancelled → fall back to manual path output and stop.
Step 6b — Run research topics sequentially (inline)
Set research_status: in_progress in the brief's frontmatter.
For each research topic (index i = 1 .. N), invoke /trekresearch
inline in this main-context session:
/trekresearch --project {PROJECT_DIR} {--external | --local | (none)} "{topic i question}"
Pass the scope flag that matches the topic's scope hint. Wait for each
invocation to finish writing the research brief at
{PROJECT_DIR}/research/{NN}-{topic-slug}.md before moving to the next
topic.
Why sequential inline instead of parallel background? Background orchestrator-agents cannot spawn the research swarm — the Claude Code harness does not expose the Agent tool to sub-agents, so a background run silently degrades to single-context reasoning without WebSearch / Tavily / WebFetch / Gemini (see v2.4.0 release notes). Running each research pass inline in main context keeps the swarm intact. For true parallel execution, use `claude -p` invocations in separate terminal windows.
Step 6c — Verify all briefs landed
After the last topic completes, verify each research brief file exists:
ls -1 {PROJECT_DIR}/research/*.md | wc -l
Expected count: N. If any are missing, report and ask the user how to proceed (retry, skip missing topic, cancel).
Update brief frontmatter: research_status: complete.
Step 6d — Auto-trigger planning (inline foreground)
Invoke the planning command inline in this session:
/trekplan --project {PROJECT_DIR}
The planning pipeline runs all phases (exploration, synthesis, review) in
main context. Wait for the plan to be written to {PROJECT_DIR}/plan.md
before continuing.
Step 6e — Report completion
When the planning-orchestrator finishes, present:
## Ultrabrief + Ultraresearch + Voyage Complete (auto mode)
**Project:** {PROJECT_DIR}/
**Brief:** {PROJECT_DIR}/brief.md
**Research briefs:** {N} in {PROJECT_DIR}/research/
**Plan:** {PROJECT_DIR}/plan.md
### Pipeline summary
| Step | Status |
|------|--------|
| Brief | Complete ({interview_turns} interview turns) |
| Research | Complete ({N} topics, sequential foreground) |
| Plan | Complete ({steps} steps, critic: {verdict}) |
Next:
/trekexecute --project {PROJECT_DIR}
Or:
/trekexecute --dry-run --project {PROJECT_DIR} # preview
/trekexecute --validate --project {PROJECT_DIR} # schema check
Phase 7 — Stats tracking
Append one record to ${CLAUDE_PLUGIN_DATA}/trekbrief-stats.jsonl:
{
"ts": "{ISO-8601}",
"task": "{task description (first 100 chars)}",
"slug": "{slug}",
"mode": "{default | quick}",
"interview_turns": {N},
"review_iterations": {1..3},
"brief_quality": "{complete | partial}",
"research_topics": {N},
"auto_research": {true | false},
"auto_result": "{completed | cancelled | failed | manual}",
"project_dir": "{path}"
}
If ${CLAUDE_PLUGIN_DATA} is not set or not writable, skip silently.
Never let stats failures block the workflow.
Profile (v4.1)
Accepts --profile <name> where <name> is one of economy, balanced,
premium, or a custom profile under voyage-profiles/. Default: premium.
Resolution order (per lib/profiles/resolver.mjs):
1. `--profile` flag (source: `flag`)
2. `VOYAGE_PROFILE` environment variable (source: `env`)
3. `premium` default (source: `default`)
Examples:
/trekbrief --profile economy
VOYAGE_PROFILE=balanced /trekbrief
Stats records emit profile, phase_models, and profile_source so operators
can audit which profile drove which session.
Hard rules
- Interactive only. This command requires user input. There is no `--fg` or background mode — the interview cannot run headless.
- Brief is the contract. Every section must have substantive content or an explicit "Not discussed" note. No empty sections.
- Intent is load-bearing. Do not accept a one-line intent. Expand with the user until motivation is clear — the plan and every review agent will trace decisions back to this.
- Research topics must be answerable. Each topic's research question must be phrased so `/trekresearch` can answer it. If a topic is too vague, split or reformulate before writing.
- Never invent research topics the user did not agree to. Topics come from the interview. If the user says "I know this", respect it.
- Project dir is the single source of truth. Every artifact (brief,
research briefs, plan, progress) lives in one project directory.
Never scatter files across `.claude/research/`, `.claude/plans/`, etc.
- Auto mode blocks foreground. If the user opts into auto, this session waits for research + planning to complete. Document this in the opt-in question.
- Quality gates, not question counts. Phase 3 and Phase 4 are quality-gated loops; do not enforce a hard cap on interview questions. The brief-review gate (Phase 4) caps at 3 review iterations to bound cost, but Phase 3 has no cap — required-section content drives exit.
- Never write `brief.md` while the review gate is still pending. The draft lives in `brief.md.draft` until the loop terminates. A caller that sees `brief.md` must be able to trust that Phase 4 finished.
- Privacy: never log prompt text, secrets, or credentials.