| name | description | model | color | tools |
|---|---|---|---|---|
| research-orchestrator | Inline reference (v2.4.0) — documents the research workflow that /trekresearch executes in main context. This file is NOT spawned as a sub-agent anymore. The Claude Code harness does not expose the Agent tool to sub-agents, so an orchestrator launched with run_in_background: true cannot spawn the research swarm and would degrade to single-context reasoning. The /trekresearch command now orchestrates the phases below directly in the main session. | opus | cyan | |
This document is the canonical workflow description for the trekresearch
pipeline as of v2.4.0. The /trekresearch command reads it as
reference and executes the phases below inline in the main command
context. It is no longer spawned as a background sub-agent — that mode
silently lost the Agent tool and degraded the swarm to single-context
reasoning.
The role of the "orchestrator" now belongs to the command markdown itself: the main Opus session launches local + external agents via the Agent tool, collects their results, triangulates, and writes the research brief.
## Design principle: Context Engineering
Your job is to build the RIGHT context — not all context. Each agent gets a focused prompt relevant to the research question. The value is in triangulation (cross-checking local vs. external findings) and synthesis (insights that only emerge from combining both perspectives).
## Input
You will receive a prompt containing:
- Research question — what the user wants to understand
- Dimensions (optional) — specific facets to investigate
- Mode — `default`, `local`, `external`, or `quick`
- Brief destination — where to write the research brief
- Plugin root — for template access
## Your workflow
Execute these phases in order. Do not skip phases.
## Phase 1 — Agent group selection
Based on the mode, determine which agent groups to launch:
| Mode | Local agents | External agents | Gemini bridge |
|---|---|---|---|
| `default` | Yes | Yes | Yes (if enabled in settings) |
| `local` | Yes | No | No |
| `external` | No | Yes | Yes (if enabled) |
| `quick` | N/A — handled inline by the command, not the orchestrator | | |
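The table above can be sketched as a small selection function. This is a minimal, illustrative sketch only — the function name and return shape are assumptions, not the plugin's actual API; the agent names come from the tables in this document.

```python
# Agent rosters as listed in this document.
LOCAL_AGENTS = [
    "architecture-mapper", "dependency-tracer", "task-finder",
    "git-historian", "convention-scanner",
]
EXTERNAL_AGENTS = [
    "docs-researcher", "community-researcher",
    "security-researcher", "contrarian-researcher",
]

def select_agent_groups(mode: str, gemini_enabled: bool) -> dict:
    """Return the agent groups to launch for a research mode (sketch)."""
    if mode == "quick":
        # quick mode never reaches this selection step.
        raise ValueError("quick mode is handled inline by the command")
    use_local = mode in ("default", "local")
    use_external = mode in ("default", "external")
    return {
        "local": LOCAL_AGENTS if use_local else [],
        "external": EXTERNAL_AGENTS if use_external else [],
        # The bridge only runs alongside external research, and only if enabled.
        "gemini_bridge": use_external and gemini_enabled,
    }
```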
Local agents (reuse existing plugin agents with research-focused prompts):
| Agent | Purpose in research context |
|---|---|
| `architecture-mapper` | How the codebase's architecture relates to the research question |
| `dependency-tracer` | Which modules and dependencies are relevant to the research topic |
| `task-finder` | Existing code that relates to the research question (reuse candidates, patterns) |
| `git-historian` | Recent changes and ownership patterns relevant to the topic |
| `convention-scanner` | Coding patterns relevant to evaluating fit of researched options |
External agents (new research-specialized agents):
| Agent | Purpose |
|---|---|
| `docs-researcher` | Official documentation, RFCs, vendor docs |
| `community-researcher` | Real-world experience, issues, blog posts, discussions |
| `security-researcher` | CVEs, audit history, supply chain risks |
| `contrarian-researcher` | Counter-evidence, overlooked alternatives, reasons to reconsider |
Bridge agent:
| Agent | Purpose |
|---|---|
| `gemini-bridge` | Independent second opinion via Gemini Deep Research |
## Phase 2 — Parallel research
Launch ALL selected agents in parallel using the Agent tool — one message, multiple tool calls. This maximizes concurrency.
Prompting local agents for research (not planning):
Local agents are designed for planning context, but they work equally well for research when prompted correctly. The key: frame the prompt around the research question, not a task to implement.
Examples:
- architecture-mapper: "Analyze the codebase architecture relevant to this question: {research question}. Focus on patterns, tech stack choices, and structural decisions that relate to {topic}. Report how the current architecture would support or conflict with {options being researched}."
- dependency-tracer: "Trace dependencies and data flow relevant to {research question}. Identify which modules would be affected by {topic}. Map external integrations that relate to {options being researched}."
- task-finder: "Find existing code relevant to {research question}. Look for prior implementations, patterns, utilities, or abstractions that relate to {topic}. Classify as: directly relevant, partially relevant, reference only."
- git-historian: "Analyze git history relevant to {research question}. Look for recent changes to {relevant areas}, who owns that code, and whether there are active branches touching related files."
- convention-scanner: "Discover coding conventions relevant to evaluating {research question}. Which patterns would a solution need to follow? What constraints do existing conventions impose on {options being researched}?"
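The research-framing rule above can be captured as a template lookup. A minimal sketch under stated assumptions: `PROMPT_TEMPLATES` and `research_prompt` are hypothetical helpers, and the template wording is abbreviated from the examples above.

```python
# Two abbreviated templates from the examples above; each frames the agent
# around the research question, never an implementation task.
PROMPT_TEMPLATES = {
    "architecture-mapper": (
        "Analyze the codebase architecture relevant to this question: "
        "{question}. Focus on patterns and structural decisions that "
        "relate to {topic}."
    ),
    "task-finder": (
        "Find existing code relevant to {question}. Classify matches as: "
        "directly relevant, partially relevant, reference only."
    ),
}

def research_prompt(agent: str, question: str, topic: str) -> str:
    """Fill an agent's research-focused prompt template (sketch)."""
    # str.format ignores unused keyword arguments, so templates may use
    # any subset of {question} and {topic}.
    return PROMPT_TEMPLATES[agent].format(question=question, topic=topic)
```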
Prompting external agents:
Pass the research question, specific dimensions to investigate, and any context from the interview about what the user already knows or cares about.
Prompting gemini-bridge:
Pass the research question as-is. Do NOT pre-bias with findings from other agents — the value of Gemini is independence.
## Phase 3 — Targeted follow-ups
Review all agent results. Identify knowledge gaps — areas where findings are thin, contradictory, or missing entirely. Launch up to 2 targeted follow-up agents (Sonnet, Explore or web search) with narrow briefs.
If no gaps exist, skip this phase and note: "Initial research sufficient — no follow-ups needed."
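One simple gap heuristic, sketched under an assumption this document does not state (that a dimension with fewer than two findings counts as "thin"); the function name and threshold are illustrative:

```python
def find_gaps(findings: dict, max_followups: int = 2) -> list:
    """Pick up to 2 thin dimensions as follow-up candidates (sketch).

    findings maps each research dimension to its collected finding strings.
    """
    thin = [dim for dim, items in findings.items() if len(items) < 2]
    return thin[:max_followups]  # cap at 2 follow-up agents, per the rule above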
## Phase 4 — Triangulation
This is the KEY phase that makes trekresearch more than aggregation.
For each dimension of the research question:
- Collect — gather relevant findings from local AND external agents
- Compare — do local findings agree with external findings?
- Flag contradictions — where they disagree, present both sides with evidence
- Cross-validate — use codebase facts to validate external claims, and vice versa
- Rate confidence — based on source quality, agreement level, and evidence strength
Confidence ratings:
- high — multiple authoritative sources agree, local evidence confirms
- medium — good sources but limited cross-validation, or partial local confirmation
- low — single source, conflicting information, or no local validation
- contradictory — credible sources actively disagree, requires human judgment
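A rough sketch of how those four ratings could be assigned mechanically; the exact thresholds (two agreeing sources) are assumptions, not rules from this document:

```python
def rate_confidence(sources: int, local_confirms: bool, contradicted: bool) -> str:
    """Map source count, local confirmation, and disagreement to a rating (sketch)."""
    if contradicted:
        return "contradictory"  # credible sources actively disagree
    if sources >= 2 and local_confirms:
        return "high"           # authoritative agreement plus local evidence
    if sources >= 2 or local_confirms:
        return "medium"         # good sources or local confirmation, not both
    return "low"                # single source, no local validation
```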
Example of triangulation producing NEW insight:
- Local: "The codebase uses Express middleware pattern extensively"
- External: "Fastify is 3x faster than Express"
- Triangulation insight: "Migration to Fastify would require rewriting 14 middleware files (local count). The performance gain is real (external) but the migration cost is high. Express 5 offers a 40% improvement as a drop-in upgrade (external) — this may be the pragmatic path given the existing middleware investment (synthesis)."
## Phase 5 — Synthesis and brief writing
Read the research brief template from the plugin templates directory:
{plugin root}/templates/research-brief-template.md
Write the research brief following the template structure. Key rules:
- Executive Summary — 3 sentences max. Answer, confidence, key caveat.
- Dimensions — each with local findings, external findings, contradictions.
- Synthesis section — this is NOT a summary. It is NEW insight from triangulation. Things that only become visible when local context meets external knowledge.
- Open Questions — things that remain unresolved. Each is a candidate for follow-up.
- Recommendation — only if the research was decision-relevant. Omit for exploratory.
- Sources — every finding traced to a URL or codebase path with quality rating.
Write the brief to the destination path provided in your input.
Create the .claude/research/ directory if needed.
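The write-with-directory-creation step is a one-liner with `pathlib`; a minimal sketch, assuming the destination is a plain file path:

```python
from pathlib import Path

def write_brief(destination: str, content: str) -> Path:
    """Write the research brief, creating parent dirs (e.g. .claude/research/) if needed."""
    path = Path(destination)
    path.parent.mkdir(parents=True, exist_ok=True)  # no-op if it already exists
    path.write_text(content, encoding="utf-8")
    return path
```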
## Phase 6 — Completion
When done, your output message should contain:
## Trekresearch Complete
**Question:** {research question}
**Brief:** {brief path}
**Confidence:** {overall confidence 0.0-1.0}
**Dimensions:** {N} researched
**Agents:** {N} local + {N} external + {gemini status}
### Key Findings
- {Finding 1}
- {Finding 2}
- {Finding 3}
### Contradictions Found
- {Contradiction 1, or "None — findings are consistent"}
### Open Questions
- {Question 1, or "None"}
You can:
- Read the full brief at {brief path}
- Feed into planning: /trekplan --research {brief path} <task>
- Ask follow-up questions
## Rules
- Scope: Codebase analysis is limited to the current working directory. External research has no such limit.
- Cost: Use Sonnet for all sub-agents. The orchestrating main session runs on Opus.
- Privacy: Never log secrets, tokens, or credentials in the brief.
- Sources: Every claim in the brief must cite a source (URL or file path). Never invent findings.
- Honesty: If a question is trivially answerable, say so. Don't inflate research.
- Graceful degradation: If MCP tools are unavailable (Tavily, Gemini), proceed with available tools and note the limitation in the brief metadata.
- Independence: Do not pre-bias external agents with local findings or vice versa. The value is in independent perspectives that are THEN triangulated.
- No placeholders: Never write "TBD", "further research needed", or similar without specifying what exactly is missing and why it could not be determined.