ktg-plugin-marketplace/plugins/voyage/templates/trekbrief-template.md
Kjell Tore Guttormsen 916d30f63e chore(voyage): release v5.0.0 — remove bespoke playground + /trekrevise + Handover 8; render produced artifacts to HTML + link, annotate via /playground
The v4.2/v4.3 bespoke playground SPA (~388 KB), the /trekrevise command,
Handover 8 (annotation → revision), the supporting lib/ modules
(anchor-parser, annotation-digest, markdown-write, revision-guard), the
Playwright e2e suite, and the @playwright/test / @axe-core/playwright
devDeps are removed. A browser walkthrough found the playground borderline
unusable, and it duplicated the official /playground plugin's
document-critique / diff-review templates.

In their place: scripts/render-artifact.mjs — a small, zero-dependency
renderer that turns a brief/plan/review .md into a self-contained,
design-system-styled, zero-network .html (frontmatter folded into a
<details> block). /trekbrief, /trekplan, and /trekreview call it on their
last step and print the file:// link; to annotate, run /playground
(document-critique) on the .md and paste the generated prompt back.
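The frontmatter-folding step described above can be sketched roughly as follows. This is a hypothetical minimal version for illustration only: the function names, the `<details>` markup shape, and the escaping are assumptions, not the actual API of `scripts/render-artifact.mjs`.

```javascript
// Hypothetical sketch: fold YAML frontmatter into a <details> block,
// in the spirit of what render-artifact.mjs does before rendering the body.
// escapeHtml and foldFrontmatter are illustrative names, not the real module's API.
function escapeHtml(s) {
  return s.replace(/[&<>]/g, (c) => ({ "&": "&amp;", "<": "&lt;", ">": "&gt;" }[c]));
}

function foldFrontmatter(markdown) {
  // Match a leading --- ... --- frontmatter block.
  const m = markdown.match(/^---\n([\s\S]*?)\n---\n?/);
  if (!m) return { details: "", body: markdown };
  const details =
    "<details><summary>Frontmatter</summary><pre>" +
    escapeHtml(m[1]) +
    "</pre></details>";
  // Return the folded frontmatter plus the body with the block stripped.
  return { details, body: markdown.slice(m[0].length) };
}
```

The real renderer additionally converts the markdown body to HTML and inlines the design-system styles so the resulting file needs no network access.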

Resolves the v4.3.1-deferred findings as moot (their target files are
deleted). npm test green: 509 tests, 507 pass, 0 fail, 2 skipped.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-12 14:05:07 +02:00


---
type: trekbrief
brief_version: 2.0
created: YYYY-MM-DD
task: {one-line task description}
slug: {slug}
project_dir: .claude/projects/{YYYY-MM-DD}-{slug}/
research_topics: N
research_status: pending
auto_research: false
interview_turns: N
source: interview | manual
---
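As a rough illustration of how a downstream command could read these flat `key: value` fields, here is a hypothetical sketch. It is not the plugin's actual parser (real YAML handling would use a proper library), and `parseFlatFrontmatter` is an invented name.

```javascript
// Hypothetical sketch: read flat "key: value" frontmatter fields like the
// block above. Not the plugin's actual code; splits on the first colon only,
// so values such as paths and "interview | manual" survive intact.
function parseFlatFrontmatter(yamlText) {
  const fields = {};
  for (const line of yamlText.split("\n")) {
    const i = line.indexOf(":");
    if (i > 0) fields[line.slice(0, i).trim()] = line.slice(i + 1).trim();
  }
  return fields;
}
```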

# Task: {title}

Generated by /trekbrief on {YYYY-MM-DD}. This brief is the contract between requirements and planning. /trekplan reads it to produce the implementation plan. Every decision in the plan must trace back to content in this brief.

## Intent

Why are we doing this? What is the motivation, user need, or strategic context? 3-5 sentences. Load-bearing for the plan — every implementation decision must trace back to this intent.

{Intent paragraph. Answers "why bother?".}

## Goal

What does success look like, concretely? What state will the system be in when this is done? One paragraph, specific enough to disagree with.

{Goal paragraph.}

## Non-Goals

What is explicitly out of scope? Prevents plan-critic and scope-guardian from flagging gaps for things we deliberately do not do.

- {non-goal 1}
- {non-goal 2}

## Constraints

Technical, time, or resource limitations. Hard boundaries the plan must respect.

- {constraint 1}
- {constraint 2}

## Preferences

Preferred patterns, frameworks, libraries, or approaches. Soft constraints (the plan may deviate with justification).

- {preference 1}
- {preference 2}

## Non-Functional Requirements

Performance, security, accessibility, scalability, or other quality attributes. Quantified where possible.

- {NFR 1 — e.g., "p95 response time < 200ms"}
- {NFR 2 — e.g., "Zero new npm dependencies"}

## Success Criteria

Falsifiable, command-checkable conditions that define "done". Each must be verifiable by running a specific command or by observing a specific system behavior.

- {criterion — e.g., "All existing tests pass: `npm test` exits 0"}
- {criterion — e.g., "New endpoint reports healthy: `curl -s localhost:3000/api/health | jq -e '.status == "ok"'` exits 0"}
- {criterion — e.g., "No TypeScript errors: `npx tsc --noEmit` exits 0"}

Do NOT write vague criteria:

- "It should work" (not testable)
- "The feature is implemented" (not falsifiable)
- "Performance is acceptable" (no baseline given)

## Research Plan

Explicit research topics that must be answered before /trekplan can produce a high-confidence plan. Each topic is phrased as a research question ready to feed into /trekresearch. Topics may be empty (N=0) for trivial tasks where the codebase alone is sufficient context.

{If research_topics = 0, write a single line: "No external research needed — the codebase and this brief contain sufficient context for planning."}

### Topic 1: {Short title}

- Why this matters: {How the plan depends on this answer. Which steps or decisions cannot be made confidently without it.}
- Research question: "{Exact question to feed to /trekresearch. One sentence, ends in ?.}"
- Suggested invocation: `/trekresearch --project {project_dir} --external "{question}"`
- Required for plan steps: {which kinds of steps will consume this — e.g., "migration strategy", "library selection", "threat model"}
- Confidence needed: {high | medium | low}
- Estimated cost: {quick — inline research | standard — agent swarm | deep — with contrarian + gemini}
- Scope hint: {local | external | both}

### Topic 2: {Short title}

- Why this matters: ...
- Research question: "..."
- Suggested invocation: `/trekresearch --project {project_dir} ...`
- Required for plan steps: ...
- Confidence needed: ...
- Estimated cost: ...
- Scope hint: ...

## Open Questions / Assumptions

Things still uncertain after the interview. These are carried as [ASSUMPTION] entries into the plan and flagged to the user for review.

- {question or assumption 1}
- {question or assumption 2}

## Prior Attempts

What has been tried before and what happened. Leave blank for fresh tasks. Prior attempts are load-bearing — they prevent the plan from repeating known failures.

{Prior attempts narrative, or "None — fresh task."}

## Metadata

- Created: {YYYY-MM-DD}
- Interview turns: {N}
- Auto-research opted in: {yes | no}
- Source: {trekbrief interview | manual}

## How to continue

Manual (default):

    # Run each research topic (order does not matter):
    /trekresearch --project {project_dir} --external "{Topic 1 question}"
    /trekresearch --project {project_dir} --external "{Topic 2 question}"

    # Then plan:
    /trekplan --project {project_dir}

    # Then execute:
    /trekexecute --project {project_dir}

Auto (opt-in during /trekbrief): research and planning run automatically; only execution is manual.