Examples

Complete, calibrated walk-throughs of the ultraplan-local pipeline for realistic tasks. Each example shows the four artifacts a project directory contains after a full run:

  • brief.md — task brief from /ultrabrief-local
  • research/*.md — research briefs from /ultraresearch-local
  • plan.md — implementation plan from /ultraplan-local
  • progress.json — execution log from /ultraexecute-local

These are hand-calibrated, not LLM-generated. The point is to give anyone forking the plugin a deterministic reference: what the artifacts look like when everything goes right, on a small but real task.
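
A completed example directory looks roughly like this (the contents of research/ vary per example; everything else matches the artifact list above):

01-add-verbose-flag/
  brief.md
  research/          (one or more research briefs, names vary)
  plan.md
  progress.json
  REGENERATED.md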

Running the pipeline yourself

For your own work, point the four commands at a real project directory:

mkdir -p .claude/projects/2026-05-01-my-task
/ultrabrief-local
/ultraresearch-local --project .claude/projects/2026-05-01-my-task
/ultraplan-local --project .claude/projects/2026-05-01-my-task
/ultraexecute-local --project .claude/projects/2026-05-01-my-task

The artifacts in each example mirror that flow.

Examples

01-add-verbose-flag

Task: add a --verbose flag to a small CLI parser. Touches one parser file and six command handlers; adds two tests.

Why this example: small enough to read end-to-end in 10 minutes, but it exercises every artifact (research with brief-anchoring, plan with manifests, progress.json with multi-step git history). It demonstrates how the plan_version: 1.7 schema looks in real life, including the manifest YAML block per step and the must_contain list-of-dicts form.
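
As a rough idea of that shape, a step manifest might look like the sketch below. Treat everything except the must_contain name and its list-of-dicts form as a placeholder: the field names, dict keys, and file paths are illustrative, and the example's own plan.md is the authoritative reference for the schema.

# illustrative sketch, not the authoritative plan_version: 1.7 schema
manifest:
  files:                       # placeholder field name
    - src/cli.mjs              # hypothetical path
  must_contain:                # the list-of-dicts form mentioned above
    - file: src/cli.mjs        # dict keys are placeholders
      text: "--verbose"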

What to study first:

  1. brief.md — note the explicit Out of scope section and concrete Success Criteria (no "make it work" hand-waving).
  2. plan.md Step 1 — note that the FIRST step captures golden output before any behavior change. This is the stability harness pattern.
  3. plan.md Step 5 — note that this step touches 5 files in one commit, and the plan justifies the deviation from the 12-file guideline. The plan-critic should accept that justification.
  4. progress.json — every step records both commit_sha and verify_passed, which is what lets /ultraexecute-local resume from the last completed step. A minimal entry is sketched after this list.
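
For orientation, a single progress.json step entry might look like the minimal sketch below; commit_sha and verify_passed are the fields called out in point 4, while the step key and all values are placeholders rather than the real schema.

{
  "step": 5,
  "commit_sha": "abc1234def",
  "verify_passed": true
}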

Regeneration

Each example has a REGENERATED.md documenting the version it was calibrated against. When the artifact format changes, the example needs to be rebuilt. See the REGENERATED.md in each example for the exact triggers and procedure.
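
If you are writing one from scratch, a REGENERATED.md rarely needs more than an outline like this (all values are placeholders):

Calibrated against: <plugin version and plan_version>
Calibration date: <date>
Rebuild triggers: <which artifact-format changes force a rebuild>
Procedure: <how to re-run the pipeline and re-verify the artifacts>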

Adding a new example

If you have a small, realistic task (touches 1-3 files, has a clear success criterion, finishes in under 30 minutes) and want to add it as an example:

  1. Create examples/NN-slug-here/ with the same four artifacts (a scaffolding sketch follows this list).
  2. Add a REGENERATED.md documenting the calibration date and version.
  3. Add a section to this README under ## Examples.
  4. Open an issue on the marketplace describing what the example teaches that 01 doesn't already teach.
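
One way to scaffold step 1, assuming you already have a finished run in a project directory like the one from the pipeline snippet above (paths are placeholders):

mkdir -p examples/NN-slug-here
cp -r .claude/projects/2026-05-01-my-task/{brief.md,research,plan.md,progress.json} examples/NN-slug-here/

REGENERATED.md (step 2) is then written by hand.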