
# Examples

Complete calibrated walk-throughs of the ultraplan-local pipeline for realistic tasks. Each example shows the four artifacts a project directory contains after a full run:

- `brief.md` — task brief from `/ultrabrief-local`
- `research/*.md` — research briefs from `/ultraresearch-local`
- `plan.md` — implementation plan from `/ultraplan-local`
- `progress.json` — execution log from `/ultraexecute-local`
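On disk, a completed project directory might look like this (a hypothetical layout; the research file name is illustrative, and a project may contain more than one research brief):

```
.claude/projects/2026-05-01-my-task/
├── brief.md
├── research/
│   └── parser-internals.md    # illustrative name
├── plan.md
└── progress.json
```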

These are hand-calibrated, not LLM-generated. The point is to give a forker a deterministic reference — what the artifacts look like when everything goes right, with a small but real task.

## Running the pipeline yourself

For your own work, point the four commands at a real project directory:

```
mkdir -p .claude/projects/2026-05-01-my-task
/ultrabrief-local
/ultraresearch-local --project .claude/projects/2026-05-01-my-task
/ultraplan-local --project .claude/projects/2026-05-01-my-task
/ultraexecute-local --project .claude/projects/2026-05-01-my-task
```

The artifacts in each example mirror that flow.

## Examples

### 01-add-verbose-flag

Task: add a `--verbose` flag to a small CLI parser. Touches one parser file and six command handlers; adds two tests.

Why this example: small enough to read end-to-end in 10 minutes, yet it exercises every artifact (research with brief-anchoring, a plan with manifests, a progress.json with multi-step git history). It demonstrates what the `plan_version: 1.7` schema looks like in practice — including the manifest YAML block per step and the `must_contain` list-of-dicts form.
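A step manifest in that schema might look roughly like this. This is a hand-written sketch, not copied from the example: only `plan_version` and the `must_contain` list-of-dicts form come from this README; the other field names are assumptions.

```yaml
plan_version: 1.7
steps:
  - id: 1                       # field names below are illustrative
    files:
      - src/parser.ts
    must_contain:               # list-of-dicts form
      - file: src/parser.ts
        pattern: "verbose"
```

Consult the actual `plan.md` in `01-add-verbose-flag` for the authoritative shape.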

What to study first:

  1. `brief.md` — note the explicit Out of scope section and the concrete Success Criteria (no "make it work" hand-waving).
  2. `plan.md` Step 1 — the FIRST step captures golden output before any behavior change. This is the stability-harness pattern.
  3. `plan.md` Step 5 — this step touches 5 files in one commit, and the plan justifies the deviation from the 12-file guideline. Plan-critic should accept that justification.
  4. `progress.json` — every step records both `commit_sha` and `verify_passed`; execution resumes from the last completed step.
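The resume behavior implied by point 4 can be sketched in a few lines of Python. This is a hypothetical illustration: the `steps` wrapper and the exact shapes are assumptions; only the `commit_sha` and `verify_passed` fields come from this README.

```python
import json

def next_step(progress: dict) -> int:
    """Return the first 1-based step index that is not completed and verified.

    Assumes a hypothetical schema:
    {"steps": [{"commit_sha": str | null, "verify_passed": bool}, ...]}
    """
    for i, step in enumerate(progress["steps"], start=1):
        # A step counts as done only when it has a commit AND passed verification.
        if not (step.get("commit_sha") and step.get("verify_passed")):
            return i
    return len(progress["steps"]) + 1  # every step is done

# Steps 1-2 committed and verified, step 3 not yet run:
progress = json.loads("""
{"steps": [
  {"commit_sha": "a1b2c3d", "verify_passed": true},
  {"commit_sha": "e4f5a6b", "verify_passed": true},
  {"commit_sha": null, "verify_passed": false}
]}
""")
print(next_step(progress))  # → 3
```

A step with a `commit_sha` but a failed verification is treated as incomplete, so a resumed run would retry it rather than skip it.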

## Regeneration

Each example has a `REGENERATED.md` documenting the version it was calibrated against. When the artifact format changes, the example needs to be rebuilt; see the `REGENERATED.md` in each example for triggers and procedure.

## Adding a new example

If you have a small, realistic task (touches 1-3 files, has a clear success criterion, finishes in under 30 minutes) and want to add it as an example:

  1. Create `examples/NN-slug-here/` with the same four artifacts.
  2. Add a `REGENERATED.md` documenting the calibration date and version.
  3. Add a section to this README under `## Examples`.
  4. Open an issue on the marketplace describing what the example teaches that 01 doesn't already teach.