# Examples
Complete, calibrated walk-throughs of the ultraplan-local pipeline for realistic tasks. Each example shows the four artifacts a project directory contains after a full run:
- `brief.md` — task brief from `/ultrabrief-local`
- `research/*.md` — research briefs from `/ultraresearch-local`
- `plan.md` — implementation plan from `/ultraplan-local`
- `progress.json` — execution log from `/ultraexecute-local`
These are hand-calibrated, not LLM-generated. The point is to give anyone forking the repo a deterministic reference: what the artifacts look like when everything goes right, for a small but real task.
## Running the pipeline yourself
For your own work, point the four commands at a real project directory:
```
mkdir -p .claude/projects/2026-05-01-my-task
/ultrabrief-local
/ultraresearch-local --project .claude/projects/2026-05-01-my-task
/ultraplan-local --project .claude/projects/2026-05-01-my-task
/ultraexecute-local --project .claude/projects/2026-05-01-my-task
```
The artifacts in each example mirror that flow.
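To make the expected end state concrete, here is a minimal sketch of the layout such a run should leave behind. This recreates the shape by hand for illustration only; the `research/` file name is an invented example, and real runs produce the files with content.

```shell
# Hypothetical: rebuild the expected artifact layout by hand to see its shape.
# The research/ file name (argv-parsing.md) is an invented placeholder.
dir=.claude/projects/2026-05-01-my-task
mkdir -p "$dir/research"
touch "$dir/brief.md" "$dir/plan.md" "$dir/progress.json"
touch "$dir/research/argv-parsing.md"
ls "$dir"
# → brief.md  plan.md  progress.json  research
```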
## Examples

### 01-add-verbose-flag
Task: add a `--verbose` flag to a small CLI parser. Touches one parser file and six command handlers; adds two tests.

Why this example: small enough to read end-to-end in 10 minutes, but it exercises every artifact (research with brief-anchoring, a plan with manifests, a `progress.json` with multi-step git history). It demonstrates what the `plan_version: 1.7` schema looks like in real life, including the manifest YAML block per step and the `must_contain` list-of-dicts form.
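As a rough illustration of the shapes named above, here is a hedged sketch of a step's manifest block using the `must_contain` list-of-dicts form. Only `plan_version: 1.7` and `must_contain` come from this README; every other key and value is a hypothetical placeholder, not the actual schema — `01-add-verbose-flag/plan.md` is the authoritative reference.

```yaml
# Hypothetical sketch — keys other than plan_version and must_contain
# are invented placeholders; consult the example's plan.md for the real schema.
plan_version: 1.7
steps:
  - name: capture-golden-output        # invented key
    files:                             # invented key
      - src/parser.js
    must_contain:                      # list-of-dicts form named in this README
      - file: src/parser.js            # invented dict keys
        text: "--verbose"
```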
What to study first:
- `brief.md` — note the explicit `Out of scope` section and concrete `Success Criteria` (no "make it work" hand-waving).
- `plan.md` Step 1 — note that the FIRST step captures golden output before any behavior change. This is the stability-harness pattern.
- `plan.md` Step 5 — note that this step touches 5 files in one commit, and the plan justifies the deviation from the 1–2 file guideline. The plan-critic should accept that justification.
- `progress.json` — every step has both `commit_sha` and `verify_passed`. A resume picks up from the last completed step.
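A hedged sketch of the `progress.json` shape described above. Only `commit_sha` and `verify_passed` are named in this README; the surrounding structure and all values are invented placeholders (the all-zero SHA is deliberately fake) — the real file in `01-add-verbose-flag/` is the authoritative reference.

```json
{
  "steps": [
    {
      "step": 1,
      "commit_sha": "0000000000000000000000000000000000000000",
      "verify_passed": true
    }
  ]
}
```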
## Regeneration

Each example has a `REGENERATED.md` documenting the version it was calibrated against. When the artifact format changes, the example needs to be rebuilt. See the `REGENERATED.md` file in each example for triggers and procedure.
## Adding a new example
If you have a small, realistic task (touches 1-3 files, has a clear success criterion, finishes in under 30 minutes) and want to add it as an example:
- Create `examples/NN-slug-here/` with the same four artifacts.
- Add a `REGENERATED.md` documenting the calibration date and version.
- Add a section to this README under `## Examples`.
- Open an issue on the marketplace describing what the example teaches that 01 doesn't already teach.