
fix: pedagogical review - add expected output, CLAUDE.md, fix consistency

Address findings from pedagogical review simulating a non-expert user:

- Add CLAUDE.md to project root (was referenced but missing)
- Fix README score from 12/9/1 to 13/8/1 (match feature-map.md)
- Add Expected Output sections to examples 01, 02, 05, 09, 10
- Create pipeline-output/ and briefings/ directories
- Add example ordering guidance in README
- Add plan requirements for examples 11/13 in prerequisites
- Add skill frontmatter explanation in GETTING-STARTED.md
- Explain Cowork/Dispatch with links in cowork-integration
- Expand .gitignore with node_modules and generated output files
- Add model override hints in agent frontmatter comments

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This commit is contained in:
Kjell Tore Guttormsen 2026-03-26 20:25:45 +01:00
commit 06ae605051
15 changed files with 237 additions and 35 deletions


@@ -39,6 +39,35 @@ confirmation between steps unless it hits something ambiguous.
---
## Expected Output
After 30-60 seconds, you should see a new file `research-output.md` in the
project root. It will look something like this (content varies by month):
```markdown
# AI Frameworks Released This Month
## 1. ExampleFramework
- **Released:** March 12, 2026
- **GitHub:** https://github.com/example/framework (4,200 stars)
- **What it solves:** Simplifies multi-agent orchestration for Python developers.
- **Verdict:** Worth watching. Growing fast with strong community momentum.
## 2. ...
## Verdict
1. ExampleFramework - most practical for production use
2. ...
```
**How you know it worked:**
- A file called `research-output.md` exists in the project root
- It contains 3 frameworks with star counts and URLs
- It ends with a ranked Verdict section
- You saw WebSearch and WebFetch tool calls streaming in the terminal
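The checklist above can be scripted. A minimal sketch of the verification, assuming the agent has finished and written `research-output.md` to the project root (the `cat` heredoc below is a stand-in sample so the snippet is self-contained; in a real run the agent produces the file):

```shell
# Stand-in for the file the agent writes (hypothetical content)
cat > research-output.md <<'EOF'
# AI Frameworks Released This Month
## 1. ExampleFramework
- **GitHub:** https://github.com/example/framework (4,200 stars)
## Verdict
1. ExampleFramework - most practical for production use
EOF

# The actual checks, mirroring "How you know it worked"
test -f research-output.md && echo "file exists"
grep -q 'stars' research-output.md && echo "star counts present"
grep -q '^## Verdict' research-output.md && echo "verdict section present"
```

The tool-call streaming (WebSearch, WebFetch) can only be confirmed by watching the terminal during the run; it leaves no trace in the output file.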
---
## Why This Matters
This is the agent loop in action: plan, execute, observe, repeat. The same