# Feedback Log: {{PROJECT_NAME}}

> Append-only. One row per pipeline run. Reviewed by performance-scorer.sh.

## Feedback Table

| Date | Pipeline | Agent | Score | Issue | Resolution | Pattern |
|------|----------|-------|-------|-------|------------|---------|
| {{DATE}} | {{PIPELINE_NAME}} | {{AGENT_NAME}} | {{SCORE}}/100 | {{ISSUE_DESCRIPTION}} | {{RESOLUTION}} | {{PATTERN_TAG}} |

## Pattern Tags

Use consistent tags so performance-scorer.sh can detect recurring issues:

- `quality-low` — output below acceptance threshold
- `loop-excess` — more revision iterations than expected
- `timeout` — agent exceeded time budget
- `tool-fail` — tool call failed or returned unexpected result
- `cost-spike` — single run cost exceeded 3x average
- `scope-drift` — agent worked outside defined scope
- `hallucination` — output contained factual errors

## Notes

Scores are 0–100 as assigned by the reviewer agent or human reviewer. A score below 60 triggers a flag in performance-scorer.sh. Three or more rows with the same Pattern tag = recurring issue. Recurring issues should drive prompt iteration or pipeline redesign.
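The two checks described in the notes (score below 60, three or more rows sharing a Pattern tag) could be implemented as shown in the sketch below. This is a hypothetical illustration, not the actual performance-scorer.sh; it assumes the seven-column table layout above, with Score in column 4 formatted as `NN/100` and Pattern in column 7.

```shell
#!/bin/sh
# Hypothetical sketch of the checks a scorer like performance-scorer.sh
# might run over this log. Assumes the 7-column table above.

score_log() {
    log="$1"

    # Flag individual runs scoring below 60. With -F'|' a table row
    # "| a | ... |" puts column 4 (Score) in $5 and column 7 (Pattern)
    # in $8. Matching /\/100/ skips the header and separator rows.
    awk -F'|' '/\/100/ {
        score = $5 + 0                      # awk reads the leading digits of " 42/100 "
        if (score < 60) printf "FLAG low score: %d/100\n", score
    }' "$log"

    # Count Pattern tags; three or more rows with the same tag is a
    # recurring issue.
    awk -F'|' 'NF >= 8 {
        tag = $8
        gsub(/[ `]/, "", tag)               # strip padding spaces and backticks
        if (tag != "" && tag != "Pattern" && tag !~ /^-+$/) count[tag]++
    }
    END {
        for (t in count)
            if (count[t] >= 3) printf "RECURRING %s (%d rows)\n", t, count[t]
    }' "$log"
}
```

A reviewer could run this as `score_log FEEDBACK.md` after each pipeline run; because the log is append-only, the tag counts only ever grow, so a recurring-issue flag never silently disappears.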