feat: initial companion repo for CC-001 Agent Loop

commit 06828d22f9

5 changed files with 510 additions and 0 deletions

exercises/03-design-for-the-loop.md

# Exercise 03: Design for the Loop

**Concept:** The Agent Loop (CC-001)

**Level:** Advanced

**Time:** ~20 minutes

---

## Objective

Design a multi-step task that takes advantage of the agent loop's
strengths. You will write a prompt that requires Claude to gather
information from multiple sources, make decisions based on what it
finds, and produce a structured output.

This exercise has no step-by-step walkthrough. You get a scenario,
constraints, and success criteria. How you get there is up to you.

---

## Before You Start

Confirm you have:

- [ ] Completed Exercises 01 and 02
- [ ] Claude Code open in this directory
- [ ] Internet access (Claude will use WebSearch)

---

## The Scenario

You want to evaluate whether a specific open-source tool is worth
adopting for your workflow. Instead of spending an hour researching
manually, you will write one prompt that makes Claude do the research
for you.

## Your Task

Write a prompt that makes Claude:

1. **Search** for information about a tool you are curious about
   (pick any real tool: a CLI tool, a library, a framework)
2. **Read** the tool's documentation or README
3. **Compare** it against an alternative you already know
4. **Write** a structured evaluation to a file called `evaluation.md`

The evaluation file must include:

- Tool name and what it does (one sentence)
- Three strengths
- Three weaknesses or limitations
- A direct comparison with the alternative
- A verdict: adopt, wait, or skip

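For reference, a minimal skeleton for `evaluation.md` might look like the sketch below. The section titles are illustrative, not required wording; any layout that covers the listed items works.

```markdown
# Evaluation: [tool X]

**What it is:** [one sentence]

## Strengths
1. ...
2. ...
3. ...

## Weaknesses
1. ...
2. ...
3. ...

## Comparison: [tool X] vs [tool Y]
| Criterion | [tool X] | [tool Y] |
|-----------|----------|----------|
| ...       | ...      | ...      |

## Verdict
Adopt / wait / skip, because ...
```
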
## Constraints

- Your prompt must be a single message (no back-and-forth)
- Do not tell Claude which tools to use (let the loop decide)
- The output must be a single file, not multiple files
- Use Plan Mode first to preview, then execute

## Success Criteria

- [ ] Claude made at least 5 tool calls to complete the task
- [ ] The evaluation file exists and contains all required sections
- [ ] The strengths and weaknesses are specific (not generic praise)
- [ ] The comparison is direct, not just two separate descriptions
- [ ] The verdict gives a clear recommendation with reasoning

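If you want a mechanical sanity check of the second criterion, a small helper like this can grep the file for each required section. This is a sketch, not part of the exercise, and the keyword list is an assumption about how you will title the sections.

```bash
# Hypothetical helper: rough check that the evaluation file touches
# each required section. Adjust the keyword list to match your titles.
check_evaluation() {
  local file="${1:-evaluation.md}"
  local missing=0
  for word in strength weakness comparison verdict; do
    # -q: quiet, -i: case-insensitive; a miss marks the file incomplete
    if ! grep -qi "$word" "$file" 2>/dev/null; then
      echo "MISSING: $word"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "all required sections present"
  fi
}
```

Run `check_evaluation evaluation.md` after Claude finishes; any `MISSING:` line points at a section to ask for in a follow-up.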
---

## Hints (only if stuck)

<details>
<summary>Hint 1: Prompt structure</summary>

A good prompt for the agent loop includes:

- What to research (specific tool name)
- What to compare against (specific alternative)
- What output to produce (file name and structure)
- What NOT to do (no installation, no code generation)

</details>

<details>
<summary>Hint 2: Example prompt skeleton</summary>

```
Research [tool X] and compare it to [tool Y] for [use case].

Search the web for recent reviews and the official docs.
Write an evaluation to evaluation.md with:
- One-sentence description of each tool
- 3 specific strengths of [tool X]
- 3 specific weaknesses of [tool X]
- Direct comparison table: [tool X] vs [tool Y]
- Verdict: adopt, wait, or skip, with reasoning

Do not install anything. Do not write code. Research only.
```

</details>

---

## Reflection

After completing this exercise, consider:

- How many tool calls did Claude make? (Check the terminal output)
- Did the loop take any detours? Were they useful or wasteful?
- What would you change in your prompt to get a better result?
- How long would this research have taken you manually?

The agent loop is most powerful when the task requires gathering
and synthesizing information from multiple sources. A single prompt
replaces what used to be 30 minutes of tab-switching and note-taking.

---

## What You Learned

- **Single-prompt, multi-step tasks** are where the agent loop shines
- **Output structure in the prompt** gives the loop a clear target
- **Plan Mode first** lets you verify the approach before committing
- **The loop handles the how:** you specify the what and the where

---

## Clean Up

```bash
rm evaluation.md
```

---

## What Comes Next

You now understand the agent loop: what it is (Exercise 01), how to
control it (Exercise 02), and how to design tasks for it (Exercise 03).

Next concepts to explore:

- **CC-002 Built-in Tools** - The specific tools the loop uses
- **CC-010 CLAUDE.md** - How to give the loop standing instructions
- **CC-006 Permissions** - How to control what the loop is allowed to do