Exercise 03: Prompt-Based Approval
Concept: Hooks (CC-024) | Level: Advanced | Time: ~20 minutes
Objective
Create a hook where Claude itself evaluates whether to proceed with a file edit. Instead of a static blocklist, the hook sends a prompt to Claude asking "is this change risky?" and uses the answer to allow or block the action.
This is AI-powered guardrails: one Claude instance watches another.
Before You Start
Confirm you have completed Exercises 01 and 02, or at minimum:
- Claude Code installed with an API key configured
- `.claude/settings.json` exists in this repo
- You understand how exit codes control hook decisions
Background: Prompt Hooks
Claude Code supports two hook types:

| Type | What it does |
|---|---|
| `command` | Runs a shell script. Exit 0 to allow, 2 to block. |
| `prompt` | Sends a prompt to Claude. Claude responds with `allow` or `block`. |
Prompt hooks let you express approval logic in plain language rather than regex patterns or shell conditionals. The tradeoff: they cost API tokens and add latency to every hook invocation.
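For contrast, here is what the same kind of gate looks like as a command hook: a minimal sketch, assuming the hook receives its payload as JSON on stdin (as in Exercises 01 and 02) and that `jq` is available. The hard-coded README.md rule is illustrative, not part of this exercise.

```bash
#!/usr/bin/env bash
# Command-hook counterpart: a fixed rule instead of AI judgment.
# Reads the hook payload from stdin and blocks writes to README.md.
payload=$(cat)
file_path=$(echo "$payload" | jq -r '.tool_input.file_path // empty')

if [ "$file_path" = "README.md" ]; then
  echo "Blocked: overwriting README.md is not allowed." >&2
  exit 2  # exit code 2 blocks the tool call
fi
exit 0    # exit code 0 allows it
```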
The prompt hook configuration looks like this:
```json
{
  "type": "prompt",
  "prompt": "The user asked Claude to make the following file edit:\n\n{tool_input}\n\nDoes this edit look like it could break existing functionality? Respond with exactly 'allow' or 'block'."
}
```
The `{tool_input}` placeholder is replaced with the serialized tool input at runtime.
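For a Write call, the substituted value looks something like the following. Treat the shape as illustrative; the exact serialization comes from the Write tool's input schema.

```json
{
  "file_path": "scratch.txt",
  "content": "hello world\n"
}
```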
Instructions
Step 1: Understand what you are building.
You will add a prompt hook on the Write tool. Before Claude writes any file, the hook will ask: "Is this edit likely to break existing tests or functionality?"
If Claude answers `block`, the write is prevented. If it answers `allow`, the write proceeds.
This is a demonstration of the concept, not a production guardrail. Real prompt hooks would use a more precise prompt and probably target specific file patterns (e.g., only `*.test.*` files).
Step 2: Update `.claude/settings.json` with the prompt hook.
Add a PreToolUse entry for the Write tool:
```json
{
"permissions": {
"allow": [
"Bash(ls:*)",
"Bash(cat:*)",
"Bash(echo:*)",
"Bash(pwd)",
"Bash(date)",
"Read",
"Write",
"Edit",
"Glob",
"Grep"
],
"deny": [
"Bash(rm -rf *)",
"Bash(sudo *)"
]
},
"hooks": {
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "bash hooks/pre-tool-use.sh"
}
]
},
{
"matcher": "Write",
"hooks": [
{
"type": "prompt",
"prompt": "Claude is about to write a file. The file path and content are:\n\n{tool_input}\n\nDoes this write look like it could overwrite important existing content or break existing functionality? If it is writing to a new file or making a clearly safe addition, respond 'allow'. If it is overwriting a file in a way that seems risky or destructive, respond 'block'. Respond with exactly one word: allow or block."
}
]
}
],
"PostToolUse": [
{
"matcher": ".*",
"hooks": [
{
"type": "command",
"command": "bash hooks/post-tool-use.sh"
}
]
}
]
}
}
```
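Before restarting, it is worth confirming the file still parses as valid JSON. Assuming `jq` is installed, a quick check:

```bash
jq . .claude/settings.json  # prints the parsed config, or an error pointing at the problem
```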
Step 3: Restart Claude Code to load the new settings.
```bash
claude
```
Step 4: Ask Claude to make a safe write.
Paste this prompt:
```
Write a file called scratch.txt with the content: "hello world"
```
This is a new file with harmless content. The prompt hook should evaluate it as safe and respond `allow`. The write proceeds.
Step 5: Ask Claude to make a risky write.
Paste this prompt:
```
Overwrite README.md with a single line: "This file has been cleared."
```
This destroys existing content. The prompt hook should evaluate it as risky and respond `block`. Claude will report that the write was blocked.
Step 6: Review what happened.
Check `hooks/audit.log` to see the tool calls that were logged, including the ones the prompt hook evaluated.
```bash
cat hooks/audit.log
```
Note that the blocked write does not appear in the log (it never ran), but the audit log still captures the preceding tool calls.
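Beyond the audit log, the file system itself confirms both outcomes. These commands just restate what Steps 4 and 5 should have produced:

```bash
cat scratch.txt      # should print: hello world
head -n 1 README.md  # should still show the original first line, not "This file has been cleared."
```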
Expected Output
After Step 4: `scratch.txt` is created. Claude confirms the write succeeded.
After Step 5: The write is blocked. Claude reports that the hook prevented it from overwriting `README.md`. The message from the prompt hook will indicate the block decision.
If the risky write is allowed, the prompt hook may have evaluated the edit differently. This is expected: prompt hooks are probabilistic. Try refining the prompt to be more explicit about what counts as risky.
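One way to refine it is to spell out concrete risk criteria instead of asking for a general judgment. This is an illustrative sketch, not the exercise's canonical answer:

```json
{
  "type": "prompt",
  "prompt": "Claude is about to write a file:\n\n{tool_input}\n\nTreat the write as risky if it replaces an existing file with substantially less content, removes documentation, or modifies configuration files. Treat it as safe if it creates a new file or adds clearly additive content. Respond with exactly one word: allow or block."
}
```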
What You Learned
- Prompt hooks add AI judgment to the pipeline. Instead of matching patterns, you describe the decision in plain language. Claude reads the context and decides.
- The `{tool_input}` placeholder carries the full context. The entire tool input is available to the evaluating Claude, including file paths and content. This gives the hook genuine information to reason from.
- Prompt hooks are latency-sensitive. Each invocation makes an API call. Apply them selectively to high-risk operations, not every tool.
- Prompt hooks are not deterministic. The same input can produce different outputs. For hard security requirements, use command hooks with explicit patterns. Prompt hooks are best for nuanced judgment calls.
Going Further
A few directions worth exploring once you have the basics working:
- Narrow the Write hook matcher to only trigger on specific paths: `"matcher": "Write.*\\.py$"` (Python files only)
- Add a PostToolUse prompt hook that summarizes what changed each session
- Combine command hooks for hard blocks with prompt hooks for soft approvals
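As a sketch of that last idea, both hook types can sit on the same matcher. Here `hooks/block-protected-paths.sh` is a hypothetical script handling the hard blocks; if either hook blocks, the write is blocked:

```json
{
  "matcher": "Write",
  "hooks": [
    {
      "type": "command",
      "command": "bash hooks/block-protected-paths.sh"
    },
    {
      "type": "prompt",
      "prompt": "Evaluate this write for risk:\n\n{tool_input}\n\nRespond with exactly 'allow' or 'block'."
    }
  ]
}
```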
The full Claude Code hooks documentation is at: https://docs.anthropic.com/en/docs/claude-code/hooks