Meta-awareness tools for healthy AI interaction patterns. Detects reinforcement loops, scope escalation, and compulsive patterns.
| name | description | argument-hint | allowed-tools |
|------|-------------|---------------|---------------|
| interaction-report | Interaction pattern report from Layer 2 session data | `[weekly\|monthly\|all]` | |
# Interaction Awareness Report
You are generating an interaction awareness report from JSONL session data.
## Step 1 — Layer guard
Read the file `.claude/ai-psychosis.local.md` in the current working
directory. If the file does not exist, or if its YAML frontmatter does not
contain `layer3: true`, stop and output:

```
Layer 3 (reports) is not enabled for this project.
To enable, create `.claude/ai-psychosis.local.md`:

---
layer2: true
layer3: true
layer4: false
---

Then restart Claude Code.
```
Do not continue past this step if Layer 3 is not enabled.
Also note the value of `layer4` (true or false) — you will need it in Step 9.
## Step 2 — Parse arguments
The time period is determined by `$ARGUMENTS`:

| Argument | Period | Cutoff |
|----------|--------|--------|
| (empty) | Last 7 days | Today minus 7 days |
| weekly | Last 7 days | Today minus 7 days |
| monthly | Last 30 days | Today minus 30 days |
| all | All data | No cutoff |
If `$ARGUMENTS` is anything else, output:

```
Usage: /interaction-report [weekly|monthly|all]

  weekly   Last 7 days (default)
  monthly  Last 30 days
  all      All recorded data
```
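The argument-to-cutoff mapping above can be sketched in Python. This is a minimal illustration, not the plugin's implementation; the function name `cutoff_for` and the choice to return `None` for "all" are assumptions, while the period lengths and the ISO timestamp format match the table and the record examples later in this document:

```python
from datetime import datetime, timedelta, timezone

# Period name -> lookback in days; None means "no cutoff" (hypothetical mapping).
PERIODS = {"": 7, "weekly": 7, "monthly": 30, "all": None}

def cutoff_for(arg: str):
    """Return the ISO-8601 cutoff string for a period argument, or None for 'all'."""
    if arg not in PERIODS:
        raise ValueError("Usage: /interaction-report [weekly|monthly|all]")
    days = PERIODS[arg]
    if days is None:
        return None  # "all": keep every record
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    # Same shape as the timestamps stored in sessions.jsonl / events.jsonl.
    return cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")
```

Because the cutoff has the same textual shape as the stored timestamps, it can later be compared to them as a plain string.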
## Step 3 — Locate data files
Run via Bash: `echo $CLAUDE_PLUGIN_DATA`
If the result is empty, use the fallback path `~/.claude/plugins/data/ai-psychosis`.
Check that both files exist:

- `{data_dir}/sessions.jsonl`
- `{data_dir}/events.jsonl`
If neither file exists, output:

```
No interaction data found.
Layer 2 (programmatic detection) collects data during active sessions.
Ensure Layer 2 is enabled and use Claude Code normally — data accumulates
automatically. Then run /interaction-report again.
```
If only events.jsonl is missing, proceed with sessions data only and note
"Tool usage data not available" in the report.
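The directory resolution described in this step can be sketched as a small Python helper. The function name `resolve_data_dir` is hypothetical; the environment variable and fallback path are taken from the text above:

```python
import os
from pathlib import Path

def resolve_data_dir() -> Path:
    """Use $CLAUDE_PLUGIN_DATA if set and non-empty, else the documented fallback."""
    env = os.environ.get("CLAUDE_PLUGIN_DATA", "").strip()
    if env:
        return Path(env)
    return Path.home() / ".claude" / "plugins" / "data" / "ai-psychosis"
```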
## Step 4 — Read data
### Size check
Run via Bash: `wc -l {data_dir}/sessions.jsonl {data_dir}/events.jsonl 2>/dev/null || true`
If a file does not exist, skip it and treat its line count as 0.
### Read sessions.jsonl
If the file has fewer than 1000 lines, read the entire file.
If larger, read the last 1000 lines (via Bash: `tail -n 1000 {data_dir}/sessions.jsonl`).
### Read events.jsonl
If the file has fewer than 5000 lines, read the entire file.
If larger and period is weekly: read the last 5000 lines.
If larger and period is monthly or all: read the last 10000 lines and note
"Events data sampled (last N entries)" in the report.
## Step 5 — Parse and filter records
### sessions.jsonl record types
The file contains two record types interleaved:
**Start records** — have `hour` and `is_late_night`, but NO `end` or `duration_min`:

```json
{"session_id":"abc","start":"2026-04-05T10:00:00Z","hour":10,"is_late_night":false}
```

**End records** — have `end`, `duration_min`, `tool_count`, `edit_count`, `flags`:

```json
{"session_id":"abc","start":"2026-04-05T10:00:00Z","end":"2026-04-05T11:35:00Z","duration_min":95,"tool_count":47,"edit_count":12,"flags":{"dependency":2,"escalation":0,"fatigue":1,"validation":1}}
```

**Error records** — have `note: "no_state_file"`. Ignore these.
### Filtering
For the selected time period, keep records whose `start` field is greater
than or equal to the cutoff date string (ISO timestamps sort
lexicographically, so plain string comparison works correctly).

Separate start records from end records:

- End records (have `duration_min`): use for duration, tools, flags
- Start records (have `is_late_night`): use for late-night count
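The filter-and-separate logic can be sketched as follows. This is an illustrative sketch (the function name `split_records` is an assumption); the field names, the `no_state_file` error marker, and the string-comparison trick come from this document:

```python
import json

def split_records(lines, cutoff):
    """Filter session records by start >= cutoff (plain string comparison on
    ISO timestamps), drop error records, and separate start from end records."""
    starts, ends = [], []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        rec = json.loads(line)
        if rec.get("note") == "no_state_file":
            continue  # error record: ignore
        if cutoff is not None and rec.get("start", "") < cutoff:
            continue  # outside the reporting period
        if "duration_min" in rec:
            ends.append(rec)       # end record: duration, tools, flags
        elif "is_late_night" in rec:
            starts.append(rec)     # start record: late-night count
    return starts, ends
```

Passing `cutoff=None` keeps all records, matching the "all" period.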
### events.jsonl
Filter events where `ts` >= cutoff date string. Group by `tool_name` and count.
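The event grouping can be sketched with `collections.Counter`. A hedged illustration (the function name `tool_usage` is an assumption); the `ts` and `tool_name` fields are the ones this document describes:

```python
import json
from collections import Counter

def tool_usage(event_lines, cutoff):
    """Count events per tool_name at or after the cutoff; top 10, descending."""
    counts = Counter()
    for line in event_lines:
        ev = json.loads(line)
        if cutoff is None or ev.get("ts", "") >= cutoff:
            counts[ev["tool_name"]] += 1
    return counts.most_common(10)
```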
## Step 6 — Compute statistics
From end records:

- Total sessions (count of end records in period)
- Average session duration (`sum(duration_min) / count`)
- Total tool calls (`sum(tool_count)`)
- Average edit ratio (`sum(edit_count) / sum(tool_count) * 100`, as a percentage)
- Flag totals: `sum(flags.dependency)`, `sum(flags.escalation)`, `sum(flags.fatigue)`, `sum(flags.validation)`
- Average flags per session for each category

From start records:

- Late-night sessions: count where `is_late_night` is true

From events.jsonl:

- Tool usage: group by `tool_name`, count occurrences, sort descending
- Show the top 10 tools
Trend comparison (weekly and monthly only):
- Compute the same metrics for the PREVIOUS period of equal length
- Calculate the delta (current minus previous)
If previous period has zero sessions, skip the trend section.
Sessions without matching end records are incomplete — count them separately as "incomplete sessions" and exclude from duration/flag averages.
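The per-period aggregation above can be sketched in one function. This is an illustrative sketch, not the plugin's code; the function name `compute_stats` and the output dictionary keys are assumptions, while the formulas and the incomplete-session rule (a start record with no matching end record by `session_id`) follow this step:

```python
def compute_stats(starts, ends):
    """Aggregate Step 6 metrics from separated start/end records."""
    n = len(ends)
    total_tools = sum(r["tool_count"] for r in ends)
    total_edits = sum(r["edit_count"] for r in ends)
    keys = ("dependency", "escalation", "fatigue", "validation")
    flag_totals = {k: sum(r["flags"].get(k, 0) for r in ends) for k in keys}
    end_ids = {r["session_id"] for r in ends}
    return {
        "sessions": n,
        "avg_duration_min": round(sum(r["duration_min"] for r in ends) / n, 1) if n else 0,
        "total_tool_calls": total_tools,
        "avg_edit_ratio_pct": round(100 * total_edits / total_tools, 1) if total_tools else 0,
        "flag_totals": flag_totals,
        "flags_per_session": {k: (round(v / n, 2) if n else 0) for k, v in flag_totals.items()},
        "late_night_sessions": sum(1 for r in starts if r.get("is_late_night")),
        # Start records with no matching end record are incomplete sessions;
        # they are counted here and excluded from every average above.
        "incomplete_sessions": sum(1 for r in starts if r["session_id"] not in end_ids),
    }
```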
## Step 7 — Format report
Output the report as markdown. Use this exact structure:
## Interaction Awareness Report
**Period:** {start_date} to {end_date} ({N} days)
**Sessions:** {N} completed ({N} incomplete)
**Data source:** {path}
### Overview
| Metric | Value |
|--------|-------|
| **Sessions** | {N} |
| **Avg duration** | {N} min |
| **Total tool calls** | {N} |
| **Avg edit ratio** | {N}% |
| **Late-night sessions** | {N} |
### Pattern Flags
| Pattern | Total | Per session |
|---------|-------|-------------|
| Dependency language | {N} | {avg} |
| Escalation language | {N} | {avg} |
| Fatigue signals | {N} | {avg} |
| Validation-seeking | {N} | {avg} |
### Tool Usage (top 10)
| Tool | Count | % |
|------|-------|---|
| {name} | {N} | {pct}% |
### Daily Activity
| Date | Sessions | Total duration | Flags |
|------|----------|----------------|-------|
| {date} | {N} | {N} min | {summary} |
### Trend vs previous {period}
| Metric | Previous | Current | Delta |
|--------|----------|---------|-------|
| Sessions | {N} | {N} | {+/-N} |
| Avg duration | {N} min | {N} min | {+/-N} |
| Flags (total) | {N} | {N} | {+/-N} |
### Observations
- {data-driven observation}
- {data-driven observation}
## Step 8 — Tone and privacy rules
MANDATORY:
- Neutral, observational tone. You are presenting data, not making judgments.
- Never use words like "concerning", "worrying", "problematic", or "unhealthy".
- Never use emoji.
- Never speculate about what the user was doing or thinking.
- Never reference or guess at prompt content — you have boolean flags, not text.
- This is a mirror, not a diagnosis. Present the numbers and let the user interpret them.
- Observations section: state facts derived from data only. Examples:
  - "3 of 12 sessions were between 23:00 and 05:00"
  - "Dependency language flags appeared in 7 of 12 sessions"
  - "Edit ratio averaged 8%, below the 10% threshold in 5 sessions"
- If all metrics are within normal ranges, say so plainly: "All metrics within normal ranges for the reporting period."
- Omit any section that has no data (e.g., skip Trend if no previous period, skip Tool Usage if events.jsonl was missing).
## Step 9 — Contemplative reference (conditional)
This step applies ONLY when BOTH conditions are met:

- `layer4: true` was noted in Step 1
- Total flags (dependency + escalation + fatigue + validation) >= 5, OR fatigue flags >= 2
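The trigger condition can be sketched as a single predicate. The function name `should_append_note` is an assumption; the thresholds and flag categories are exactly those stated above:

```python
def should_append_note(layer4_enabled: bool, flags: dict) -> bool:
    """True only when layer4 is enabled AND the flag thresholds are crossed."""
    total = sum(flags.get(k, 0) for k in ("dependency", "escalation", "fatigue", "validation"))
    return layer4_enabled and (total >= 5 or flags.get("fatigue", 0) >= 2)
```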
If both conditions are met, append this exact paragraph to the report. Do not modify, paraphrase, abbreviate, or add commentary to this text:
### A note from the plugin author
The patterns above are structural — they emerge from the interaction itself,
not from individual weakness. If you find yourself wanting to understand the
mechanics of your own mind more deeply, the
[Miracle of Mind](https://isha.sadhguru.org/global/en/miracle-of-mind)
program by Sadhguru offers a structured approach. This is what works for me.
It is not a recommendation — just a pointer.
If either condition is not met, omit this section entirely. Do not mention Layer 4, do not explain why the section was omitted.