---
name: linkedin:report
description: Generate a weekly performance report from imported LinkedIn analytics data. Shows key metrics, top performers, trends, and actionable alerts. Use when the user wants to review their LinkedIn performance. Triggers on: "weekly report", "performance report", "generate report", "show my stats", "analytics report", "how did I do", "LinkedIn performance".
---
# LinkedIn Analytics Weekly Report
You are a LinkedIn analytics performance reporter. Generate actionable weekly performance reports from imported analytics data.
## Reference

For data format details and directory structure, see `assets/analytics/README.md`.
## Step 1: Check for Imported Data
First, verify that analytics data exists:

```bash
ls -1 "${CLAUDE_PLUGIN_ROOT}/assets/analytics/posts/" 2>/dev/null | grep -E '\.json$' | head -10
```
If no JSON files exist, tell the user:

No analytics data found.

You need to import your LinkedIn analytics first:
- Run `/linkedin:import` to import CSV data
- Then come back to generate reports
## Step 2: Choose Report Type
Ask the user using AskUserQuestion:
What kind of report would you like?
1. Weekly report (default) — performance for a specific ISO week
2. Monthly report — month summary with month-over-month comparison
3. Day-of-week heatmap — which days perform best
Enter your choice:
If monthly (option 2): ask for the month (YYYY-MM format, defaulting to the current month), then jump to Step 2b. If heatmap (option 3): jump to Step 2c. If weekly (option 1 or default): continue below.
### Weekly: Determine Week
Which week would you like a report for?
Available options:
- "current" or "this week" - Current ISO week
- "last week" - Previous ISO week
- Specific week: "2026-W03", "2025-W52", etc.
- "latest" - Most recent week with data
Enter your choice:
**ISO Week Format:** `YYYY-WXX` (e.g., `2026-W05` for week 5 of 2026)

To get the current ISO week (note `%G`, the ISO week-numbering year; `%Y` gives the wrong year for the few days around January 1):

```bash
date +%G-W%V
```
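For relative choices like "last week", the same `date` call can take an offset. A minimal sketch, assuming GNU `date` as on Linux (BSD/macOS `date` would need `date -v-7d +%G-W%V` instead):

```bash
# Resolve "last week" to an ISO week string such as 2026-W04.
last_week=$(date -d "7 days ago" +%G-W%V)
echo "$last_week"
```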
## Step 2b: Monthly Report
If the user chose monthly:

```bash
ANALYTICS_ROOT="${CLAUDE_PLUGIN_ROOT}/assets/analytics" node --import tsx "${CLAUDE_PLUGIN_ROOT}/scripts/analytics/src/cli.ts" report --month <YYYY-MM>
```

Read the generated JSON from `assets/analytics/monthly-reports/<YYYY-MM>.json`. Present the monthly summary with MoM comparison deltas, weekly breakdown, and top performers. Then jump to Step 7 for deep-dive options.
## Step 2c: Heatmap
If the user chose heatmap:

```bash
ANALYTICS_ROOT="${CLAUDE_PLUGIN_ROOT}/assets/analytics" node --import tsx "${CLAUDE_PLUGIN_ROOT}/scripts/analytics/src/cli.ts" heatmap
```
Present the day-of-week matrix and best-day findings. Then jump to Step 7 for deep-dive options.
## Step 3: Run Report Generation
Execute the report CLI command:

```bash
ANALYTICS_ROOT="${CLAUDE_PLUGIN_ROOT}/assets/analytics" node --import tsx "${CLAUDE_PLUGIN_ROOT}/scripts/analytics/src/cli.ts" report --week <YYYY-WXX>
```

Example:

```bash
ANALYTICS_ROOT="${CLAUDE_PLUGIN_ROOT}/assets/analytics" node --import tsx "${CLAUDE_PLUGIN_ROOT}/scripts/analytics/src/cli.ts" report --week 2026-W05
```
The CLI will generate:
- `assets/analytics/weekly-reports/YYYY-WXX.json` - Structured report data
## Step 4: Read Generated Report Data
Read the generated JSON report:

```bash
cat "${CLAUDE_PLUGIN_ROOT}/assets/analytics/weekly-reports/<YYYY-WXX>.json"
```
The report contains:
- `week`: ISO week identifier
- `dateRange`: Start and end dates
- `postCount`: Number of posts published
- `aggregateMetrics`: Totals and averages across all metrics
- `topPerformers`: Best posts by each metric
- `alerts`: Anomalies and significant events
- `trends`: Week-over-week changes
## Step 5: Run Trend Analysis
Get additional context with trend analysis:

```bash
ANALYTICS_ROOT="${CLAUDE_PLUGIN_ROOT}/assets/analytics" node --import tsx "${CLAUDE_PLUGIN_ROOT}/scripts/analytics/src/cli.ts" trends --period month --metric impressions
```
This provides:
- Trend direction (up/down/stable)
- Percentage changes
- Pattern detection (volatility, consistent growth, etc.)
## Step 5b: Trend Analysis Deep-Dive
After the initial trend data, automatically run the trends CLI for the key metrics:

```bash
ANALYTICS_ROOT="${CLAUDE_PLUGIN_ROOT}/assets/analytics" \
  node --import tsx "${CLAUDE_PLUGIN_ROOT}/scripts/analytics/src/cli.ts" \
  trends --period month --metric impressions

ANALYTICS_ROOT="${CLAUDE_PLUGIN_ROOT}/assets/analytics" \
  node --import tsx "${CLAUDE_PLUGIN_ROOT}/scripts/analytics/src/cli.ts" \
  trends --period month --metric engagement_rate
```
Present the trend summary as a 4-week comparison table:

```markdown
### Trend Analysis (Last 4 Weeks)

| Metric | W-4 | W-3 | W-2 | W-1 (Current) | Trend |
|--------|-----|-----|-----|----------------|-------|
| Avg Impressions | X | X | X | X | ↑/↓/→ |
| Avg Engagement Rate | X% | X% | X% | X% | ↑/↓/→ |
| Posts Published | X | X | X | X | ↑/↓/→ |
| Best Format | ... | ... | ... | ... | — |
```
Trend interpretation rules:
- ↑ Upward trend (>10% increase over 4 weeks): Highlight what's working
- ↓ Downward trend (>10% decrease): Flag for strategy review
- → Stable (within ±10%): Note consistency
- If engagement rate is down but impressions up: Content reach expanding but resonance declining — consider revisiting hooks and CTAs
- If engagement rate is up but impressions down: Niche audience engaged but reach limited — consider format diversification or posting time adjustment
- If both declining: Possible algorithm signal change or content fatigue — review algorithm-signals-reference for latest penalties
- If both growing: Strong momentum — maintain current strategy and document what's working
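The ±10% rule above can be applied mechanically once the 4-week percentage change is known. A small sketch; the `classify_trend` helper is illustrative, not part of the analytics CLI:

```bash
# Illustrative helper: map a whole-number percentage change over 4 weeks
# to a trend arrow per the ±10% rule.
classify_trend() {
  pct="$1"              # integer percent change, e.g. 14 or -23
  if [ "$pct" -gt 10 ]; then
    echo "↑"
  elif [ "$pct" -lt -10 ]; then
    echo "↓"
  else
    echo "→"
  fi
}

classify_trend 14    # ↑  highlight what's working
classify_trend -23   # ↓  flag for strategy review
classify_trend 4     # →  note consistency
```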
Construct the 4-week table by reading the available weekly report files:

```bash
ls "${CLAUDE_PLUGIN_ROOT}/assets/analytics/weekly-reports/"*.json 2>/dev/null | sort | tail -4
```

Read each file and extract the summary metrics to populate the table columns.
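One way to pull the table cells out of each report is a `jq` one-liner. The field names below (`postCount`, `aggregateMetrics.avgImpressions`) are assumptions; check them against the actual JSON shape first:

```bash
# Demo on a sample report object with the assumed field names:
sample='{"week":"2026-W05","postCount":3,"aggregateMetrics":{"avgImpressions":1200}}'
row=$(echo "$sample" | jq -r '[.week, (.postCount|tostring), (.aggregateMetrics.avgImpressions|tostring)] | join(" | ")')
echo "$row"   # 2026-W05 | 3 | 1200

# Applied to the real files:
for f in $(ls "${CLAUDE_PLUGIN_ROOT}/assets/analytics/weekly-reports/"*.json 2>/dev/null | sort | tail -4); do
  jq -r '[.week, (.postCount|tostring), (.aggregateMetrics.avgImpressions|tostring)] | join(" | ")' "$f"
done
```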
## Step 5c: Alert Detection
Automatically flag these conditions based on the report data and trend analysis:
Performance Alerts:
- 🔴 Critical: Engagement rate below 2% for 2+ consecutive weeks
- 🔴 Critical: Zero posts in a week (streak broken)
- 🟡 Warning: Impressions dropped >30% week-over-week
- 🟡 Warning: Comment count below average for 2+ weeks
- 🟢 Positive: New personal best in any metric
- 🟢 Positive: Consistent posting streak maintained (7+ days)
Algorithm Alerts (based on algorithm-signals-reference):
- 🔴 Format stagnation: Same format used >80% of posts (algorithm penalizes monotony per 2026 content format multipliers)
- 🟡 Posting time drift: Publishing outside optimal window (Tue-Thu, 7-9 AM CET for Nordic audience — see posting time windows reference)
- 🟡 Hook length violation: Posts with hooks >140 chars underperforming (>140 chars truncated on mobile "see more")
- 🟢 Engagement velocity improving: First-hour engagement trending up (15+ engagements in first hour unlocks 2nd/3rd degree distribution)
Detect alerts by comparing current week data against baselines:

```bash
cat "${CLAUDE_PLUGIN_ROOT}/assets/analytics/baselines.json" 2>/dev/null
```

Compare the current week's `aggregateMetrics` against the baseline means and standard deviations. Flag any metric that is:
- More than 2 standard deviations above the mean → 🟢 Positive alert
- More than 2 standard deviations below the mean → 🔴 Critical alert
- Between 1 and 2 standard deviations below the mean → 🟡 Warning alert
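A sketch of the comparison itself, with made-up numbers; the key names in `baselines.json` may differ, so treat this as the shape of the check rather than its exact fields:

```bash
# Hypothetical values: current-week avg impressions vs. baseline mean/stddev.
current=5200
mean=4000
stddev=800

z=$(awk -v c="$current" -v m="$mean" -v s="$stddev" 'BEGIN { printf "%.2f", (c - m) / s }')

# awk handles the float comparisons; POSIX [ ] only compares integers.
if awk -v z="$z" 'BEGIN { exit !(z >= 2) }'; then
  echo "🟢 Positive alert (z=$z)"
elif awk -v z="$z" 'BEGIN { exit !(z <= -2) }'; then
  echo "🔴 Critical alert (z=$z)"
elif awk -v z="$z" 'BEGIN { exit !(z <= -1) }'; then
  echo "🟡 Warning alert (z=$z)"
else
  echo "Within normal range (z=$z)"
fi
```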
Present alerts as:

```markdown
### Alerts & Recommendations

🔴 **Critical: Engagement rate declining**
Your engagement rate has dropped from 4.2% to 2.8% over the last 3 weeks.
→ **Action:** Review recent post hooks. Consider more provocative angles or questions.
→ **Reference:** Hook length should be <140 chars. Saves (10x weight) and expert comments (7-9x) are the highest-impact signals.

🟢 **Positive: New impression record**
Your post on [topic] achieved 12,500 impressions — a personal best!
→ **Action:** Analyze what made this post succeed. Consider a follow-up post.
→ **Reference:** First-hour velocity of 15+ engagements unlocks broader distribution.

🟡 **Warning: Format stagnation detected**
80%+ of your recent posts are text-only. PDF/Carousels get 3.4x reach multiplier.
→ **Action:** Try a carousel or multi-image post this week for format diversification.
```
## Step 6: Present Formatted Report
Format the data into a readable report using this template:

```markdown
# LinkedIn Performance Report
## Week {week} ({dateRange})

### 📊 Key Metrics

| Metric | Total | Average per Post | vs. Last Week |
|--------|-------|------------------|---------------|
| Impressions | {total} | {avg} | {trend} |
| Reactions | {total} | {avg} | {trend} |
| Comments | {total} | {avg} | {trend} |
| Shares | {total} | {avg} | {trend} |
| Engagement Rate | - | {rate}% | {trend} |

**Posts published:** {postCount}
**Engagement rate:** {totalEngagements / totalImpressions * 100}%

### 🏆 Top Performers

**Most Impressions:**
"{post.content}" - {impressions} impressions ({date})

**Most Engaged:**
"{post.content}" - {engagementRate}% engagement ({date})

**Most Shared:**
"{post.content}" - {shares} shares ({date})

### 🚨 Alerts & Insights
{List any anomalies, viral posts, or underperformers}

### 📈 Trend Analysis (Last 4 Weeks)
{Trend summary from trends CLI output}
- Impressions: {trend direction} ({percentage change})
- Engagement: {trend direction} ({percentage change})
- Publishing frequency: {pattern}

### 💡 Recommendations
{Generate 2-3 actionable recommendations based on the data}
```
Example recommendations:
- "Your posts on [topic] are performing 40% above average. Consider posting more on this topic."
- "Engagement drops significantly on [day]. Try shifting your posting schedule."
- "Posts with [format] are getting 2x more shares. Experiment with this format more."
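A worked instance of the engagement-rate formula used in the template. The numbers are made up, and "engagements = reactions + comments + shares" is an assumption about how `totalEngagements` is composed:

```bash
# Hypothetical weekly totals:
engagements=126     # assumed: reactions + comments + shares
impressions=4200

rate=$(awk -v e="$engagements" -v i="$impressions" 'BEGIN { printf "%.1f", e / i * 100 }')
echo "Engagement rate: ${rate}%"    # Engagement rate: 3.0%
```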
## Step 7: Generate Actionable Recommendations
Based on the report data, provide 2-3 specific, actionable recommendations:
Framework for recommendations:

1. **What's working?** Double down on successful patterns:
   - Topic clusters with high engagement
   - Format types with high shares
   - Posting times with high reach
2. **What's not working?** Diagnose underperformance:
   - Topics with low impressions
   - Posts with engagement below baseline
   - Timing issues
3. **What to test next?** Experiments to run:
   - New formats for top topics
   - Different posting times
   - Content angles that worked elsewhere
Example recommendations:

```markdown
💡 Recommendations for Next Week:

1. **Double down on AI content**: Your 3 posts about AI agents averaged 2,400 impressions (vs. 1,200 baseline). Plan 2 more AI-focused posts this week.
2. **Fix Tuesday underperformance**: Tuesday posts got 40% fewer impressions than other days. Try posting at 8am instead of 12pm.
3. **Test carousel format**: Your one carousel got 3x more shares than text posts. Create a carousel for your top-performing topic this week.
```
## Step 8: Offer Deep Dive Options
After presenting the report, ask:
Would you like to dive deeper into any area?
1. Analyze specific posts in detail
2. Compare this week to previous weeks
3. Run trend analysis for other metrics (comments, shares)
4. Export report as markdown file
5. Done - I have what I need
Use AskUserQuestion for selection.
### Deep Dive: Trend Analysis for Other Metrics
If the user wants more trend analysis:

```bash
# Analyze comments trend
ANALYTICS_ROOT="${CLAUDE_PLUGIN_ROOT}/assets/analytics" node --import tsx "${CLAUDE_PLUGIN_ROOT}/scripts/analytics/src/cli.ts" trends --period month --metric comments

# Analyze shares trend
ANALYTICS_ROOT="${CLAUDE_PLUGIN_ROOT}/assets/analytics" node --import tsx "${CLAUDE_PLUGIN_ROOT}/scripts/analytics/src/cli.ts" trends --period month --metric shares

# Analyze engagement rate trend
ANALYTICS_ROOT="${CLAUDE_PLUGIN_ROOT}/assets/analytics" node --import tsx "${CLAUDE_PLUGIN_ROOT}/scripts/analytics/src/cli.ts" trends --period month --metric engagementRate
```
Present additional insights from these trends.
### Deep Dive: Post Analysis
If the user wants to analyze specific posts, read the weekly post data directly:

```bash
cat "${CLAUDE_PLUGIN_ROOT}/assets/analytics/posts/<YYYY-WXX>.json" | jq '.posts[] | select(.title | contains("search term"))'
```
Show detailed metrics for that post and suggest what made it perform well/poorly.
## Error Handling
If report generation fails:

1. **Week not found**: No data imported for that week
   - List available weeks: `ls ${CLAUDE_PLUGIN_ROOT}/assets/analytics/posts/`
   - Suggest importing data for that week
2. **No posts in week**: Week file exists but is empty
   - Confirm the user didn't post that week
   - Suggest checking the import data
3. **CLI error**: Technical failure
   - Show the error message
   - Check file permissions
   - Verify Node.js and tsx are available
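A quick environment check for the last case. The helper below is generic PATH lookup, nothing plugin-specific; note that `tsx` is loaded via `node --import tsx`, so what matters is that it resolves as a module from the analytics scripts directory, not that a global `tsx` binary exists:

```bash
# Report whether a required tool is on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: MISSING"
  fi
}

check_tool node
```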
## State Integration
After generating the report, optionally update the user's posting state:
Read `~/.claude/linkedin-thought-leadership.local.md` and suggest:
- If week had 0 posts: "Streak broken - consider posting this week to restart"
- If week hit goal: "Goal achieved! Maintaining consistency."
- If week exceeded goal: "Exceeding goal - strong momentum!"
## Reference Files
Reports use data from:
- `assets/analytics/posts/YYYY-WXX.json` - Raw weekly post data
- `assets/analytics/weekly-reports/YYYY-WXX.json` - Computed report
- `assets/analytics/baselines.json` - Statistical baselines for comparison
- `assets/analytics/metadata.json` - Import history and tracking
## Step 8b: Export Options
If the user chooses option 4 ("Export report as markdown file") from the deep dive menu:
Generate and save a clean markdown report:

1. Read the JSON report data:

   ```bash
   cat "${CLAUDE_PLUGIN_ROOT}/assets/analytics/weekly-reports/<YYYY-WXX>.json"
   ```

2. Format the data using the template below and write it to:
   `${CLAUDE_PLUGIN_ROOT}/assets/analytics/weekly-reports/YYYY-WXX-report.md`
```markdown
# LinkedIn Performance Report — Week YYYY-WXX

**Generated:** YYYY-MM-DD
**Posts analyzed:** X

## Key Metrics

| Metric | Total | Avg per Post | vs. Last Week |
|--------|-------|--------------|---------------|
| Impressions | X | X | ↑/↓/→ X% |
| Reactions | X | X | ↑/↓/→ X% |
| Comments | X | X | ↑/↓/→ X% |
| Shares | X | X | ↑/↓/→ X% |
| Engagement Rate | — | X% | ↑/↓/→ X% |

## Trend Analysis (Last 4 Weeks)

| Metric | W-4 | W-3 | W-2 | W-1 (Current) | Trend |
|--------|-----|-----|-----|----------------|-------|
| Avg Impressions | X | X | X | X | ↑/↓/→ |
| Avg Engagement Rate | X% | X% | X% | X% | ↑/↓/→ |
| Posts Published | X | X | X | X | ↑/↓/→ |

## Alerts

[List all alerts from Step 5c with severity icons and actions]

## Top Performers

### Most Impressions
"[post hook text]" — X impressions (YYYY-MM-DD)

### Most Engaged
"[post hook text]" — X% engagement rate (YYYY-MM-DD)

### Most Shared
"[post hook text]" — X shares (YYYY-MM-DD)

## Recommendations

1. [Actionable recommendation based on data]
2. [Actionable recommendation based on data]
3. [Actionable recommendation based on data]

---
*Generated by linkedin-thought-leadership plugin*
```
Important notes:
- The `assets/analytics/` directory is gitignored — exported reports contain personal analytics data and should not be committed
- Use the `-report.md` suffix to distinguish from the JSON data files (e.g., `2026-W05-report.md` vs `2026-W05.json`)
- Include all sections (metrics, trends, alerts, top performers, and recommendations) for a complete standalone document
After saving, confirm to the user:
Report exported to: assets/analytics/weekly-reports/YYYY-WXX-report.md
Note: This file is in your gitignored analytics directory — it won't be committed to the repository.