# Integration Test Guide: LinkedIn Thought Leadership Plugin

Manual integration testing scenarios for commands, agents, and hooks in the plugin.

## Prerequisites

Before testing, ensure:

- [ ] `~/.claude/linkedin-thought-leadership.local.md` exists (create from `config/state-file.template.md`)
- [ ] Voice samples exist in `assets/voice-samples/authentic-voice-samples.md`
- [ ] Quality scorecard exists at `assets/checklists/quality-scorecard.md`
- [ ] Plugin is installed: appears in Claude Code's skill/command list

## /linkedin:pipeline — End-to-End Tests

### Test 1: Full Pipeline — Idea to Post

**Goal:** Execute the complete 8-step pipeline from ideation to publish-ready post.

**Steps:**

1. Run `/linkedin:pipeline`
2. Verify Step 0 loads: state file read, status displayed (posts/week, streak)
3. Choose "Generate ideas for me" when prompted
4. Verify 3 topic suggestions appear, drawn from `thought-leadership-angles.md`
5. Select a topic → verify angle selection (2-3 options)
6. Choose format → verify draft follows structure (hook/context/insight/implication/CTA)
7. Verify optimization checks run:
   - Hook: 110-140 chars
   - Total: 1,200-1,800 chars
   - No external links in body
   - No corporate buzzwords
8. Verify scheduling recommendation mentions CET times
9. Verify 5x5x5 guidance is provided
10. Verify copy-paste-ready output with character count and hashtags
11. Verify first-hour monitoring plan is shown
12. Verify 48-hour check-in reminder appears
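
The optimization checks in step 7 can be sketched as a simple validator, useful for eyeballing what the quality gate should flag. This is a hypothetical helper, not the plugin's actual hook code, and the buzzword list is illustrative:

```python
import re

# Illustrative buzzword list; the plugin's real list may differ
BUZZWORDS = {"leverage", "synergy", "utilize", "game-changer"}

def check_post(hook: str, body: str) -> list[str]:
    """Return quality-gate violations for a draft (thresholds from step 7)."""
    issues = []
    if not 110 <= len(hook) <= 140:
        issues.append(f"Hook is {len(hook)} chars (expected 110-140)")
    total = len(hook) + len(body)
    if not 1200 <= total <= 1800:
        issues.append(f"Total is {total} chars (expected 1,200-1,800)")
    if re.search(r"https?://", body):
        issues.append("External link in body")
    issues += [f"Buzzword: '{w}'" for w in sorted(BUZZWORDS) if w in body.lower()]
    return issues
```

A clean post returns an empty list; Test 11 below intentionally trips all three categories.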

**Expected outcome:** A complete, publish-ready post with all quality checks passed.

**Hooks that fire:**

- `SessionStart` → loads state
- `UserPromptSubmit` → injects context
- `PreToolUse (Write)` → quality gate + voice guardian (if draft is written to file)
- `PostToolUse (Write)` → alternative hooks + posting time suggestion
- `Stop` → state update + pre-publish reminders

### Test 2: Pipeline with Existing Topic

**Goal:** User provides their own topic, skipping ideation.

**Steps:**

1. Run `/linkedin:pipeline`
2. Choose "I have an idea already"
3. Provide topic: "Why AI agents will replace workflows in 2026"
4. Verify the topic is used directly (no override)
5. Verify angle suggestions are relevant to the provided topic
6. Complete the remaining steps

**Expected outcome:** Post is created on the user's topic, not a generated one.

### Test 3: Pipeline with State File Missing

**Goal:** Graceful handling when the state file doesn't exist.

**Steps:**

1. Temporarily rename `~/.claude/linkedin-thought-leadership.local.md`
2. Run `/linkedin:pipeline`
3. Verify: no crash, reasonable fallback (e.g., "No posting data found. Starting fresh.")
4. Complete the pipeline
5. Verify: state file is created after the pipeline completes

**Expected outcome:** Pipeline works without a state file and creates one at the end.

### Test 4: Pipeline — Draft Save Option

**Goal:** Verify "Save as draft for later" works.

**Steps:**

1. Run `/linkedin:pipeline`
2. Create a post
3. At the scheduling step, choose "Save as draft for later"
4. Verify: no posting reminders (5x5x5, first-hour) are shown for drafts
5. Verify: state file is NOT updated with a post date (it's a draft, not published)

**Expected outcome:** Draft is saved without publishing-related actions.

---

## /linkedin:batch — End-to-End Tests

### Test 5: Full Batch — 3 Posts from One Theme

**Goal:** Create 3 posts from a single theme with varying angles and formats.

**Steps:**

1. Run `/linkedin:batch`
2. Verify Step 0 loads: state file, check for existing weekly plan
3. Choose "One main theme"
4. Provide theme: "The future of AI in public sector"
5. Verify batch plan shows 3 posts with:
   - Different angles (not repetitive)
   - Mixed formats (not all the same)
   - Different target days
6. Approve the plan
7. Verify each post:
   - Follows structure (hook 110-140 chars, 1,200-1,800 total)
   - Has a unique angle
   - Passes the quick quality check
8. Verify posts are saved to `assets/drafts/week-[WXX]/`
9. Verify filenames follow the pattern `[day]-[topic-slug].md`
10. Verify YAML frontmatter in each file (planned_date, pillar, angle, format, status)
11. Verify summary shows content mix and pillar coverage
12. Approve all drafts
13. Verify posting schedule with recommended times

**Expected outcome:** 3 distinct posts saved in the correct directory with proper metadata.

### Test 6: Batch — Content Pillar Mode

**Goal:** Batch using an existing content pillar.

**Steps:**

1. Run `/linkedin:batch`
2. Choose "Content pillar"
3. Select from the user's defined pillars in the skill file
4. Verify posts are created around that pillar
5. Verify angle variety (not the same perspective repeated)

**Expected outcome:** All posts align with the chosen pillar but explore different angles.

### Test 7: Batch — Revision Flow

**Goal:** Verify post revision during batch creation.

**Steps:**

1. Run `/linkedin:batch` and create 3 posts
2. At the review step, choose "Revise a specific post"
3. Ask for post #2 to be revised (e.g., "Make the hook more provocative")
4. Verify: only post #2 is changed; the others remain intact
5. Verify: summary updates to reflect the revised post

**Expected outcome:** Individual post revision works without affecting other batch posts.

### Test 8: Batch — Drafts Directory Creation

**Goal:** Verify `assets/drafts/` directory is created when it doesn't exist.

**Steps:**

1. Ensure `assets/drafts/` does not exist
2. Run `/linkedin:batch` and complete the workflow
3. Verify: `assets/drafts/week-[WXX]/` directory is created
4. Verify: all posts are saved correctly

**Expected outcome:** Directory is created automatically, posts are saved.

---

## Cross-Command Integration Tests

### Test 9: Pipeline After Batch

**Goal:** Pipeline uses batch-created drafts.

**Steps:**

1. First run `/linkedin:batch` to create 3 drafts
2. Then run `/linkedin:pipeline`
3. At ideation, choose "Use a planned topic"
4. Verify: pipeline picks up a draft from the batch
5. Complete the pipeline with the batch draft
6. Verify: state file is updated after publishing

**Expected outcome:** Pipeline can consume batch-created drafts seamlessly.

### Test 10: Batch Respects Weekly State

**Goal:** Batch adjusts recommendations based on the current posting state.

**Steps:**

1. Set the state file to show 2 posts already published this week
2. Run `/linkedin:batch` with a goal of 3 posts/week
3. Verify: batch suggests creating only 1 post (3 - 2 = 1 remaining)
4. Or, if configurable, verify batch mentions current progress
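
The arithmetic in step 3, as a one-liner (hypothetical helper, shown only to pin down the expected behavior):

```python
def posts_remaining(goal_per_week: int, published_this_week: int) -> int:
    """Posts the batch should still suggest creating; never negative."""
    return max(goal_per_week - published_this_week, 0)
```

`posts_remaining(3, 2)` gives 1, matching step 3; overshooting the goal should yield 0, not a negative count.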

**Expected outcome:** Batch is aware of the weekly posting status.

---

## Hook Integration Tests

### Test 11: Quality Gate Fires on Post Draft

**Goal:** Verify the PreToolUse quality gate hook catches issues.

**Steps:**

1. During pipeline or batch, intentionally create a post with:
   - Hook over 140 chars
   - External link in body
   - Corporate buzzword ("leverage")
2. Verify: quality gate flags ALL issues
3. Verify: issues are described specifically (not as generic warnings)

**Expected outcome:** Quality gate catches all three violations with specific feedback.

### Test 12: Voice Guardian Detects AI Patterns

**Goal:** Verify the voice guardian hook catches AI-sounding content.

**Steps:**

1. During the pipeline, create a post that starts with "In today's rapidly evolving landscape..."
2. Verify: voice guardian flags the AI pattern
3. Verify: specific rewrite suggestions are provided
4. Verify: voice samples are referenced for comparison (if they exist)
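
A sketch of the kind of pattern matching the voice guardian might do. The patterns below are illustrative; the hook's real list lives in the plugin:

```python
import re

# Illustrative openers/phrases that read as AI-generated
AI_PATTERNS = [
    r"^in today's rapidly evolving",
    r"^in the ever-changing world of",
    r"\bdelve into\b",
]

def flag_ai_patterns(post: str) -> list[str]:
    """Return the patterns a draft matches (case-insensitive)."""
    text = post.lower()
    return [p for p in AI_PATTERNS if re.search(p, text)]
```

The opener in step 1 matches the first pattern; an empty list means the draft passed this check.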

**Expected outcome:** Voice guardian identifies AI patterns and suggests authentic alternatives.

### Test 13: Stop Hook Updates State

**Goal:** Verify the session-end state update works correctly.

**Steps:**

1. Run `/linkedin:pipeline` and create a post
2. Note the topic and hook
3. End the session (or let the Stop hook fire)
4. Read `~/.claude/linkedin-thought-leadership.local.md`
5. Verify:
   - `last_post_date` = today
   - `last_post_topic` = the topic used
   - `posts_this_week` incremented
   - `current_streak` updated correctly
   - Recent Posts section has a new entry
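
After step 5, the state file's frontmatter should resemble this (field names from the checklist above; values illustrative):

```yaml
---
last_post_date: 2026-01-14
last_post_topic: "Why AI agents will replace workflows in 2026"
posts_this_week: 2
current_streak: 4
---
```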

**Expected outcome:** State file accurately reflects the session's output.

### Test 14: PostToolUse Generates Alternative Hooks

**Goal:** Verify post-creation automation fires.

**Steps:**

1. During pipeline or batch, write a post draft
2. Verify: 3 alternative hooks are generated
3. Verify: each alternative shows its character count
4. Verify: an optimal posting time is suggested
5. Verify: the 5x5x5 reminder appears

**Expected outcome:** Post-creation automation provides actionable suggestions.

---

## Agent Tests

### Test 15: Post-Feedback Monitor — Basic Monitoring

**Command:** Trigger `post-feedback-monitor` agent

**Steps:**

1. Say "How is my latest post doing?"
2. Agent should load algorithm-signals-reference and engagement-frameworks
3. Agent should ask which post to monitor
4. Provide sample metrics: 500 impressions, 15 reactions, 3 comments, 1 repost
5. Agent should identify the current phase and provide benchmarks

**Expected:** Structured output with metrics snapshot, velocity score, anomaly detection, and recommended actions

**Validates:** Agent file loads correctly, context loading works, output format matches spec

### Test 16: Post-Feedback Monitor — Anomaly Detection

**Command:** Trigger `post-feedback-monitor` agent

**Steps:**

1. Say "My post has 2000 impressions but only 5 reactions"
2. Agent should detect the "Impression-Engagement Gap" anomaly
3. Agent should provide specific intervention recommendations
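
The anomaly in step 2 boils down to high reach with low interaction. A rough detector follows; the 1% rate threshold and 1,000-impression floor are illustrative assumptions, not values from the agent spec:

```python
def has_engagement_gap(impressions: int, reactions: int,
                       comments: int, reposts: int,
                       min_impressions: int = 1000,
                       min_rate: float = 0.01) -> bool:
    """True when a post gets reach but little interaction."""
    rate = (reactions + comments + reposts) / max(impressions, 1)
    return impressions >= min_impressions and rate < min_rate
```

The step 1 numbers (2,000 impressions, 5 reactions) give a 0.25% rate and trip the detector.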

**Expected:** Anomaly correctly identified with cause analysis and action plan

**Validates:** Anomaly detection framework, intervention playbook

### Test 17: Post-Feedback Monitor — Golden Hour

**Command:** Trigger `post-feedback-monitor` agent

**Steps:**

1. Say "I just posted 30 minutes ago, what should I do?"
2. Agent should activate the Golden Hour protocol
3. Agent should provide time-sensitive action items
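
Phase detection for step 2 can be sketched as simple time buckets. The boundaries and phase names below are illustrative; the real phases live in algorithm-signals-reference:

```python
def post_phase(minutes_since_post: int) -> str:
    """Map post age to a monitoring phase."""
    if minutes_since_post <= 60:
        return "golden hour"
    if minutes_since_post <= 8 * 60:
        return "extended distribution"
    return "long tail"
```

Step 1's "30 minutes ago" lands squarely in the golden hour, so the time-sensitive protocol should activate.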

**Expected:** Golden Hour specific advice (reply within 5 min, DM connections, first comment strategy)

**Validates:** Phase detection, time-sensitive interventions

---

## Command Tests

### Test 18: A/B Test — Design New Test

**Command:** `/linkedin:ab-test`

**Steps:**

1. Run the command
2. Select "Design a new A/B test"
3. Choose "Hook/Opening line" as the variable
4. Follow the guided workflow

**Expected:** Complete test plan with hypothesis, variants, execution schedule, success criteria

**Validates:** Command loads, AskUserQuestion flow works, reference file loads, test plan file created

### Test 19: A/B Test — Analyze Results

**Command:** `/linkedin:ab-test`

**Steps:**

1. First create a test plan (Test 18) and manually create a test file with sample data
2. Run `/linkedin:ab-test` and select "Analyze test results"
3. Select the test to analyze

**Expected:** Results comparison table, significance assessment (20% rule), verdict, recommended next steps
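
One plausible reading of the 20% rule, assuming it means a 20% or greater relative lift on the chosen metric (confirm against the reference doc before relying on this):

```python
def ab_verdict(metric_a: float, metric_b: float) -> str:
    """Compare two variants; a win requires a >= 20% relative lift."""
    if metric_b >= metric_a * 1.2:
        return "B wins"
    if metric_a >= metric_b * 1.2:
        return "A wins"
    return "inconclusive"
```

Under this reading, 100 vs. 110 engagement is inconclusive, while 100 vs. 130 is a clear win for B.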

**Validates:** File scanning, data analysis, result formatting

### Test 20: Enhanced Report — Trends & Alerts

**Command:** `/linkedin:report`

**Steps:**

1. Ensure at least 4 weeks of imported data exist
2. Run `/linkedin:report` for the current week
3. Verify the trend analysis section appears after the main report
4. Verify the alert detection section appears

**Expected:** 4-week trend table, trend interpretation, performance alerts, algorithm alerts

**Validates:** Trend CLI integration, alert thresholds, formatting

### Test 21: Enhanced Import — Anomaly Detection

**Command:** `/linkedin:import`

**Steps:**

1. Ensure baseline data exists (previous imports)
2. Import a new CSV export
3. After import, verify anomaly detection runs
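
A minimal version of the breakout check this test exercises. The 2x factor is an assumption for illustration, not the command's documented threshold:

```python
def is_breakout(impressions: float, baseline_avg: float, factor: float = 2.0) -> bool:
    """Flag a post whose reach clearly exceeds the running baseline."""
    return baseline_avg > 0 and impressions >= factor * baseline_avg
```

A zero baseline (first import) should never flag anything, which is why the guard on `baseline_avg` matters.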

**Expected:** Breakout posts flagged, patterns detected, intelligent next steps offered

**Validates:** Anomaly detection rules, baseline comparison, conditional suggestions

### Test 22: Enhanced Report — Markdown Export

**Command:** `/linkedin:report`

**Steps:**

1. Run `/linkedin:report` for any week with data
2. Select "Export as Markdown" from the options
3. Verify the file is saved to `assets/analytics/weekly-reports/YYYY-WXX-report.md`
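
Assuming `YYYY-WXX` means ISO year and zero-padded ISO week, the expected path in step 3 can be computed like this (hypothetical helper):

```python
from datetime import date

def report_path(d: date) -> str:
    """Weekly report filename under assets/analytics/weekly-reports/."""
    iso = d.isocalendar()
    return f"assets/analytics/weekly-reports/{iso.year}-W{iso.week:02d}-report.md"
```

Note that the ISO year can differ from the calendar year in the first days of January, which is worth checking during this test.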

**Expected:** Clean markdown file with all sections (metrics, trends, alerts, top performers, recommendations)

**Validates:** Export template, file creation, gitignore compliance

---

## Router & Collaboration Tests

### Test 23: Router — New Commands Accessible

**Command:** `/linkedin`

**Steps:**

1. Run `/linkedin`
2. Verify A/B test appears in the command menu
3. Verify post-feedback-monitor appears in agent suggestions
4. Say "I want to A/B test my hooks" — should route to `/linkedin:ab-test`
5. Say "How is my post doing?" — should route to `post-feedback-monitor`

**Expected:** All new commands and agents are accessible through the router

**Validates:** Router updates, intent matching

### Test 24: Collaboration — Multi-Author Workflow

**Command:** `/linkedin:collab`

**Steps:**

1. Run `/linkedin:collab` and complete the readiness check
2. Navigate to the multi-author content coordination section
3. Verify co-creation workflow templates are available
4. Verify the collaboration tracking section exists

**Expected:** Multi-author workflow with 5 phases, shared draft guidelines, collaboration pipeline board

**Validates:** New collab command sections (Step 7 and Step 8)

---

## Known Limitations

1. **No automated testing:** These commands are conversational — they require human interaction at AskUserQuestion steps. Testing must be manual.

2. **State file format:** The state file uses YAML frontmatter. Any malformed YAML will cause parsing issues. Always validate the format after manual edits.

3. **Draft directory:** `assets/drafts/` and `assets/plans/` are created at runtime. They don't exist in the base plugin directory and won't appear until first use.

4. **Hook ordering:** PreToolUse has two hooks (quality gate + voice guardian). Both fire on every Write/Edit of content files. If one blocks, the user must fix the issue before proceeding.

5. **Content vs. config detection:** All prompt-based hooks include logic to skip non-content files. This relies on heuristic pattern matching (checking for `.local.md`, `.json`, script extensions, etc.). Edge cases may exist.

6. **Agent testing:** Agents (Tests 15-17) are triggered conversationally, not via slash commands. They require natural-language input and cannot be invoked deterministically. Test by using the trigger phrases documented in the agent frontmatter.

7. **Structure validation:** Use `scripts/test-runner.sh` to validate file existence, frontmatter format, and router completeness. This is automated and complements the manual integration tests above.
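
For the malformed-YAML risk in limitation 2, a naive splitter catches the most common failure (a missing or unterminated frontmatter block) before YAML parsing is even attempted. This is a hypothetical helper, not part of the plugin:

```python
def split_frontmatter(text: str) -> tuple[str, str]:
    """Split a state/draft file into (frontmatter, body); raise on malformed files."""
    if not text.startswith("---\n"):
        raise ValueError("missing frontmatter opening delimiter")
    try:
        _, frontmatter, body = text.split("---\n", 2)
    except ValueError:
        raise ValueError("missing frontmatter closing delimiter")
    return frontmatter, body
```

Run it over a hand-edited state file before testing; a raised error means the edit broke the frontmatter fencing.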

## Test Results Log

Record results here when tests are executed:

| Test | Date | Result | Notes |
|------|------|--------|-------|
| 1    |      |        |       |
| 2    |      |        |       |
| 3    |      |        |       |
| 4    |      |        |       |
| 5    |      |        |       |
| 6    |      |        |       |
| 7    |      |        |       |
| 8    |      |        |       |
| 9    |      |        |       |
| 10   |      |        |       |
| 11   |      |        |       |
| 12   |      |        |       |
| 13   |      |        |       |
| 14   |      |        |       |
| 15   |      |        |       |
| 16   |      |        |       |
| 17   |      |        |       |
| 18   |      |        |       |
| 19   |      |        |       |
| 20   |      |        |       |
| 21   |      |        |       |
| 22   |      |        |       |
| 23   |      |        |       |
| 24   |      |        |       |