feat(ms-ai-architect): v1.12.0 manual KB refresh — remove launchd/cron architecture

A ToS review concluded that autonomous cron execution is needlessly complex
for a solo fork-and-own plugin. The apply phase requires LLM reasoning
anyway, so a manual trigger from an active Claude Code session is simpler
and keeps the plugin clearly within Anthropic Consumer Terms section 3
(automated access only via API key or where explicitly permitted — the
Claude Code CLI is exempted as an official tool).

Added:
- commands/kb-update.md — new /architect:kb-update slash command that drives
  poll, change report, microsoft_docs_fetch update and commit from the session.
  Arguments: --skip-discover, --priorities, --dry-run, --single-commit
- Catalog entry in the playground HTML for kb-update (category: tool, 4 input fields)

Removed (Waves 3-5 reverted, ~1500 lines + 7 test modules):
- scripts/install-kb-cron.mjs (cross-OS scheduler installer)
- scripts/kb-update/weekly-kb-cron.mjs (cron orchestrator with pre-flight, lock,
  backup, claude -p subprocess, post-run verify, rollback)
- scripts/kb-update/templates/ (4 scheduler templates: launchd plist, systemd
  service+timer, Windows ps1 + README)
- scripts/kb-update/lib/auth-mode.mjs (cron-specific auth validation)
- scripts/kb-update/lib/lock-file.mjs (PID+mtime stale detection)
- scripts/kb-update/lib/cost-estimat.mjs (pre-flight budget cap)
- 7 test modules under tests/kb-update/ for the deleted code
- tests/test-kb-update.sh (Bash 3.2 shim, replaced by direct node --test)

Kept (the utility layer is still usable):
- run-weekly-update.mjs, report-changes.mjs, build-registry.mjs,
  discover-new-urls.mjs (the KB change-detection pipeline)
- lib/atomic-write, lib/backup, lib/cross-platform-paths, lib/log-rotate
- 4 test modules (42/42 tests PASS)

Changed:
- hooks/scripts/session-start-context.mjs: remove kb-update-status.json monitoring
- tests/run-e2e.sh --kb-update calls node --test directly instead of the shim
- README.md, CLAUDE.md: KB maintenance sections rewritten for the manual model
- plugin.json: 1.11.0 -> 1.12.0
- Root README + CLAUDE.md: ms-ai-architect version bumped

Scheduling is deliberately out of scope and left to the user — forks that
want periodic notification can set up their own cron / launchd /
GitHub Actions job that runs the report phase and reminds them to run
/architect:kb-update in a CC session.

Verification:
- bash tests/validate-plugin.sh: 219 PASS, 0 FAIL
- bash tests/run-e2e.sh --kb-update: 42/42 inner + suite PASS
- bash tests/run-e2e.sh --playground: 271/271 PASS (statisk + parsers)

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
This commit is contained in:
Kjell Tore Guttormsen 2026-05-05 12:03:45 +02:00
commit a7a334c8d1
29 changed files with 238 additions and 2708 deletions


@@ -11,7 +11,7 @@ plugins/
graceful-handoff/ v2.1.0 — Auto-trigger handoff via Stop hook (skill + JSON pipeline + 4-step model-aware context resolution)
linkedin-thought-leadership/ v1.2.0 — LinkedIn content pipeline + analytics
llm-security/ v6.0.0 — Security scanning, auditing, threat modeling
ms-ai-architect/ v1.8.0 — Microsoft AI architecture (Cosmo Skyberg persona)
ms-ai-architect/ v1.12.0 — Microsoft AI architecture (Cosmo Skyberg persona) + manual KB-refresh slash command
okr/ v1.0.0 — OKR guidance for Norwegian public sector
ultraplan-local/ v3.4.0 — Brief, research, plan, execute, review, continue (six-command universal pipeline + multi-session resumption + --gates autonomy chain)


@@ -158,7 +158,7 @@ Key command: `/graceful-handoff [topic-slug] [--no-commit] [--no-push] [--dry-ru
---
### [MS AI Architect — Azure AI and Microsoft Foundry](plugins/ms-ai-architect/) `v1.11.0` `🇳🇴 Norwegian`
### [MS AI Architect — Azure AI and Microsoft Foundry](plugins/ms-ai-architect/) `v1.12.0` `🇳🇴 Norwegian`
Microsoft AI solution architecture guidance for Norwegian public sector and enterprise.
@@ -167,11 +167,11 @@ Meet Cosmo Skyberg — a structured architect persona who understands the proble
- **Structured advisory** — 7-phase methodology from business need to architecture recommendation and optional diagram
- **Regulatory assessments** — ROS analysis (NS 5814), DPIA/PVK, security scoring (6×5), EU AI Act classification, cost estimation in NOK (P10/P50/P90)
- **Norwegian public sector** — Digdir architecture principles, Utredningsinstruksen, NSM, Schrems II data residency, EU AI Act compliance workflow
- **Automated freshness** — sitemap-based change detection polls Microsoft Learn weekly, flags which reference files need updating based on source page changes, and discovers new relevant pages
- **Manual KB-refresh** — `/architect:kb-update` slash command drives sitemap-based change detection + new-URL discovery + per-file `microsoft_docs_fetch` updates + commit, run from an active Claude Code session. Scheduling is intentionally out of scope and left to the user (cron / launchd / GitHub Actions etc. as desired)
Key commands: `/architect`, `/architect:ros`, `/architect:security`, `/architect:dpia`, `/architect:utredning`, `/architect:cost`
12 specialized agents · 24 commands · 5 skills (387 reference docs) · 2 hooks · sitemap-based KB monitoring
12 specialized agents · 25 commands · 5 skills (387 reference docs) · 2 hooks · manual sitemap-driven KB refresh
**One-click demo (v1.11.0, 2026-05-04):** The "Last inn demo-data" button on onboarding bootstraps a ready-made "Acme Kommune" with the demo project "Acme: Kunde-chatbot" and all 17 report types pre-imported as `raw_markdown` (consistent names across all fixtures). Visualization rehydrates automatically on project-surface mount. 24 retina screenshots committed under `playground/screenshots/v1.11.0/` (12 surfaces × 2 themes), so forkers can see the plugin without running anything. Standalone Playwright runner under `tests/screenshot/` (own `package.json`).


@@ -1,6 +1,6 @@
{
"name": "ms-ai-architect",
"version": "1.11.0",
"version": "1.12.0",
"description": "Microsoft AI Solution Architect - structured architecture guidance for the full Microsoft AI stack",
"author": {
"name": "Kjell Tore Guttormsen"


@@ -5,6 +5,41 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.12.0] - 2026-05-05
### Added — Manual KB-refresh workflow
- **`commands/kb-update.md`** — new `/architect:kb-update` slash command that drives the whole KB update flow from an active Claude Code session: `run-weekly-update.mjs --force --discover` → `report-changes.mjs` → per-file `microsoft_docs_fetch` → `Edit`-based updates → git commit. Arguments: `--skip-discover`, `--priorities <list>`, `--dry-run`, `--single-commit`. Default priority: `critical,high`. Scheduling is explicitly out of scope and left to the user.
### Removed — launchd/cron architecture (Waves 3-5 reverted)
After a ToS review (Anthropic Consumer Terms § 3 — automated access only via API key or where explicitly permitted), the autonomous cron architecture was judged needlessly complex for a solo fork-and-own plugin. The apply phase requires LLM reasoning anyway, so a manual trigger from an active Claude Code session is both simpler and keeps the plugin clearly within the ToS.
- `scripts/install-kb-cron.mjs` (~400 lines) — cross-OS cron installer (launchd/systemd/Windows Task Scheduler)
- `scripts/kb-update/weekly-kb-cron.mjs` (~600 lines) — cron orchestrator with pre-flight, lock, backup, claude -p subprocess, post-run verify, rollback
- `scripts/kb-update/templates/` — 4 scheduler templates (`com.fromaitochitta.ms-ai-architect.kb-update.plist`, `ms-ai-architect-kb-update.{service,timer,ps1}`) + README
- `scripts/kb-update/lib/auth-mode.mjs` (~100 lines) — `detectAuthMode` + `validateAuthForCron` (cron-specific validation only)
- `scripts/kb-update/lib/lock-file.mjs` (~120 lines) — PID+mtime stale detection (cron collision guard only)
- `scripts/kb-update/lib/cost-estimat.mjs` (~80 lines) — pre-flight budget-cap logic (api-key cron runs only)
- `tests/kb-update/test-{auth-mode,lock-file,cost-estimat,install-cron,session-start-status,template-generation,weekly-kb-cron-flags}.test.mjs` — 7 test modules for the deleted code (~68 test cases)
- `tests/test-kb-update.sh` — Wave 5 Bash 3.2 shim, replaced by direct `node --test` calls in `run-e2e.sh`
- `hooks/scripts/session-start-context.mjs`: `kb-update-status.json` monitoring (Wave 3 Step 10) + unused `getCacheDir` import removed
Total: ~1500+ lines of code + 7 test modules removed. The kept utilities (`atomic-write`, `backup`, `cross-platform-paths`, `log-rotate`) are still used by the `run-weekly-update` script and can be reused in future scripts.
### Changed
- `tests/run-e2e.sh --kb-update` now calls `node --test tests/kb-update/*.test.mjs` directly (4 test modules, 42 tests) instead of the `bash tests/test-kb-update.sh` shim
- `README.md` — the "Knowledge Base Maintenance" section rewritten for the manual model, scheduling disclaimer added
- `CLAUDE.md` — KB freshness section updated, `/architect:kb-update` added to the command table
### Notes on 1.12.0
- ToS assessment: running from an active Claude Code session falls within "Claude Code CLI is exempted from the prohibition on automated access" per the [Anthropic auth docs](https://code.claude.com/docs/en/authentication) and Consumer Terms § 3. The apply phase cannot be automated within the plugin's scope — any scheduling is the forker's own responsibility
- Usage: one manual run (default `--priorities critical,high`) fetches ~80 Microsoft Learn pages via `microsoft_docs_fetch` and updates 9-53 files. Quota usage depends on the subscription's rate limits — typically within a daily Pro/Max allowance
- 42/42 KB-update utility tests PASS. 271/271 playground tests unchanged
- `data/change-report.json` stays on disk for diagnostics after each run
## [1.11.0] - 2026-05-04
### Added — Design-system 100%-adoption + visual upgrade


@@ -43,6 +43,7 @@ Offers structured architecture guidance for the Microsoft AI stack:
| `/architect:frimpact` | FRIA (Art. 27) — mandatory for the public sector |
| `/architect:conformity` | Conformity assessment (Art. 43) — checklist + declaration |
| `/architect:onboard` | Onboard the plugin with organization-specific context |
| `/architect:kb-update` | Manual KB refresh — polls sitemaps, updates changed files via `microsoft_docs_fetch`, commits |
## Agents
@@ -121,25 +122,39 @@ See `references/architecture/recommended-mcp-servers.md` for details.
bash tests/validate-plugin.sh
```
#### KB freshness (sitemap-based)
#### KB freshness (sitemap-based, manual operation)
**The apply phase runs via the slash command** (requires an active Claude Code session, keeping us within Anthropic Consumer Terms § 3):
```text
/architect:kb-update # default: critical + high
/architect:kb-update --priorities critical # critical only
/architect:kb-update --skip-discover # skip new-URL discovery
/architect:kb-update --dry-run # report without apply
```
**The change-report phase can run as a plain Node script (no LLM cost):**
```bash
# Weekly update: poll sitemaps → change report
# Poll sitemaps → change report (no file changes)
node scripts/kb-update/run-weekly-update.mjs --force
# With discovery of new relevant pages
node scripts/kb-update/run-weekly-update.mjs --force --discover
# Change report only (after polling)
# Re-display the report after polling
node scripts/kb-update/report-changes.mjs
# Build/update the URL registry from reference files
node scripts/kb-update/build-registry.mjs [--merge]
```
The system polls Microsoft Learn sitemaps weekly, compares `<lastmod>` against the files' `Last updated:` headers, and generates a prioritized change report. The session-start hook triggers a background poll automatically if >7 days since the last one.
The system compares Microsoft Learn sitemap `<lastmod>` against the files' `Last updated:` headers and generates a prioritized change report (critical/high/medium/low).
**Match rate:** ~69% of the 1342 URLs match against sitemaps. ~31% (mostly `azure/ai-foundry/openai/` paths) are absent from the sitemaps due to Microsoft's URL restructuring.
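The staleness test behind the report is a plain date comparison; a minimal sketch, assuming ISO `YYYY-MM-DD` dates on both sides (the header-extraction line is a hypothetical layout, not the pipeline's actual parser):

```shell
#!/usr/bin/env bash
# Sketch: a file is stale when the sitemap <lastmod> is newer than the
# date in its "Last updated:" header. ISO dates compare lexicographically.
set -euo pipefail

is_stale() {
  local lastmod="$1"     # e.g. "2026-04-28" from the sitemap
  local local_date="$2"  # e.g. "2026-03-01" from the local header
  [[ "$lastmod" > "$local_date" ]]
}

# Hypothetical header extraction from a reference file:
#   local_date=$(grep -m1 '^Last updated:' file.md | awk '{print $3}')

if is_stale "2026-04-28" "2026-03-01"; then
  echo "stale — include in change report"
fi
```

Lexicographic comparison is safe here precisely because the dates are zero-padded ISO strings.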
**Scheduling:** The plugin schedules nothing. Users who want periodic notification can set up their own cron / launchd / systemd / GitHub Actions job that runs `node scripts/kb-update/run-weekly-update.mjs --force --discover` (the report phase, not apply). The apply phase is deliberately manual — it requires LLM reasoning over the diff and runs from an open Claude Code session.
Legacy (deprecated):
```bash
bash scripts/kb-staleness-check.sh # mtime-based, unreliable after git clone
@@ -177,7 +192,7 @@ claude --plugin ./plugins/ms-ai-architect
Interactive decision builder + report viewer for Microsoft AI decisions. Replaces the v2 5-step pipeline with a multi-surface app that persists state and visualizes imported reports inline. Spec: the v3 architecture is documented under `.claude/projects/2026-05-03-playground-v3-architecture/`. v1.10.0 extensions are documented under `.claude/projects/2026-05-03-ms-ai-architect-v1-10-playground/`. v1.11.0 delivers design-system 100% adoption (PARALLEL-CSS migration to the DS convention, inline `<style>` trimmed 37%, severity-coded card borders, app-header restructure, `.stack-lg` body spacing, AI Act pyramid width fix).
- **File:** `playground/ms-ai-architect-playground.html` (~3870+ lines, single-file v3 architecture)
- **4 surfaces:** Onboarding (18 shared fields — 4 structured / 14 free-text after v1.10.0) → Home (project list + 3 entry tracks) → Catalog (24 commands grouped into 5 expansion groups with search) → Project (per-project tabs, command-form prefill from shared state, paste-back import with report visualization)
- **4 surfaces:** Onboarding (18 shared fields — 4 structured / 14 free-text after v1.10.0) → Home (project list + 3 entry tracks) → Catalog (25 commands grouped into 5 expansion groups with search) → Project (per-project tabs, command-form prefill from shared state, paste-back import with report visualization)
- **Persistence:** IndexedDB primary with localStorage fallback. Schema-versioned (`STATE_KEY = 'ms-ai-architect-state-v1'`) with an eager `MIGRATIONS` pipeline. v1.10.0 introduces an idempotent `dataVersion v1→v2` migration that backfills `verdict`+`keyStats`.
- **17 report renderers (shared base skeleton):** All wrap output via `renderPageShell()` with eyebrow + h1 + optional verdict pill + optional key-stats grid + archetype-specific body. Parser → structure → HTML routed via a canonical archetype routing table.
- **Foundation-helpers:** `renderPageShell`, `renderVerdictPill`, `renderKeyStatsGrid`, `inferVerdict`, `inferKeyStats`, `KEY_STATS_CONFIG`.
@@ -189,7 +204,7 @@
| Test | Command | Coverage |
|------|----------|---------|
| Static structure | `bash tests/test-playground-v3.sh` | 201 PASS — vendored CSS, surfaces, 24 commands, 14 parsers, 17 renderers (shared base skeleton), design-system classes, action handlers, Tier 3 usage, onboarding field distribution |
| Static structure | `bash tests/test-playground-v3.sh` | 202 PASS — vendored CSS, surfaces, 25 commands, 14 parsers, 17 renderers (shared base skeleton), design-system classes, action handlers, Tier 3 usage, onboarding field distribution |
| Parser fixtures | `bash tests/test-playground-parsers.sh` | 70 PASS — 17 fixtures × parser routing |
| Migration | `bash tests/test-playground-migrations.sh` | 7 PASS — v1→v2 idempotent migration |
| Combined (E2E) | `bash tests/run-e2e.sh --playground` | static + parser suites |


@@ -6,7 +6,7 @@
*AI-generated: all code produced by Claude Code through dialog-driven development. [Full disclosure →](../../README.md#ai-generated-code-disclosure)*
![Version](https://img.shields.io/badge/version-1.11.0-blue)
![Version](https://img.shields.io/badge/version-1.12.0-blue)
![Platform](https://img.shields.io/badge/platform-Claude_Code_Plugin-purple)
![Docs](https://img.shields.io/badge/reference_docs-387-green)
![Agents](https://img.shields.io/badge/agents-12-orange)
@@ -231,7 +231,7 @@ Norwegian public sector governance (Digdir, DFØ), EU AI Act (Annex III checklis
BCDR planning, hybrid and edge deployment, sovereign cloud (Norway regions), network architecture, monitoring and observability.
> [!NOTE]
> All reference documents are generated and verified via the Microsoft Learn MCP server. A weekly cron job (`scripts/kb-update/weekly-kb-cron.mjs`) automatically polls Microsoft Learn sitemaps for changes, updates stale files via MCP research, and commits to the repository. Last full update: April 2026. Manual refresh: `/architect:generate-skills --update`.
> All reference documents are generated and verified via the Microsoft Learn MCP server. KB freshness is manual — run `/architect:kb-update` in a Claude Code session to poll Microsoft Learn sitemaps, compare against local `Last updated:` headers, update changed files, and commit. Last full update: April 2026.
---
@@ -581,27 +581,41 @@ bash tests/capture-fixture.sh <source-file> <section-header> <output-dir>
### Knowledge Base Maintenance
The 387 reference documents are actively maintained by the plugin author. Updated reference files are published as regular commits to the marketplace repository. If you installed via `claude plugin marketplace add`, updates are pulled automatically — no manual action needed.
The 387+ reference documents are actively maintained by the plugin author. Updated reference files are published as regular commits to the marketplace repository. If you installed via `claude plugin marketplace add`, updates are pulled automatically — no manual action needed.
The plugin includes a sitemap-based change detection system that tracks when Microsoft Learn source pages are updated, ensuring the author is always aware of which reference files need refreshing.
For forks (or if you simply want to refresh the KB yourself), the plugin ships with a sitemap-based change-detection pipeline plus a slash command that drives the apply phase via the active Claude Code session.
**Automated change detection (sitemap-based):**
**Manual run (recommended):**
```bash
# Weekly update: poll sitemaps → compare → generate change report
node scripts/kb-update/run-weekly-update.mjs --force
# Include discovery of new relevant pages
node scripts/kb-update/run-weekly-update.mjs --force --discover
# View change report only (after polling)
node scripts/kb-update/report-changes.mjs
```text
/architect:kb-update # default: critical + high priorities
/architect:kb-update --priorities critical # critical only
/architect:kb-update --skip-discover # skip new-URL discovery
/architect:kb-update --dry-run # report without updating files
```
The session-start hook automatically triggers a background poll if >7 days since the last check.
The command polls sitemaps, compares `<lastmod>` against local `Last updated:` headers, fetches fresh content via `microsoft_docs_fetch`, updates the relevant files, and commits the changes.
**The change report only (without the apply phase):**
```bash
# Poll + change report (no file changes)
node scripts/kb-update/run-weekly-update.mjs --force
# With discovery of new relevant pages
node scripts/kb-update/run-weekly-update.mjs --force --discover
# Re-display the report after polling
node scripts/kb-update/report-changes.mjs
# Build the URL registry from reference files
node scripts/kb-update/build-registry.mjs [--merge]
```
**Scheduling is the user's choice.** The plugin schedules nothing. If you want a periodic poll + notification, set up a cron job / launchd job / systemd timer / GitHub Actions workflow that runs `node scripts/kb-update/run-weekly-update.mjs --force --discover` and reminds you to open Claude Code and run `/architect:kb-update`. The apply phase (LLM-driven content update + commits) only runs from an active Claude Code session — this is deliberate and keeps us well within Anthropic Consumer Terms § 3 (automated access only via API key or where explicitly permitted).
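As a hedged sketch of such a wrapper (the poll command is the plugin's own; the notify helper, paths, and crontab line are illustrative assumptions):

```shell
#!/usr/bin/env bash
# Sketch: periodic report-phase poll + reminder. The apply phase never
# runs here — it stays in an interactive Claude Code session.
set -euo pipefail

# In a real setup, run the poll first, from the plugin directory:
#   node scripts/kb-update/run-weekly-update.mjs --force --discover

should_notify() {
  # Notify when a change report exists and is non-empty.
  [ -s "$1" ]
}

if should_notify "data/change-report.json"; then
  # Swap echo for mail, ntfy, a Slack webhook, etc.
  echo "KB change report ready — run /architect:kb-update in Claude Code"
fi

# Example crontab entry (Mondays 07:00, hypothetical wrapper path):
#   0 7 * * 1 "$HOME/bin/kb-poll.sh" >> "$HOME/.cache/kb-poll.log" 2>&1
```

The wrapper only decides whether to nudge you; it never edits or commits anything.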
**How it works:**
1. `build-registry.mjs` extracts 1342 unique `learn.microsoft.com` URLs from reference files
1. `build-registry.mjs` extracts 1342+ unique `learn.microsoft.com` URLs from reference files
2. `poll-sitemaps.mjs` fetches Microsoft Learn sitemaps and compares `<lastmod>` dates
3. `report-changes.mjs` generates a prioritized list of files needing update
4. `discover-new-urls.mjs` finds relevant new pages not yet covered
@@ -624,6 +638,7 @@ Category-to-skill routing is defined in `scripts/skill-gen/category-skill-map.js
| Version | Date | Highlights |
|---------|------|-----------|
| **1.12.0** | 2026-05-05 | Manual KB-refresh workflow — new `/architect:kb-update` slash command that drives poll → change report → `microsoft_docs_fetch` update → commit from an active Claude Code session. Scheduling is deliberately out of scope and left to the user. The previous launchd/cron architecture (Waves 3-5: install-kb-cron, weekly-kb-cron, plist/systemd/Windows templates, auth-mode validation, lock-file, cost cap, kb-update-status surfacing in the session-start hook) removed — ~1500 lines of code + 7 test modules gone. Keeps the plugin clearly within Anthropic Consumer Terms § 3 (automated access only via API key or where explicitly permitted). Kept utilities (atomic-write, backup, cross-platform-paths, log-rotate) + run-weekly-update + report-changes + build-registry + discover-new-urls remain fully functional for the change-detection phase. 42/42 KB-update tests PASS. |
| **1.11.0** | 2026-05-04 | Design-system 100%-adoption — 13 generic components hoisted to shared playground-design-system v0.3.0, all PARALLEL CSS names migrated to DS conventions, inline `<style>` block trimmed 37% (202 → 127 lines), severity-coded card borders on report cards, app-header restructure with breadcrumb, `.stack-lg` body spacing across home/project/catalog, AI Act pyramid width fix. Demo state renamed to "Acme Kommune" + "Acme: Kunde-chatbot" for cross-fixture consistency. 24 v1.11.0 screenshots regenerated. 278/278 playground E2E PASS. |
| **1.6.0** | 2026-02-19 | ROS analysis command and agent (`/architect:ros`) — 7-dimension risk assessment with NS 5814/ISO 31000 methodology, 49-threat AI threat library, sector-specific checklists (health, transport, finance, justice, education), MAESTRO multi-agent security model, 7 new KB reference documents (3,131 lines), E2E test suite (24 checks), summary-agent integration |
| **1.5.0** | 2025-02-13 | E2E regression tests (43 checks across 3 suites), auto onboarding detection at session start, systematic KB update process with staleness policy and `--json` output |


@@ -0,0 +1,119 @@
---
name: architect:kb-update
description: Manual knowledge-base update — polls Microsoft Learn sitemaps, compares against local `Last updated` headers, updates changed files and discovers new relevant URLs
argument-hint: "[optional: --skip-discover | --priorities critical,high,medium,low | --dry-run]"
allowed-tools: Bash, Read, Edit, Write, Glob, Grep, mcp__microsoft-learn__microsoft_docs_search, mcp__microsoft-learn__microsoft_docs_fetch, mcp__microsoft-learn__microsoft_code_sample_search
model: opus
---
# /architect:kb-update — Manual KB update
Keeps the Microsoft AI knowledge base in `skills/*/references/` fresh by comparing local reference files against Microsoft Learn sitemaps. **All execution is manual** — the plugin schedules nothing, and users who want periodic runs set that up themselves (cron, launchd, GitHub Actions, etc.).
## What the command does
1. **Poll sitemaps:** runs `node scripts/kb-update/run-weekly-update.mjs --force` to fetch a fresh `<lastmod>` for every Microsoft Learn URL in the registry
2. **Optional discovery:** with the default `--discover`, finds new relevant URLs in the sitemap that are not yet in the registry (`scripts/kb-update/discover-new-urls.mjs --limit 500`)
3. **Generate the change report:** `report-changes.mjs` produces `data/change-report.json` with per-file priority (critical/high/medium/low) based on the number of changed sources + the local file's age
4. **Present the report:** read the report, present a summary to the user, wait for `go`
5. **Update files:** for each file in the selected priority bucket (default: critical + high):
   - Fetch fresh content for all changed sources via `microsoft_docs_fetch`
   - Update the relevant sections of the local `.md` file
   - Bump the `Last updated:` header to today's date
6. **Commit:** one git commit per file in the `chore(ms-ai-architect): refresh KB <fil> [skip-docs]` format (or one combined commit if the user prefers)
## Arguments
| Flag | Effect |
|-------|--------|
| `--skip-discover` | Skip the discovery pass (faster; no new URLs discovered) |
| `--priorities <list>` | Comma-separated subset of `critical,high,medium,low`. Default: `critical,high` |
| `--dry-run` | Generate the report, but do not update files or commit |
| `--single-commit` | Collect all file changes into one commit instead of one per file |
## Instructions to the assistant
### 1. Pre-flight
- `pwd` — confirm you are in `plugins/ms-ai-architect/` (or delegate via an absolute path)
- `git status --porcelain | grep -E '\.md$' && echo "WARN: uncommitted skill changes — the command will mix them in"` — warn the user if local skill changes exist
- Parse arguments
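The flag handling in the last pre-flight step could look like this — a sketch only; the flag names are the command's own, the variable names are illustrative:

```shell
#!/usr/bin/env bash
# Sketch: map the slash-command flags onto the variables the later
# steps use. Defaults mirror the documented behavior.
set -euo pipefail

ARG_DISCOVER=" --discover"   # default: discovery on
PRIORITIES="critical,high"   # default priority buckets
DRY_RUN=0
SINGLE_COMMIT=0

parse_args() {
  while [ $# -gt 0 ]; do
    case "$1" in
      --skip-discover) ARG_DISCOVER="" ;;
      --priorities)    PRIORITIES="$2"; shift ;;
      --dry-run)       DRY_RUN=1 ;;
      --single-commit) SINGLE_COMMIT=1 ;;
      *) echo "unknown flag: $1" >&2; return 1 ;;
    esac
    shift
  done
}

parse_args --skip-discover --priorities critical --dry-run
echo "discover='${ARG_DISCOVER}' priorities=${PRIORITIES} dry_run=${DRY_RUN}"
# → discover='' priorities=critical dry_run=1
```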
### 2. Run the polling phase
```bash
node scripts/kb-update/run-weekly-update.mjs --force${ARG_DISCOVER}
```
Where `${ARG_DISCOVER}` is `--discover` unless `--skip-discover` was given.
The output is expected to write `data/change-report.json`, plus new registry entries if discovery ran.
### 3. Present the report summary
```bash
node scripts/kb-update/report-changes.mjs | head -40
```
Present to the user:
- Number of files per priority
- Which priorities will be processed (default: critical + high)
- Estimated number of `microsoft_docs_fetch` calls (≈ sum of changed sources per file)
- Ask: "Continue with the update? (y/n)"
If `--dry-run`: stop here and do not update any files.
### 4. Per-file update (after the user's `y`)
For each file in the selected priorities:
a. **Read the current file:** `Read` on the file path
b. **Fetch the updated sources:** for each URL in `change-report.json[file].changed_urls`, run `microsoft_docs_fetch` on the URL
c. **Identify changes:** compare the fetched markdown against the file's existing sections. Focus on factual changes (new info, updated features, deprecation notices) — not minor rewordings
d. **Update the file:** `Edit` with the relevant changes. Keep the "For Cosmo" section and the overall structure. Bump the `Last updated: YYYY-MM-DD` header to today's date
e. **Commit:** `git add <fil>` + `git commit -m "chore(ms-ai-architect): refresh KB $(basename <fil>) [skip-docs]"` unless `--single-commit` was given
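The commit subject in step e can be sketched as a tiny formatter (the helper name and sample path are hypothetical; the prefix and `[skip-docs]` suffix are the repo's own convention):

```shell
#!/usr/bin/env bash
# Sketch: build the per-file KB-refresh commit subject from a file path.
set -euo pipefail

kb_commit_msg() {
  printf 'chore(ms-ai-architect): refresh KB %s [skip-docs]' "$(basename "$1")"
}

kb_commit_msg "skills/azure-ai/references/example.md"
# → chore(ms-ai-architect): refresh KB example.md [skip-docs]
```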
### 5. Single-commit mode
If `--single-commit`: skip the per-file commits and make one combined commit at the end:
```bash
git add skills/
git commit -m "chore(ms-ai-architect): refresh KB — N files [skip-docs]"
```
### 6. Push (if the user confirms)
Ask: "Push to Forgejo origin/main? (y/n)". Per the global push policy, direct pushes to main are pre-authorized, but ask anyway here since this is a bulk operation.
```bash
git push origin main
```
### 7. Summary
Report:
- Number of files updated per priority
- Number of commits made
- If discovery ran: the number of new URLs discovered and added to the registry
- Any files that were skipped (e.g. no real changes in the fetched content)
- `data/change-report.json` stays on disk for diagnostics
## Pitfalls
- **Sitemap coverage:** ~69% of the URLs match against the sitemap. ~31% (mostly `azure/ai-foundry/openai/`) are missing due to URL restructuring on Microsoft's side. These are reported as "always stale" and must be assessed manually
- **`microsoft_docs_fetch` latency:** each fetch takes 2-5 s. 9 critical + 44 high files × ~1.5 sources each = ~80 fetches = ~3-7 minutes
- **Model choice:** Opus is used because diff reasoning + text synthesis require nuance. For simpler "just refresh dates" updates Sonnet is sufficient — the user can override with an explicit `--model claude-sonnet-4-6` in the Claude Code config
- **MCP availability:** the command requires the `microsoft-learn` MCP server to be active. Check with `claude mcp list` on first run
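The latency bullet's estimate is simple arithmetic; a sketch using the figures above (integer shell arithmetic, so the bounds are approximate):

```shell
#!/usr/bin/env bash
# Sketch: back-of-envelope runtime for a default critical+high run.
set -euo pipefail

files=$((9 + 44))           # critical + high files from the report
fetches=$((files * 3 / 2))  # ~1.5 changed sources per file → ~80 fetches
min_s=$((fetches * 2))      # 2 s per fetch
max_s=$((fetches * 5))      # 5 s per fetch

echo "~${fetches} fetches, ${min_s}-${max_s} s (roughly 3-7 minutes)"
```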
## When to run
- **Recommended:** weekly or monthly, depending on how sensitive your projects are to KB freshness
- **Before an important assessment:** run with `--priorities critical,high,medium` before a major `/architect:utredning` or `/architect:adr`
- **After Microsoft events:** Build, Ignite, or another major Microsoft conference → expect many changes
## Scheduling
The plugin schedules **nothing**. If you want periodic runs, set up a cron job / launchd job / systemd timer / GitHub Actions workflow that runs `node scripts/kb-update/run-weekly-update.mjs --force --discover` (without the apply phase) and reminds you to run `/architect:kb-update` in an interactive Claude Code session.
The apply phase (updating files + committing) cannot be automated within this plugin — it requires LLM reasoning over the changes and human judgment, and is deliberately designed to run from an open Claude Code session.

View file

@@ -6,7 +6,6 @@
import { readdirSync, readFileSync, existsSync } from 'node:fs';
import { join, relative } from 'node:path';
import { spawn } from 'node:child_process';
import { getCacheDir } from '../../scripts/kb-update/lib/cross-platform-paths.mjs';
const pluginRoot = process.env.CLAUDE_PLUGIN_ROOT || join(process.cwd());
const cwd = process.cwd();
@@ -131,25 +130,6 @@ if (staleLevels.critical > 0) staleEntries.push(`${staleLevels.critical} critica
if (staleLevels.high > 0) staleEntries.push(`${staleLevels.high} high`);
if (staleLevels.medium > 0) staleEntries.push(`${staleLevels.medium} medium`);
// KB-update auto-cron status (written by scripts/kb-update/weekly-kb-cron.mjs).
// Surfaced BEFORE the staleness-poll block because cron failure is a higher-
// signal event (something the user actively configured stopped working) than
// the slower-moving "files are getting old" signal that follows.
try {
const kbStatusPath = join(getCacheDir('ms-ai-architect'), 'kb-update-status.json');
if (existsSync(kbStatusPath)) {
const kbStatus = JSON.parse(readFileSync(kbStatusPath, 'utf8'));
const surfaceStatuses = new Set(['failure', 'partial', 'budget_exceeded']);
if (kbStatus && surfaceStatuses.has(kbStatus.last_run_status)) {
parts.push(
`KB-update: ${kbStatus.last_run_status} (${kbStatus.last_run_ts}, log: ${kbStatus.log_file})`
);
}
}
} catch {
// Never block session start — silent on read or parse failure.
}
if (staleEntries.length > 0) {
const pollAge = lastPollDaysAgo < Infinity ? ` (pollet ${Math.floor(lastPollDaysAgo)}d siden)` : '';
parts.push(`KB: ${staleEntries.join(', ')} needs update${pollAge}`);


@@ -797,7 +797,7 @@
// COMMAND CATALOG (Step 4)
// ============================================================
//
// Canonical single source of truth for all 24 commands. Drives:
// Canonical single source of truth for all 25 commands. Drives:
// - Step 5/8: form rendering via input_fields[]
// - Step 9: catalog UI grouped by category
// - Step 11: parser-routing via report_archetype
@@ -1335,6 +1335,25 @@
      input_fields: [
        { id: 'file_path', label: 'Filsti til markdown', type: 'text', from: 'local' }
      ]
    },
    {
      id: 'kb-update',
      category: 'tool',
      label: 'KB-refresh (manuell)',
      description: 'Poll Microsoft Learn-sitemaps, sammenligne mot lokale Last updated-headere, oppdatere endrede filer via microsoft_docs_fetch og committe. Schedulering er brukerens valg — pluginen schedulerer ingenting.',
      argument_hint: '[--skip-discover] [--priorities critical,high,medium,low] [--dry-run] [--single-commit]',
      calls_agent: null,
      kb_files: [],
      produces_report: false,
      report_archetype: null,
      report_root_class: null,
      renderer: null,
      input_fields: [
        { id: 'priorities', label: 'Prioriteter', type: 'select', from: 'local', options: ['critical,high', 'critical', 'critical,high,medium', 'critical,high,medium,low'] },
        { id: 'skip_discover', label: 'Hopp over discovery av nye URLer', type: 'boolean', from: 'local' },
        { id: 'dry_run', label: 'Dry-run (rapport uten apply)', type: 'boolean', from: 'local' },
        { id: 'single_commit', label: 'Samle alt i én commit', type: 'boolean', from: 'local' }
      ]
    }
  ]
};
@@ -1766,7 +1785,7 @@
'<button type="button" class="tracks__card tracks__card--expert" data-action="goto-catalog">' +
'<span class="tracks__card-icon" aria-hidden="true"></span>' +
'<h3 class="tracks__card-title">Command-katalog</h3>' +
-'<p class="tracks__card-desc">Bla i alle 24 commands gruppert på kategori. Generer pipeline-strenger uten et prosjekt.</p>' +
+'<p class="tracks__card-desc">Bla i alle 25 commands gruppert på kategori. Generer pipeline-strenger uten et prosjekt.</p>' +
'<span class="tracks__card-meta"><span>' + CATALOG.commands.length + ' commands</span><span class="tracks__card-cta">Bla →</span></span>' +
'</button>' +
'</div>'
@@ -2267,7 +2286,7 @@
// CATALOG SURFACE (Step 9)
// ============================================================
//
-// 24 commands grouped into 5 .expansion groups (CATALOG.categories) with
+// 25 commands grouped into 5 .expansion groups (CATALOG.categories) with
// a search input that filters on id+label+description+argument_hint.
// Each category expansion renders a .catalog-cards-grid with cards.
// "Åpne skjema" on a card opens renderCommandForm() in a modal.


@@ -1,501 +0,0 @@
#!/usr/bin/env node
// install-kb-cron.mjs — Standalone cross-OS install helper for the weekly
// KB-update cron job. Reads the appropriate template from
// scripts/kb-update/templates/, fills {{NODE_BIN}}, {{PLUGIN_ROOT}},
// {{LOG_FILE}}, {{SCHEDULE_HOUR/MINUTE/DAY_OF_WEEK}} placeholders, writes
// to the platform-specific scheduler dir, and registers the job with the
// host scheduler.
//
// macOS   → ~/Library/LaunchAgents/com.fromaitochitta.ms-ai-architect.kb-update.plist
//           launchctl bootstrap gui/<uid> <path> (EIO fallback to load -w)
// Linux   → ~/.config/systemd/user/{ms-ai-architect-kb-update.service, .timer}
//           systemctl --user daemon-reload && enable --now ms-ai-architect-kb-update.timer
// Windows → invoke ms-ai-architect-kb-update.ps1 via
//           powershell -ExecutionPolicy Bypass -File <path>
//           (template registers via Register-ScheduledTask itself).
// Marked beta — not validated against a real Windows machine.
//
// Usage:
// install-kb-cron.mjs [--print-only] [--target macos|linux|windows]
// [--uninstall [--purge]]
// [--node-bin <path>] [--claude-bin <path>]
// [--schedule "M H * * D"] default: "23 4 * * 3" (Wed 04:23)
//
// --print-only renders the filled template to stdout and exits without
// touching the filesystem (no scheduler dirs, no log dirs, nothing under
// HOME). Use it for inspection or for cross-target rendering on a
// developer machine.
import { spawnSync } from 'node:child_process';
import {
  readFileSync,
  writeFileSync,
  mkdirSync,
  existsSync,
  unlinkSync,
  rmSync,
} from 'node:fs';
import { homedir, platform as osPlatform } from 'node:os';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
const __dirname = dirname(fileURLToPath(import.meta.url));
const PLUGIN_ROOT = join(__dirname, '..');
const TEMPLATES_DIR = join(__dirname, 'kb-update', 'templates');
const APP = 'ms-ai-architect';
const LAUNCHD_LABEL = 'com.fromaitochitta.ms-ai-architect.kb-update';
const SYSTEMD_UNIT = 'ms-ai-architect-kb-update';
const WIN_TASK_NAME = 'ms-ai-architect-kb-update';
// ---------- CLI parsing ----------
function printUsage() {
  console.log(`Usage: install-kb-cron.mjs [options]

Installs (or removes) the weekly Microsoft Learn KB-update job for the
ms-ai-architect plugin on the host scheduler.

Options:
  --print-only            Print filled template to stdout, no file writes
  --target <os>           Target OS: macos|linux|windows (default: auto-detect)
  --uninstall             Reverse the registration (idempotent)
  --purge                 With --uninstall: also delete logs/status/.kb-backup
  --node-bin <path>       Path to node binary (default: process.execPath)
  --claude-bin <path>     Path to claude binary (default: 'claude' on PATH)
  --schedule "M H * * D"  Cron expression (default: "23 4 * * 3" = Wed 04:23)
  --help, -h              Show this message and exit
`);
}
function parseArgs(argv) {
  const args = {
    printOnly: false,
    target: null,
    uninstall: false,
    purge: false,
    nodeBin: null,
    claudeBin: null,
    schedule: '23 4 * * 3',
  };
  for (let i = 0; i < argv.length; i++) {
    const a = argv[i];
    const eq = (name) => (a.startsWith(`${name}=`) ? a.slice(name.length + 1) : null);
    if (a === '--print-only') args.printOnly = true;
    else if (a === '--uninstall') args.uninstall = true;
    else if (a === '--purge') args.purge = true;
    else if (a === '--help' || a === '-h') { printUsage(); process.exit(0); }
    else if (a === '--target') args.target = argv[++i];
    else if (eq('--target') !== null) args.target = eq('--target');
    else if (a === '--node-bin') args.nodeBin = argv[++i];
    else if (eq('--node-bin') !== null) args.nodeBin = eq('--node-bin');
    else if (a === '--claude-bin') args.claudeBin = argv[++i];
    else if (eq('--claude-bin') !== null) args.claudeBin = eq('--claude-bin');
    else if (a === '--schedule') args.schedule = argv[++i];
    else if (eq('--schedule') !== null) args.schedule = eq('--schedule');
    else {
      console.error(`Unknown argument: ${a}`);
      console.error('Run with --help to see usage.');
      process.exit(2);
    }
  }
  return args;
}
// ---------- Target detection ----------
function detectHostTarget() {
  switch (process.platform) {
    case 'darwin': return 'macos';
    case 'linux': return 'linux';
    case 'win32': return 'windows';
    default: return null;
  }
}
const VALID_TARGETS = new Set(['macos', 'linux', 'windows']);
// ---------- Schedule parsing ----------
function parseSchedule(expr) {
  if (typeof expr !== 'string') {
    throw new Error('invalid schedule: not a string');
  }
  const parts = expr.trim().split(/\s+/);
  if (parts.length !== 5) {
    throw new Error(
      `invalid schedule "${expr}" — expected 5 cron fields "M H * * D"`
    );
  }
  const [m, h, dom, mon, dow] = parts;
  if (dom !== '*' || mon !== '*') {
    throw new Error(
      `invalid schedule "${expr}" — day-of-month and month must be "*"`
    );
  }
  const minute = Number(m);
  const hour = Number(h);
  const dayOfWeek = Number(dow);
  if (!Number.isInteger(minute) || minute < 0 || minute > 59) {
    throw new Error(`invalid schedule minute "${m}" (expected 0-59)`);
  }
  if (!Number.isInteger(hour) || hour < 0 || hour > 23) {
    throw new Error(`invalid schedule hour "${h}" (expected 0-23)`);
  }
  if (!Number.isInteger(dayOfWeek) || dayOfWeek < 0 || dayOfWeek > 6) {
    throw new Error(`invalid schedule day-of-week "${dow}" (expected 0-6)`);
  }
  return { minute, hour, dayOfWeek };
}
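Worked through on the default expression, the five-field split above yields the three values the templates consume. This is a standalone sketch of that extraction, not a call into the installer:

```javascript
// Sketch: the field extraction parseSchedule() performs on the default
// "23 4 * * 3" expression (minute, hour, day-of-month, month, day-of-week).
const expr = '23 4 * * 3'; // Wed 04:23
const [m, h, dom, mon, dow] = expr.trim().split(/\s+/);
// day-of-month and month must stay '*' — only minute/hour/day-of-week survive.
const parsed = { minute: Number(m), hour: Number(h), dayOfWeek: Number(dow) };
console.log(parsed);
```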
// ---------- Path resolution (no side effects) ----------
function resolveAppPaths(target) {
  const h = homedir();
  if (target === 'macos') {
    return {
      logFile: join(h, 'Library', 'Logs', APP, 'kb-update.log'),
      logDir: join(h, 'Library', 'Logs', APP),
      cacheDir: join(h, 'Library', 'Caches', APP),
    };
  }
  if (target === 'windows') {
    const lad = process.env.LOCALAPPDATA || join(h, 'AppData', 'Local');
    return {
      logFile: join(lad, APP, 'Logs', 'kb-update.log'),
      logDir: join(lad, APP, 'Logs'),
      cacheDir: join(lad, APP, 'Cache'),
    };
  }
  // linux
  const xdgState = process.env.XDG_STATE_HOME || join(h, '.local', 'state');
  const xdgCache = process.env.XDG_CACHE_HOME || join(h, '.cache');
  return {
    logFile: join(xdgState, APP, 'logs', 'kb-update.log'),
    logDir: join(xdgState, APP, 'logs'),
    cacheDir: join(xdgCache, APP),
  };
}
// ---------- Template substitution ----------
function fillTemplate(content, vars) {
  return content
    .replace(/\{\{NODE_BIN\}\}/g, vars.nodeBin)
    .replace(/\{\{PLUGIN_ROOT\}\}/g, vars.pluginRoot)
    .replace(/\{\{LOG_FILE\}\}/g, vars.logFile)
    .replace(/\{\{SCHEDULE_HOUR\}\}/g, String(vars.hour))
    .replace(/\{\{SCHEDULE_MINUTE\}\}/g, String(vars.minute))
    .replace(/\{\{SCHEDULE_DAY_OF_WEEK\}\}/g, String(vars.dayOfWeek));
}

function readTemplate(name) {
  return readFileSync(join(TEMPLATES_DIR, name), 'utf8');
}
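The substitution is plain global string replacement. A self-contained sketch on a one-line fragment shaped like the plist template (the node path here is illustrative, not a real install value):

```javascript
// Sketch of the {{...}} substitution fillTemplate() applies.
const tpl = '<string>{{NODE_BIN}}</string><integer>{{SCHEDULE_HOUR}}</integer>';
const filled = tpl
  .replace(/\{\{NODE_BIN\}\}/g, '/usr/local/bin/node') // hypothetical path
  .replace(/\{\{SCHEDULE_HOUR\}\}/g, String(4));
console.log(filled);
```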
// ---------- MCP / WSL detection ----------
function checkMcpServer() {
  const claudeJson = join(homedir(), '.claude.json');
  if (!existsSync(claudeJson)) return null;
  try {
    const data = JSON.parse(readFileSync(claudeJson, 'utf8'));
    const mcp = data && data.mcpServers ? data.mcpServers : {};
    return Boolean(mcp['microsoft-learn']);
  } catch {
    return null;
  }
}

function isWsl() {
  if (process.platform !== 'linux') return false;
  try {
    const v = readFileSync('/proc/version', 'utf8');
    return /Microsoft|WSL/i.test(v);
  } catch {
    return false;
  }
}
// ---------- Per-target install ----------
function installMacos(args, vars) {
  const filled = fillTemplate(readTemplate(`${LAUNCHD_LABEL}.plist`), vars);
  if (args.printOnly) {
    process.stdout.write(filled);
    return;
  }
  const dest = join(homedir(), 'Library', 'LaunchAgents', `${LAUNCHD_LABEL}.plist`);
  mkdirSync(dirname(dest), { recursive: true });
  mkdirSync(vars.logDir, { recursive: true });
  writeFileSync(dest, filled, 'utf8');
  // launchctl bootstrap gui/<uid> <path> (with load -w fallback on error).
  if (typeof process.getuid === 'function') {
    const uid = process.getuid();
    const r = spawnSync('launchctl', ['bootstrap', `gui/${uid}`, dest], {
      encoding: 'utf8',
    });
    if (r.status !== 0) {
      console.error(`launchctl bootstrap returned ${r.status}: ${r.stderr.trim()}`);
      console.error('Falling back to: launchctl load -w');
      const r2 = spawnSync('launchctl', ['load', '-w', dest], { encoding: 'utf8' });
      if (r2.status !== 0) {
        throw new Error(`launchctl load -w failed: ${r2.stderr.trim()}`);
      }
    }
  }
  console.log(`✓ Installed ${dest}`);
  console.log('Next steps:');
  console.log(`  - Verify: launchctl list | grep ${LAUNCHD_LABEL}`);
  console.log(`  - Logs: ${vars.logFile}`);
}
function installLinux(args, vars) {
  const serviceFilled = fillTemplate(readTemplate(`${SYSTEMD_UNIT}.service`), vars);
  const timerFilled = fillTemplate(readTemplate(`${SYSTEMD_UNIT}.timer`), vars);
  if (args.printOnly) {
    process.stdout.write(`# === ${SYSTEMD_UNIT}.service ===\n`);
    process.stdout.write(serviceFilled);
    process.stdout.write(`\n# === ${SYSTEMD_UNIT}.timer ===\n`);
    process.stdout.write(timerFilled);
    return;
  }
  // Pre-check: is systemd present? (skip if --target linux on non-linux host —
  // user is likely cross-rendering by mistake; refuse to actually install.)
  if (process.platform !== 'linux') {
    throw new Error(
      `cannot install systemd units on host platform "${process.platform}" — ` +
        `use --print-only to render templates for cross-OS inspection`
    );
  }
  const sys = spawnSync('systemctl', ['is-system-running'], { encoding: 'utf8' });
  if (sys.status === null || sys.error) {
    throw new Error('systemctl not available — systemd is not running on this system');
  }
  if (isWsl()) {
    console.warn('WARNING: detected WSL — systemd --user may need manual setup.');
  }
  const userDir = join(homedir(), '.config', 'systemd', 'user');
  mkdirSync(userDir, { recursive: true });
  mkdirSync(vars.logDir, { recursive: true });
  const servicePath = join(userDir, `${SYSTEMD_UNIT}.service`);
  const timerPath = join(userDir, `${SYSTEMD_UNIT}.timer`);
  writeFileSync(servicePath, serviceFilled, 'utf8');
  writeFileSync(timerPath, timerFilled, 'utf8');
  spawnSync('systemctl', ['--user', 'daemon-reload'], { stdio: 'inherit' });
  const en = spawnSync(
    'systemctl',
    ['--user', 'enable', '--now', `${SYSTEMD_UNIT}.timer`],
    { stdio: 'inherit' }
  );
  if (en.status !== 0) {
    throw new Error(`systemctl --user enable --now ${SYSTEMD_UNIT}.timer failed`);
  }
  console.log(`✓ Installed ${servicePath}`);
  console.log(`✓ Installed ${timerPath}`);
  console.log('Next steps:');
  console.log(`  - Verify: systemctl --user list-timers | grep ${SYSTEMD_UNIT}`);
  console.log(`  - Optional: sudo loginctl enable-linger $USER (autostart without login)`);
  console.log(`  - Logs: ${vars.logFile}`);
}
function installWindows(args, vars) {
  // The Windows template shares its basename with the systemd unit; use the
  // task-name constant so the intent reads correctly.
  const filled = fillTemplate(readTemplate(`${WIN_TASK_NAME}.ps1`), vars);
  if (args.printOnly) {
    process.stdout.write(filled);
    return;
  }
  if (process.platform !== 'win32') {
    throw new Error(
      `cannot register Windows scheduled task on host platform "${process.platform}" — ` +
        `use --print-only to render the .ps1 for cross-OS inspection`
    );
  }
  console.warn(
    'NOTE: Windows install path is BETA — not validated against a real Windows machine.'
  );
  // Materialize the .ps1 in the cache dir, then invoke it.
  mkdirSync(vars.cacheDir, { recursive: true });
  mkdirSync(vars.logDir, { recursive: true });
  const ps1Path = join(vars.cacheDir, 'install-kb-cron.ps1');
  writeFileSync(ps1Path, filled, 'utf8');
  const r = spawnSync(
    'powershell',
    ['-ExecutionPolicy', 'Bypass', '-File', ps1Path],
    { stdio: 'inherit' }
  );
  if (r.status !== 0) {
    throw new Error(`powershell install failed (status=${r.status})`);
  }
  console.log(`✓ Registered Windows task '${WIN_TASK_NAME}'`);
  console.log('Next steps:');
  console.log(`  - Verify: schtasks /Query /TN ${WIN_TASK_NAME}`);
  console.log(`  - Logs: ${vars.logFile}`);
}
// ---------- Per-target uninstall ----------
function uninstallMacos(host) {
  const dest = join(homedir(), 'Library', 'LaunchAgents', `${LAUNCHD_LABEL}.plist`);
  if (!existsSync(dest)) {
    console.log(`(nothing to remove at ${dest})`);
    return;
  }
  if (host === 'macos' && typeof process.getuid === 'function') {
    const uid = process.getuid();
    spawnSync('launchctl', ['bootout', `gui/${uid}`, dest], { encoding: 'utf8' });
    spawnSync('launchctl', ['unload', '-w', dest], { encoding: 'utf8' });
  }
  unlinkSync(dest);
  console.log(`✓ Removed ${dest}`);
}

function uninstallLinux(host) {
  const userDir = join(homedir(), '.config', 'systemd', 'user');
  const servicePath = join(userDir, `${SYSTEMD_UNIT}.service`);
  const timerPath = join(userDir, `${SYSTEMD_UNIT}.timer`);
  const anyExists = existsSync(servicePath) || existsSync(timerPath);
  if (!anyExists) {
    console.log(`(nothing to remove at ${userDir})`);
    return;
  }
  if (host === 'linux') {
    spawnSync(
      'systemctl',
      ['--user', 'disable', '--now', `${SYSTEMD_UNIT}.timer`],
      { encoding: 'utf8' }
    );
  }
  for (const p of [servicePath, timerPath]) {
    if (existsSync(p)) {
      unlinkSync(p);
      console.log(`✓ Removed ${p}`);
    }
  }
  if (host === 'linux') {
    spawnSync('systemctl', ['--user', 'daemon-reload'], { encoding: 'utf8' });
  }
}

function uninstallWindows(host) {
  if (host === 'windows') {
    const r = spawnSync(
      'schtasks',
      ['/Delete', '/TN', WIN_TASK_NAME, '/F'],
      { encoding: 'utf8' }
    );
    if (r.status === 0) {
      console.log(`✓ Removed Windows task '${WIN_TASK_NAME}'`);
    } else {
      console.log(`(nothing to remove — task '${WIN_TASK_NAME}' not registered)`);
    }
  } else {
    console.log(`(nothing to remove — schtasks unavailable on host '${host}')`);
  }
}

function purgeAppFiles(target) {
  const paths = resolveAppPaths(target);
  const backupDir = join(PLUGIN_ROOT, '.kb-backup');
  let removed = 0;
  for (const d of [paths.logDir, paths.cacheDir, backupDir]) {
    if (existsSync(d)) {
      rmSync(d, { recursive: true, force: true });
      console.log(`✓ Purged ${d}`);
      removed++;
    }
  }
  if (removed === 0) {
    console.log('(nothing to purge)');
  }
}
// ---------- Main ----------
function main() {
  const args = parseArgs(process.argv.slice(2));
  if (!args.target) args.target = detectHostTarget();
  if (!args.target || !VALID_TARGETS.has(args.target)) {
    console.error(
      `unsupported or invalid --target "${args.target}" — must be one of: macos, linux, windows`
    );
    process.exit(2);
  }
  // Resolve binaries.
  const nodeBin = args.nodeBin || process.execPath;
  // claudeBin is only consumed by the templates that reference it; current
  // template set assumes 'claude' is on PATH inside the cron environment, so
  // the override is currently a no-op pass-through. Accept the flag so future
  // template revisions can pick it up without a CLI break.
  void args.claudeBin;
  // Schedule parsing.
  let sched;
  try {
    sched = parseSchedule(args.schedule);
  } catch (err) {
    console.error(`ERROR: ${err.message}`);
    process.exit(2);
  }
  const paths = resolveAppPaths(args.target);
  const vars = {
    nodeBin,
    pluginRoot: PLUGIN_ROOT,
    logFile: paths.logFile,
    logDir: paths.logDir,
    cacheDir: paths.cacheDir,
    minute: sched.minute,
    hour: sched.hour,
    dayOfWeek: sched.dayOfWeek,
  };
  // Uninstall path.
  if (args.uninstall) {
    const host = detectHostTarget();
    if (args.target === 'macos') uninstallMacos(host);
    if (args.target === 'linux') uninstallLinux(host);
    if (args.target === 'windows') uninstallWindows(host);
    if (args.purge) purgeAppFiles(args.target);
    return;
  }
  // MCP soft-warn (install path only).
  if (!args.printOnly) {
    const mcp = checkMcpServer();
    if (mcp === false) {
      console.warn(
        'WARNING: ~/.claude.json has no `microsoft-learn` MCP server entry. ' +
          'KB-updates will run but the agent will lack live Microsoft Learn access.'
      );
    } else if (mcp === null) {
      console.warn('WARNING: could not read ~/.claude.json — skipping MCP server check.');
    }
  }
  if (args.target === 'macos') installMacos(args, vars);
  if (args.target === 'linux') installLinux(args, vars);
  if (args.target === 'windows') installWindows(args, vars);
}
try {
  main();
} catch (err) {
  console.error(`ERROR: ${err.message}`);
  process.exit(1);
}


@@ -1,99 +0,0 @@
// auth-mode.mjs — Detect and validate Claude auth mode for cron-safe runs.
// Zero dependencies. The detector and validator are pure-testable: both
// `runner` (claude CLI invoker) and `claudeJsonPath` (~/.claude.json) are
// dependency-injected so tests stub them rather than spawning a real
// subprocess or touching the user's home directory.
//
// Subscription browser-OAuth tokens expire ~15h and are architecturally
// incompatible with cron. This lib surfaces that case as a hard fail so the
// installer/cron-runner can refuse to proceed.
import { readFileSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';
import { execFileSync } from 'node:child_process';
/**
 * Default subprocess runner invokes a command and returns its exit code.
 * Returns 0 on success, the actual exit code on failure, 127 on spawn error.
 */
function defaultRunner(cmd, args) {
  try {
    execFileSync(cmd, args, { stdio: 'ignore' });
    return 0;
  } catch (err) {
    if (typeof err.status === 'number') return err.status;
    return 127;
  }
}
/**
 * Safely read and parse a Claude config JSON file. Returns null on any error.
 * @param {string} path
 * @returns {object|null}
 */
export function readClaudeJson(path) {
  try {
    const text = readFileSync(path, 'utf8');
    const obj = JSON.parse(text);
    return obj && typeof obj === 'object' ? obj : null;
  } catch {
    return null;
  }
}
/**
 * Detect the active Claude authentication mode.
 *
 * Resolution order:
 *   1. ANTHROPIC_API_KEY env-var       → 'api-key'
 *   2. CLAUDE_CODE_OAUTH_TOKEN env-var → 'long-oauth'
 *   3. ~/.claude.json onboarded + `claude auth status` exits 0
 *                                      → 'subscription-browser-only'
 *   4. otherwise                       → 'unauthenticated'
 *
 * @param {object} [opts]
 * @param {(cmd: string, args: string[]) => number} [opts.runner]
 * @param {string} [opts.claudeJsonPath]
 * @param {object} [opts.env] defaults to process.env
 * @returns {'api-key'|'long-oauth'|'subscription-browser-only'|'unauthenticated'}
 */
export function detectAuthMode(opts = {}) {
  const env = opts.env ?? process.env;
  const runner = opts.runner ?? defaultRunner;
  const claudeJsonPath = opts.claudeJsonPath ?? join(homedir(), '.claude.json');
  if (env.ANTHROPIC_API_KEY && env.ANTHROPIC_API_KEY.trim() !== '') {
    return 'api-key';
  }
  if (env.CLAUDE_CODE_OAUTH_TOKEN && env.CLAUDE_CODE_OAUTH_TOKEN.trim() !== '') {
    return 'long-oauth';
  }
  const claudeJson = readClaudeJson(claudeJsonPath);
  if (!claudeJson || claudeJson.hasCompletedOnboarding !== true) {
    return 'unauthenticated';
  }
  const exitCode = runner('claude', ['auth', 'status']);
  return exitCode === 0 ? 'subscription-browser-only' : 'unauthenticated';
}
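The env-var half of the resolution order is easy to see in isolation. A standalone sketch covering steps 1-2 only (the ~/.claude.json and subprocess probe of steps 3-4 is stubbed out here, so everything else falls through to 'unauthenticated'):

```javascript
// Sketch of detectAuthMode()'s env-var precedence (steps 1-2 of the
// resolution order only; the config-file/subprocess probe is omitted).
function sketchDetectAuthMode(env) {
  if (env.ANTHROPIC_API_KEY && env.ANTHROPIC_API_KEY.trim() !== '') return 'api-key';
  if (env.CLAUDE_CODE_OAUTH_TOKEN && env.CLAUDE_CODE_OAUTH_TOKEN.trim() !== '') return 'long-oauth';
  return 'unauthenticated';
}
console.log(sketchDetectAuthMode({ ANTHROPIC_API_KEY: 'sk-test' }));
console.log(sketchDetectAuthMode({ CLAUDE_CODE_OAUTH_TOKEN: 'tok' }));
console.log(sketchDetectAuthMode({}));
```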
/**
 * Throw a clear error if the detected mode is incompatible with cron.
 * Subscription-browser-only OAuth dies after ~15h; unauthenticated has no
 * credential. Both must be rejected before headless cron runs.
 *
 * @param {string} mode
 * @throws {Error} with code 'EAUTHCRON' if mode is not safe for cron
 */
export function validateAuthForCron(mode) {
  if (mode === 'api-key' || mode === 'long-oauth') return;
  const e = new Error(
    `Auth mode "${mode}" is not safe for cron. ` +
      'Run `claude setup-token` to generate a long-lived OAuth token, ' +
      'or set ANTHROPIC_API_KEY in the cron environment.'
  );
  e.code = 'EAUTHCRON';
  e.detectedMode = mode;
  throw e;
}


@@ -1,36 +0,0 @@
// cost-estimat.mjs — Heuristic cost-estimate for KB-update runs.
// Pure function. Auth-mode-aware: api-key returns numeric USD,
// subscription modes return null USD + kvote_warn flag.
// Zero dependencies.
const AVG_INPUT_TOKENS_PER_FILE = 3000;
const AVG_OUTPUT_TOKENS_PER_FILE = 1500;
const SONNET_INPUT_USD_PER_M = 3.0;
const SONNET_OUTPUT_USD_PER_M = 15.0;
const SUBSCRIPTION_MODES = new Set(['long-oauth', 'subscription-browser-only']);
/**
 * Estimate cost (and quota-warn flag) for a run of N files at given priorities.
 * Filters to critical + high only (medium/low excluded per brief).
 *
 * @param {object} priorities { critical, high, medium, low } file counts
 * @param {object} [opts]
 * @param {string} [opts.authMode] 'api-key' | 'long-oauth' | 'subscription-browser-only' | 'unauthenticated'
 * @returns {{tokens_input: number, tokens_output: number, usd: number|null, kvote_warn: boolean}}
 */
export function estimateCost(priorities = {}, opts = {}) {
  const authMode = opts.authMode ?? 'api-key';
  const fileCount = (priorities.critical ?? 0) + (priorities.high ?? 0);
  const tokens_input = fileCount * AVG_INPUT_TOKENS_PER_FILE;
  const tokens_output = fileCount * AVG_OUTPUT_TOKENS_PER_FILE;
  if (SUBSCRIPTION_MODES.has(authMode)) {
    return { tokens_input, tokens_output, usd: null, kvote_warn: true };
  }
  const usd =
    (tokens_input / 1_000_000) * SONNET_INPUT_USD_PER_M +
    (tokens_output / 1_000_000) * SONNET_OUTPUT_USD_PER_M;
  return { tokens_input, tokens_output, usd, kvote_warn: false };
}
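A worked example of the heuristic, for a hypothetical run of 10 critical + 5 high files, with the module's constants inlined:

```javascript
// Reproduces the estimateCost() arithmetic for 10 critical + 5 high files
// in api-key mode (the file counts are illustrative).
const fileCount = 10 + 5;                // critical + high; medium/low excluded
const tokens_input = fileCount * 3000;   // AVG_INPUT_TOKENS_PER_FILE
const tokens_output = fileCount * 1500;  // AVG_OUTPUT_TOKENS_PER_FILE
const usd =
  (tokens_input / 1_000_000) * 3.0 +     // SONNET_INPUT_USD_PER_M
  (tokens_output / 1_000_000) * 15.0;    // SONNET_OUTPUT_USD_PER_M
console.log(tokens_input, tokens_output, usd.toFixed(4)); // 45000 22500 0.4725
```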


@@ -1,166 +0,0 @@
// lock-file.mjs — Exclusive lock with PID + mtime stale-detection.
// Zero dependencies. Uses fs.writeFileSync('wx') for atomic exclusive create.
// Stale-detection is OR-based: stale if the holder PID is dead OR the lock's
// mtime exceeds the threshold. Either condition alone breaks the lock: the
// dead-PID check catches crashed or SIGKILLed holders, while the mtime check
// catches PID-reuse races where an unrelated process now owns the recorded
// PID. Long runs may opt in to mtime refresh via refreshIntervalMs.
import { writeFileSync, readFileSync, statSync, unlinkSync, utimesSync } from 'node:fs';
import { hostname } from 'node:os';
import { join } from 'node:path';
import { getCacheDir } from './cross-platform-paths.mjs';
const DEFAULT_STALE_THRESHOLD_MS = 60 * 60 * 1000; // 1 hour
const DEFAULT_LOCK_NAME = 'kb-update.lock';
/**
 * Check whether a PID identifies a live process.
 * @param {number} pid POSIX process id
 * @returns {boolean}
 */
export function isPidAlive(pid) {
  if (typeof pid !== 'number' || !Number.isFinite(pid) || pid <= 0) {
    return false;
  }
  try {
    process.kill(pid, 0);
    return true;
  } catch (err) {
    // EPERM means the process exists but we lack signal permission — still alive.
    return err && err.code === 'EPERM';
  }
}
function safeReadLock(lockPath) {
  try {
    return JSON.parse(readFileSync(lockPath, 'utf8'));
  } catch {
    return null;
  }
}

function lockMtimeMs(lockPath) {
  try {
    return statSync(lockPath).mtimeMs;
  } catch {
    return null;
  }
}

function writeLockFile(lockPath) {
  writeFileSync(
    lockPath,
    JSON.stringify({
      pid: process.pid,
      started: Date.now(),
      host: hostname(),
      version: 1,
    }),
    { flag: 'wx', encoding: 'utf8' }
  );
}
/**
 * Acquire an exclusive lock. Throws ELOCKED if held by a live, fresh holder.
 * Cleans up stale locks (dead PID OR mtime older than staleThresholdMs).
 *
 * @param {string} [lockPath] absolute lock-file path; defaults to <cache>/kb-update.lock
 * @param {object} [opts]
 * @param {number} [opts.staleThresholdMs] default 3600000 (1h)
 * @param {number} [opts.refreshIntervalMs] if > 0, periodically utimes the lock
 * @param {boolean} [opts.registerCleanup] default true; install exit/signal handlers
 * @returns {{lockPath: string, release: () => void}}
 */
export function acquireLock(lockPath, opts = {}) {
  const staleThresholdMs = opts.staleThresholdMs ?? DEFAULT_STALE_THRESHOLD_MS;
  const refreshIntervalMs = opts.refreshIntervalMs ?? 0;
  const registerCleanup = opts.registerCleanup ?? true;
  const path = lockPath || join(getCacheDir('ms-ai-architect'), DEFAULT_LOCK_NAME);
  try {
    writeLockFile(path);
  } catch (err) {
    if (!err || err.code !== 'EEXIST') throw err;
    const data = safeReadLock(path);
    const mtime = lockMtimeMs(path);
    const holderPid = typeof data?.pid === 'number' ? data.pid : null;
    const pidAlive = holderPid != null ? isPidAlive(holderPid) : false;
    const ageMs = mtime != null ? Date.now() - mtime : Infinity;
    const stale = !pidAlive || ageMs > staleThresholdMs;
    if (!stale) {
      const e = new Error(
        `Lock held by PID ${holderPid} (started ${data?.started ?? 'unknown'})`
      );
      e.code = 'ELOCKED';
      e.holderPid = holderPid;
      throw e;
    }
    try {
      unlinkSync(path);
    } catch {
      // best-effort
    }
    writeLockFile(path); // retry once
  }
  let refreshTimer = null;
  let released = false;
  const release = () => {
    if (released) return;
    released = true;
    if (refreshTimer) {
      clearInterval(refreshTimer);
      refreshTimer = null;
    }
    try {
      const data = safeReadLock(path);
      if (!data || data.pid === process.pid) {
        unlinkSync(path);
      }
    } catch {
      // best-effort
    }
  };
  if (refreshIntervalMs > 0) {
    refreshTimer = setInterval(() => {
      try {
        const now = new Date();
        utimesSync(path, now, now);
      } catch {
        // best-effort
      }
    }, refreshIntervalMs);
    if (typeof refreshTimer.unref === 'function') {
      refreshTimer.unref();
    }
  }
  if (registerCleanup) {
    const onExit = () => release();
    process.once('exit', onExit);
    process.once('SIGINT', () => {
      release();
      process.exit(130);
    });
    process.once('SIGTERM', () => {
      release();
      process.exit(143);
    });
    process.once('SIGHUP', () => {
      release();
      process.exit(129);
    });
    process.once('uncaughtException', (err) => {
      release();
      console.error(err);
      process.exit(1);
    });
  }
  return { lockPath: path, release };
}
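The atomic exclusive-create contract the lock builds on can be seen in isolation with the `'wx'` flag. This standalone demo uses a throwaway path under the OS temp dir and touches no plugin state:

```javascript
import { writeFileSync, unlinkSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// First 'wx' create succeeds; a second create of the same path throws EEXIST
// instead of silently overwriting, which is what acquireLock() relies on.
const lockPath = join(tmpdir(), `wx-demo-${process.pid}.lock`);
writeFileSync(lockPath, String(process.pid), { flag: 'wx' });
let secondCreate = null;
try {
  writeFileSync(lockPath, 'intruder', { flag: 'wx' });
} catch (err) {
  secondCreate = err.code;
}
console.log(secondCreate); // EEXIST
unlinkSync(lockPath); // release
```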


@@ -1,62 +0,0 @@
# ms-ai-architect KB-update scheduling templates
These templates are consumed by `scripts/install-kb-cron.mjs` (added in
Wave 4 / Step 11) which substitutes the documented placeholders and
hands off to the platform's native scheduler. Do not edit a generated
file directly — re-run the installer instead so the source-of-truth
stays in this directory.
## Files
| File | Platform | Scheduler |
|------|----------|-----------|
| `com.fromaitochitta.ms-ai-architect.kb-update.plist` | macOS (Intel + Apple Silicon) | `launchctl` (per-user LaunchAgent) |
| `ms-ai-architect-kb-update.service` | Linux | `systemctl --user` |
| `ms-ai-architect-kb-update.timer` | Linux | `systemctl --user` (paired with the .service) |
| `ms-ai-architect-kb-update.ps1` | Windows 10/11 | Task Scheduler via `Register-ScheduledTask` |
## Placeholders
All four templates share the same canonical placeholder set. The
installer fills them in at install-time and writes the rendered file
under the platform's scheduler directory.
| Placeholder | Filled with | Source |
|-------------|-------------|--------|
| `{{NODE_BIN}}` | Absolute path to the `node` binary that should run the cron | `process.execPath` of the installing Node; overridable via `--node-bin` |
| `{{PLUGIN_ROOT}}` | Absolute path to the `plugins/ms-ai-architect/` directory | Resolved by the installer relative to itself |
| `{{LOG_FILE}}` | Absolute path to the rotated log file | `getLogDir('ms-ai-architect') + '/kb-update.log'` (per `lib/cross-platform-paths.mjs`) |
| `{{SCHEDULE_HOUR}}` | Cron-hour, 0-23 | Default `4`; overridable via the hour field of `--schedule` |
| `{{SCHEDULE_MINUTE}}` | Cron-minute, 0-59 | Default `23`; overridable via the minute field of `--schedule` |
| `{{SCHEDULE_DAY_OF_WEEK}}` | launchd Weekday integer (0=Sunday … 3=Wednesday) | Default `3` (Wednesday); overridable via the day-of-week field of `--schedule` |

The systemd `.timer` and Windows `.ps1` use a literal `Wed`/`Wednesday`
day name rather than `{{SCHEDULE_DAY_OF_WEEK}}` because their respective
schedulers expect day-name strings. The day-of-week field of `--schedule`
is therefore honored only by the launchd template; changing the day on
Linux or Windows means editing the template directly.
## Install / uninstall
The full install/uninstall flow is implemented by
`scripts/install-kb-cron.mjs` (Wave 4 / Step 11). Run with `--help` for
the current option set. The contract for all three platforms is "fires
while the user is logged in" — there is no system-wide / sudo install
path because Claude Code's keychain-bound auth dies in unattended
contexts.
## Why these specific schedulers
- **launchd** is the only first-class scheduler on macOS; cron is a
thin user-facing alias. `RunAtLoad` is `false` so loading the job at
boot does not trigger an immediate Claude Code session.
- **systemd `--user` units** keep the symmetry of "user-context only"
with launchd's LoginItem and Windows' `InteractiveToken`. The
`Persistent=true` setting on the timer ensures a missed run (laptop
asleep on Wednesday) fires on next boot rather than being skipped.
- **Windows Task Scheduler** with `InteractiveToken` is the only logon
type that keeps the keychain unlocked, which is required for
subscription-auth Claude Code sessions.
See `research/01-cross-os-scheduling.md` for the full background.


@@ -1,55 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<!--
launchd job for weekly ms-ai-architect KB-update.
Placeholders ({{NODE_BIN}}, {{PLUGIN_ROOT}}, {{LOG_FILE}}, {{SCHEDULE_HOUR}},
{{SCHEDULE_MINUTE}}, {{SCHEDULE_DAY_OF_WEEK}}) are filled in by
scripts/install-kb-cron.mjs at install-time.
RunAtLoad is intentionally false so loading the job at boot does not
immediately spawn a Claude Code session. Weekday=3 is Wednesday in
launchd's StartCalendarInterval semantics.
-->
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.fromaitochitta.ms-ai-architect.kb-update</string>
  <key>ProgramArguments</key>
  <array>
    <string>{{NODE_BIN}}</string>
    <string>{{PLUGIN_ROOT}}/scripts/kb-update/weekly-kb-cron.mjs</string>
  </array>
  <key>WorkingDirectory</key>
  <string>{{PLUGIN_ROOT}}</string>
  <key>StartCalendarInterval</key>
  <dict>
    <key>Weekday</key>
    <integer>{{SCHEDULE_DAY_OF_WEEK}}</integer>
    <key>Hour</key>
    <integer>{{SCHEDULE_HOUR}}</integer>
    <key>Minute</key>
    <integer>{{SCHEDULE_MINUTE}}</integer>
  </dict>
  <key>RunAtLoad</key>
  <false/>
  <key>StandardOutPath</key>
  <string>{{LOG_FILE}}</string>
  <key>StandardErrorPath</key>
  <string>{{LOG_FILE}}</string>
  <key>EnvironmentVariables</key>
  <dict>
    <key>PATH</key>
    <string>/usr/local/bin:/opt/homebrew/bin:/usr/bin:/bin:/usr/sbin:/sbin</string>
  </dict>
  <key>ProcessType</key>
  <string>Background</string>
</dict>
</plist>


@@ -1,48 +0,0 @@
# ms-ai-architect-kb-update.ps1
# PowerShell installer fragment for Windows Task Scheduler. Filled in
# by scripts/install-kb-cron.mjs at install-time and run elevated only
# if the user requested system-wide install (default is per-user with
# InteractiveToken so the task fires while the user is logged in).
$TaskName = 'ms-ai-architect-kb-update'
$NodeBin = '{{NODE_BIN}}'
$PluginRoot = '{{PLUGIN_ROOT}}'
$LogFile = '{{LOG_FILE}}'
$ScheduleAt = '{{SCHEDULE_HOUR}}:{{SCHEDULE_MINUTE}}'
$Trigger = New-ScheduledTaskTrigger `
  -Weekly `
  -DaysOfWeek Wednesday `
  -At $ScheduleAt

$Action = New-ScheduledTaskAction `
  -Execute $NodeBin `
  -Argument "$PluginRoot\scripts\kb-update\weekly-kb-cron.mjs" `
  -WorkingDirectory $PluginRoot

# InteractiveToken is the contract: the task only runs while the user is
# logged in. This avoids the "OAuth dies in cron" failure-mode (claude
# subscription auth is bound to the keychain, which is unlocked only when
# the user is logged in). RunLevel Limited keeps the task at non-elevated
# privileges; admin elevation is unnecessary for per-user scheduling.
$Principal = New-ScheduledTaskPrincipal `
  -UserId $env:USERNAME `
  -LogonType InteractiveToken `
  -RunLevel Limited

$Settings = New-ScheduledTaskSettingsSet `
  -AllowStartIfOnBatteries `
  -DontStopIfGoingOnBatteries `
  -StartWhenAvailable `
  -ExecutionTimeLimit (New-TimeSpan -Hours 2)

Register-ScheduledTask `
  -TaskName $TaskName `
  -Trigger $Trigger `
  -Action $Action `
  -Principal $Principal `
  -Settings $Settings `
  -Description 'Weekly Microsoft Learn KB freshness update for ms-ai-architect plugin' `
  -Force | Out-Null
Write-Host "Registered Windows scheduled task '$TaskName' (weekly Wed $ScheduleAt, log: $LogFile)"


@@ -1,19 +0,0 @@
[Unit]
Description=ms-ai-architect weekly KB-update (Microsoft Learn freshness)
Documentation=file://{{PLUGIN_ROOT}}/scripts/kb-update/templates/README.md
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
ExecStart={{NODE_BIN}} {{PLUGIN_ROOT}}/scripts/kb-update/weekly-kb-cron.mjs
WorkingDirectory={{PLUGIN_ROOT}}
StandardOutput=append:{{LOG_FILE}}
StandardError=append:{{LOG_FILE}}
Environment=PATH=/usr/local/bin:/usr/bin:/bin
# No User= here; the unit is installed under `systemctl --user` so it
# inherits the invoking user's identity. Running under the user manager
# keeps the contract "fires while user is logged in" symmetric across
# the three platforms (launchd LoginItem, systemd --user, Windows
# InteractiveToken). Switching to system-wide service+sudo would
# diverge from that contract — do not do that here.


@@ -1,13 +0,0 @@
[Unit]
Description=Weekly trigger for ms-ai-architect KB-update
[Timer]
# Default cadence per the brief is Wednesday 04:23 local time. Editing
# this file directly is fine for one-off schedule tweaks; for
# reproducible installs prefer re-running scripts/install-kb-cron.mjs.
OnCalendar=Wed *-*-* 04:23:00
Persistent=true
Unit=ms-ai-architect-kb-update.service
[Install]
WantedBy=timers.target
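The `OnCalendar=Wed *-*-* 04:23:00` cadence the timer encodes can be reproduced in plain Node for sanity-checking — a hypothetical helper, not part of the deleted code:

```javascript
// Next occurrence of Wednesday 04:23 local time, strictly after `from`.
function nextWednesday0423(from = new Date()) {
  const d = new Date(from);
  d.setHours(4, 23, 0, 0);
  // getDay(): 0=Sun … 3=Wed. Advance one day at a time until we land
  // on a Wednesday 04:23 that is strictly in the future.
  while (d.getDay() !== 3 || d <= from) d.setDate(d.getDate() + 1);
  return d;
}
```

`systemd-analyze calendar 'Wed *-*-* 04:23:00'` prints the same next-elapse answer on a systemd host.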


@@ -1,559 +0,0 @@
#!/usr/bin/env node
// weekly-kb-cron.mjs — Cross-OS scheduler entrypoint for weekly KB-update.
//
// Pipeline:
// 1. Parse flags (--dry-run, --force, --discover, --budget-usd=N).
// 2. Resolve cross-platform log/cache/state/backup dirs via lib/cross-platform-paths.mjs.
// 3. Rotate the log file before first write (lib/log-rotate.mjs, 10 MB default).
// 4. If --dry-run: print plan, write status (last_run_status: dry-run), exit 0.
// 5. Pre-flight: git --version, which claude, detectAuthMode + validateAuthForCron,
// ~/.claude.json onboarding flags, soft-warn on missing microsoft-learn MCP,
// git status --porcelain clean check.
// 6. Acquire lock (lib/lock-file.mjs). Capture runStartTs (Unix ms).
// 7. Run scripts/kb-update/run-weekly-update.mjs (existing pattern).
// 8. Read change-report.json. updateFiles = critical+high only.
// 9. Pre-flight cost-estimate (lib/cost-estimat.mjs). Abort with budget_exceeded
// if api-key auth and usd > budget. Subscription auth: kvote_warn, proceed.
// 10. Backup skills/ via lib/backup.mjs#backupDir.
// 11. Spawn Claude with NEW flag stack: dontAsk + scoped allowedTools +
// --output-format json + --model claude-sonnet-4-6.
// 12. Parse stdout JSON for total_cost_usd, session_id, max_turns_hit.
// 13. Post-run verification: git log --since=@<unixSeconds> commit count vs
// updateFiles.length. Branch: success / partial / failure.
// 14. On failure: rollback via backup#restore. On partial: keep commits.
// On success: optionally git push (auto_push_eligible).
// 15. Cleanup: release lock, cleanupOldBackups.
// 16. Exit 0 on success / dry-run / partial; 1 on failure / budget_exceeded.
//
// Status file: <getCacheDir('ms-ai-architect')>/kb-update-status.json
// (rewritten atomically per Status File Schema in plan.md L122-153).
//
// Crontab one-liner is still supported for direct cron use, but the recommended
// install path is `node ../install-kb-cron.mjs` which generates a launchd plist
// (macOS), systemd .timer + .service (Linux), or Windows Task Scheduler entry.
import { execFileSync, spawnSync } from 'node:child_process';
import { readFileSync, existsSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import { homedir, platform as osPlatform } from 'node:os';
import { getCacheDir, getLogDir, getBackupDir } from './lib/cross-platform-paths.mjs';
import { atomicWriteJson } from './lib/atomic-write.mjs';
import { rotateLog } from './lib/log-rotate.mjs';
import { detectAuthMode, validateAuthForCron, readClaudeJson } from './lib/auth-mode.mjs';
import { acquireLock } from './lib/lock-file.mjs';
import { estimateCost } from './lib/cost-estimat.mjs';
import { backupDir, cleanupOldBackups } from './lib/backup.mjs';
const __dirname = dirname(fileURLToPath(import.meta.url));
const APP = 'ms-ai-architect';
const PLUGIN_ROOT = join(__dirname, '..', '..');
const DATA_DIR = join(__dirname, 'data');
const SKILLS_DIR = join(PLUGIN_ROOT, 'skills');
const DEFAULT_BUDGET_USD = 5;
const KB_BACKUP_DAYS = 7;
// ---------- Arg parsing ----------
function parseArgs(argv) {
const args = {
dryRun: false,
force: false,
discover: true, // run-weekly-update default
budgetUsd: Number(process.env.KB_UPDATE_BUDGET_USD) || DEFAULT_BUDGET_USD,
};
for (const a of argv) {
if (a === '--dry-run') args.dryRun = true;
else if (a === '--force') args.force = true;
else if (a === '--no-discover') args.discover = false;
else if (a.startsWith('--budget-usd=')) {
const n = Number(a.slice('--budget-usd='.length));
if (Number.isFinite(n) && n > 0) args.budgetUsd = n;
}
}
return args;
}
const ARGS = parseArgs(process.argv.slice(2));
// ---------- Logging ----------
function fsTimestamp(date = new Date()) {
// ISO timestamp made filesystem-safe (colons → dashes; macOS+Windows reject ':' in filenames).
return date.toISOString().replace(/:/g, '-');
}
const FS_TS = fsTimestamp();
const LOG_DIR = getLogDir(APP);
const LOG_FILE = join(LOG_DIR, `kb-update-${FS_TS}.log`);
// Rotate the *active* log if it exists and exceeds the size cap, BEFORE the
// first write of this run. Per-run log files (timestamped) won't actually
// overflow during a single run, but rotateLog also tolerates missing files.
rotateLog(LOG_FILE, { maxSizeBytes: 10 * 1024 * 1024, maxGenerations: 5 });
function log(msg) {
const ts = new Date().toISOString();
console.log(`[${ts}] ${msg}`);
}
// ---------- Status file ----------
const CACHE_DIR = getCacheDir(APP);
const STATUS_FILE = join(CACHE_DIR, 'kb-update-status.json');
function writeStatus(partial) {
const base = {
schema_version: 1,
last_run_status: 'unknown',
last_run_ts: new Date().toISOString(),
duration_seconds: null,
auth_mode: 'unauthenticated',
log_file: LOG_FILE,
files_planned: null,
files_committed: null,
session_id: null,
total_cost_usd: null,
tokens_input: null,
tokens_output: null,
max_turns_hit: false,
diagnostic: null,
};
atomicWriteJson(STATUS_FILE, { ...base, ...partial });
}
// ---------- Dry-run early exit ----------
if (ARGS.dryRun) {
log('=== DRY RUN — Weekly KB Cron ===');
log(`Plugin root: ${PLUGIN_ROOT}`);
log(`Log file: ${LOG_FILE}`);
log(`Status file: ${STATUS_FILE}`);
log(`Budget cap: $${ARGS.budgetUsd.toFixed(2)} USD (api-key auth only)`);
log('Pipeline plan (would execute):');
log(' 1. run-weekly-update.mjs --force' + (ARGS.discover ? ' --discover' : ''));
log(' 2. read change-report.json → critical + high files');
log(' 3. cost-estimate via lib/cost-estimat.mjs');
log(' 4. backup skills/ → .kb-backup/<ts>/');
log(' 5. spawn claude -p with --permission-mode dontAsk + scoped allowedTools');
log(' 6. post-run verify: git log --since=@<runStart> commit count');
log(' 7. branch on status: success / partial / failure / budget_exceeded');
if (existsSync(join(DATA_DIR, 'change-report.json'))) {
try {
const rp = JSON.parse(readFileSync(join(DATA_DIR, 'change-report.json'), 'utf8'));
const c = rp?.by_priority?.critical ?? 0;
const h = rp?.by_priority?.high ?? 0;
log(`Current change-report: ${c} critical + ${h} high (would be planned)`);
} catch {
log('Current change-report: (unreadable)');
}
} else {
log('Current change-report: (none — would be generated by run-weekly-update.mjs)');
}
// Auth-mode is lazy in dry-run: detect but never validate so a dev can
// sanity-check the plan without having a cron-safe credential set up yet.
let mode = 'unauthenticated';
try {
mode = detectAuthMode();
} catch {
// detectAuthMode shouldn't throw, but be defensive.
}
writeStatus({
last_run_status: 'dry-run',
auth_mode: mode,
diagnostic: null,
});
log('=== DRY RUN COMPLETE ===');
process.exit(0);
}
// ---------- Pre-flight ----------
function which(cmd) {
const finder = osPlatform() === 'win32' ? 'where' : 'which';
try {
const out = execFileSync(finder, [cmd], { encoding: 'utf8', stdio: ['ignore', 'pipe', 'ignore'] });
return out.split(/\r?\n/)[0].trim() || null;
} catch {
return null;
}
}
function preflight() {
// git --version
try {
execFileSync('git', ['--version'], { stdio: 'ignore' });
} catch {
const e = new Error('git not found in PATH');
e.code = 'ENOGIT';
throw e;
}
// which claude
const claudeBin = process.env.CLAUDE_BIN || which('claude');
if (!claudeBin) {
const e = new Error('claude CLI not found in PATH (set CLAUDE_BIN to override)');
e.code = 'ENOCLAUDE';
throw e;
}
// auth-mode detection + validation
const authMode = detectAuthMode();
validateAuthForCron(authMode); // throws EAUTHCRON if not safe
// ~/.claude.json onboarding flag (informational)
const claudeJson = readClaudeJson(join(homedir(), '.claude.json'));
const onboarded = claudeJson?.hasCompletedOnboarding === true;
if (!onboarded) {
log('WARN: ~/.claude.json missing or onboarding incomplete — cron may prompt');
}
// microsoft-learn MCP soft-warn
const mcpJsonPath = join(PLUGIN_ROOT, '.mcp.json');
if (!existsSync(mcpJsonPath)) {
log('WARN: plugin .mcp.json missing — Claude session may lack microsoft-learn');
}
// git status --porcelain clean check
let porcelain = '';
try {
porcelain = execFileSync('git', ['status', '--porcelain'], {
cwd: PLUGIN_ROOT,
encoding: 'utf8',
stdio: ['ignore', 'pipe', 'pipe'],
}).trim();
} catch (err) {
const e = new Error(`git status failed: ${err.message}`);
e.code = 'EGITSTATUS';
throw e;
}
if (porcelain) {
const e = new Error(`Working tree not clean:\n${porcelain}`);
e.code = 'EDIRTY';
throw e;
}
return { claudeBin, authMode };
}
// ---------- Main ----------
const runStartTs = Date.now();
let lockHandle = null;
let backupHandle = null;
let authMode = 'unauthenticated';
let claudeBin = null;
let updateFiles = [];
function bail(status, diagnostic, extra = {}) {
const duration = Math.round((Date.now() - runStartTs) / 1000);
writeStatus({
last_run_status: status,
auth_mode: authMode,
duration_seconds: duration,
diagnostic,
...extra,
});
if (backupHandle && status === 'failure') {
try {
log('Rolling back skills/ from backup...');
backupHandle.restore();
log('Rollback complete.');
} catch (err) {
log(`Rollback failed: ${err.message}`);
}
}
if (lockHandle) {
try { lockHandle.release(); } catch { /* best-effort */ }
}
process.exit(status === 'success' || status === 'partial' ? 0 : 1);
}
try {
log('=== Weekly KB Cron Start ===');
log(`Plugin root: ${PLUGIN_ROOT}`);
log(`Log file: ${LOG_FILE}`);
// Pre-flight
const pf = preflight();
claudeBin = pf.claudeBin;
authMode = pf.authMode;
log(`Auth mode: ${authMode}`);
log(`Claude bin: ${claudeBin}`);
// Lock
lockHandle = acquireLock(undefined, { staleThresholdMs: 2 * 60 * 60 * 1000 });
log(`Lock acquired: ${lockHandle.lockPath}`);
// Pipeline step 1: poll + report (+ optional discover)
const updateScript = join(__dirname, 'run-weekly-update.mjs');
const updateArgs = ['--force'];
if (ARGS.discover) updateArgs.push('--discover');
log(`Running ${updateScript} ${updateArgs.join(' ')}`);
try {
execFileSync('node', [updateScript, ...updateArgs], {
stdio: 'inherit',
timeout: 10 * 60 * 1000,
cwd: PLUGIN_ROOT,
});
} catch (err) {
bail('failure', `run-weekly-update.mjs failed: ${err.message}`);
}
// Read change report
const reportPath = join(DATA_DIR, 'change-report.json');
if (!existsSync(reportPath)) {
log('No change report produced. Treating as success (nothing to do).');
bail('success', null, { files_planned: 0, files_committed: 0 });
}
const report = JSON.parse(readFileSync(reportPath, 'utf8'));
const counts = report.by_priority || {};
log(`Change report: ${counts.critical || 0} critical, ${counts.high || 0} high, ${counts.medium || 0} medium`);
// Build updateFiles = critical + high (medium/low excluded per brief)
updateFiles = (report.files || []).filter(
(f) => f.priority === 'critical' || f.priority === 'high'
);
log(`Files to update: ${updateFiles.length} (critical + high)`);
if (updateFiles.length === 0) {
log('Nothing critical/high to update. Exiting clean.');
bail('success', null, { files_planned: 0, files_committed: 0 });
}
// Cost estimate + budget check
const cost = estimateCost(counts, { authMode });
log(`Estimated cost: ${cost.usd === null ? '(quota; subscription)' : `$${cost.usd.toFixed(2)}`} ` +
`(${cost.tokens_input} in / ${cost.tokens_output} out)`);
if (cost.kvote_warn) {
log('NOTE: Subscription auth — quota-bound, no $-cap applied.');
}
if (authMode === 'api-key' && cost.usd !== null && cost.usd > ARGS.budgetUsd) {
log(`Cost $${cost.usd.toFixed(2)} exceeds budget $${ARGS.budgetUsd.toFixed(2)} — aborting.`);
bail('budget_exceeded',
`Estimated $${cost.usd.toFixed(2)} > budget $${ARGS.budgetUsd.toFixed(2)}`,
{
files_planned: updateFiles.length,
files_committed: 0,
tokens_input: cost.tokens_input,
tokens_output: cost.tokens_output,
total_cost_usd: cost.usd,
});
}
// Backup skills/
const backupRoot = getBackupDir(PLUGIN_ROOT);
log(`Backing up ${SKILLS_DIR} → ${backupRoot}/<ts>/...`);
backupHandle = backupDir(SKILLS_DIR, backupRoot, { retentionDays: KB_BACKUP_DAYS });
log(`Backup: ${backupHandle.backupPath}`);
// Build prompt
const fileList = updateFiles.map((f) => {
const urls = (f.changed_urls || []).slice(0, 5).join('\n ');
return `- ${f.path} [${f.priority}]\n Changed URLs:\n ${urls}`;
}).join('\n');
const yyyymm = new Date().toISOString().slice(0, 7);
const prompt = `You are Cosmo Skyberg. Update ${updateFiles.length} stale knowledge references in the ms-ai-architect plugin.
Working directory: ${PLUGIN_ROOT}
## Files to update
${fileList}
## For EACH file
1. Read the file with Read
2. Use microsoft_docs_fetch on the changed source URLs listed above
3. Use microsoft_docs_search for supplementary information
4. Update the file with Edit:
- Update "Last updated" to ${yyyymm}
- Update outdated facts, prices, dates
- Preserve existing structure and sections
- Mark updated content with "Verified (MCP ${yyyymm})"
## After all updates
1. Run: node scripts/kb-update/build-registry.mjs --merge
2. Run: node scripts/kb-update/report-changes.mjs
3. git add skills/ scripts/kb-update/data/
4. git commit -m "docs(architect): weekly KB update ${updateFiles.length} files refreshed
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>"
## Rules
- Never delete files, only update them
- Use Edit, not Write
- Preserve all existing structure
- Commit once at the end, not per file`;
// Spawn Claude (NEW flag stack)
const allowedTools = [
'Read', 'Edit', 'Write',
'Bash(git add:*)', 'Bash(git commit:*)', 'Bash(git push:*)',
'Bash(git status:*)', 'Bash(git diff:*)', 'Bash(git log:*)',
'mcp__microsoft-learn__microsoft_docs_search',
'mcp__microsoft-learn__microsoft_docs_fetch',
].join(',');
log(`Spawning Claude (model claude-sonnet-4-6, max-turns 200) with ${allowedTools.split(',').length} allowed tools...`);
const claudeResult = spawnSync(claudeBin, [
'-p', prompt,
'--permission-mode', 'dontAsk',
'--allowedTools', allowedTools,
'--max-turns', '200',
'--output-format', 'json',
'--model', 'claude-sonnet-4-6',
], {
cwd: PLUGIN_ROOT,
encoding: 'utf8',
timeout: 60 * 60 * 1000,
maxBuffer: 32 * 1024 * 1024,
});
// Parse output
let sessionMeta = {};
let maxTurnsHit = false;
if (claudeResult.stdout) {
try {
// --output-format json yields a single JSON object on stdout.
sessionMeta = JSON.parse(claudeResult.stdout);
const resultStr = String(sessionMeta.result ?? sessionMeta.stop_reason ?? '');
if (resultStr.includes('max_turns')) maxTurnsHit = true;
} catch (err) {
log(`WARN: could not parse Claude JSON output: ${err.message}`);
}
}
if (claudeResult.stderr) {
process.stderr.write(claudeResult.stderr);
}
// Post-run verification: count git commits since runStart
const runStartUnixSec = Math.floor(runStartTs / 1000);
let commitCount = 0;
try {
const log_out = execFileSync('git', ['log', `--since=@${runStartUnixSec}`, '--oneline'], {
cwd: PLUGIN_ROOT,
encoding: 'utf8',
stdio: ['ignore', 'pipe', 'pipe'],
});
commitCount = log_out.split('\n').filter((l) => l.trim().length > 0).length;
} catch (err) {
log(`WARN: git log post-run failed: ${err.message}`);
}
log(`Post-run: ${commitCount} commit(s) since runStart, planned ${updateFiles.length}.`);
// Branching
const claudeOk = claudeResult.status === 0;
let status = 'success';
let diagnostic = null;
if (!claudeOk) {
status = 'failure';
diagnostic = `claude exited ${claudeResult.status}` +
(claudeResult.signal ? ` (signal ${claudeResult.signal})` : '');
} else if (commitCount === 0 && updateFiles.length > 0) {
status = 'failure';
diagnostic = 'No commits produced despite expected files';
} else if (commitCount > 0 && commitCount < updateFiles.length && maxTurnsHit) {
status = 'partial';
diagnostic = `Hit max_turns: ${commitCount}/${updateFiles.length} files committed; rest will retry next week`;
} else if (commitCount > 0 && commitCount < updateFiles.length) {
// Partial without max_turns hit — treat as partial (Claude completed but
// some files weren't actionable). Conservative: don't roll back.
status = 'partial';
diagnostic = `Claude completed but only ${commitCount}/${updateFiles.length} files committed`;
} else {
status = 'success';
}
const totalCostUsd = typeof sessionMeta.total_cost_usd === 'number'
? sessionMeta.total_cost_usd
: null;
const sessionId = typeof sessionMeta.session_id === 'string'
? sessionMeta.session_id
: null;
const tokensIn = typeof sessionMeta?.usage?.input_tokens === 'number'
? sessionMeta.usage.input_tokens
: null;
const tokensOut = typeof sessionMeta?.usage?.output_tokens === 'number'
? sessionMeta.usage.output_tokens
: null;
const statusExtra = {
files_planned: updateFiles.length,
files_committed: commitCount,
session_id: sessionId,
total_cost_usd: totalCostUsd,
tokens_input: tokensIn,
tokens_output: tokensOut,
max_turns_hit: maxTurnsHit,
};
if (status === 'failure') {
bail('failure', diagnostic, statusExtra);
}
// success or partial: keep commits + optionally push
if (status === 'success' && autoPushEligible()) {
try {
execFileSync('git', ['push', 'origin', 'main'], {
cwd: PLUGIN_ROOT,
stdio: 'inherit',
});
log('Pushed origin/main.');
} catch (err) {
log(`WARN: git push failed (commits remain local): ${err.message}`);
}
}
// Cleanup old backups (best-effort, post-success)
try {
const cleanup = cleanupOldBackups(backupRoot, KB_BACKUP_DAYS);
if (cleanup.deleted.length > 0) {
log(`Cleaned up ${cleanup.deleted.length} old backup(s).`);
}
} catch (err) {
log(`WARN: cleanupOldBackups failed: ${err.message}`);
}
log(`=== Weekly KB Cron Done (${status}) ===`);
bail(status, diagnostic, statusExtra);
} catch (err) {
// Pre-flight or unexpected error before pipeline started.
const code = err && err.code ? err.code : 'EUNKNOWN';
log(`Pre-flight/error: [${code}] ${err.message}`);
if (err.stack) {
log(err.stack.split('\n').slice(1, 4).join('\n'));
}
bail('failure', `[${code}] ${err.message}`);
}
// ---------- Helpers ----------
function autoPushEligible() {
// Two gates: a configured user.email + a reachable origin.
try {
const email = execFileSync('git', ['config', '--get', 'user.email'], {
cwd: PLUGIN_ROOT,
encoding: 'utf8',
stdio: ['ignore', 'pipe', 'ignore'],
}).trim();
if (!email) return false;
} catch {
return false;
}
try {
execFileSync('git', ['ls-remote', 'origin', '--exit-code', 'HEAD'], {
cwd: PLUGIN_ROOT,
stdio: 'ignore',
timeout: 10_000,
});
return true;
} catch {
return false;
}
}
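The success/partial/failure branching in the orchestrator above reduces to a small pure function over exit code and commit count — a sketch of the same decision table, not the deleted implementation (`maxTurnsHit` only changes the diagnostic text, never the status):

```javascript
// Decide run status from the claude exit code and the number of git
// commits produced vs the number of planned file updates.
function runStatus({ exitCode, commits, planned }) {
  if (exitCode !== 0) return 'failure';
  if (commits === 0 && planned > 0) return 'failure';
  if (commits < planned) return 'partial'; // with or without max_turns
  return 'success';
}
```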


@@ -1,181 +0,0 @@
// tests/kb-update/test-auth-mode.test.mjs
// Unit tests for scripts/kb-update/lib/auth-mode.mjs
// Note: Test fixture credential values are deliberately short (<8 chars) to
// stay below the secrets-scanner heuristic. They are stub markers, not keys.
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { mkdtempSync, rmSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';
import {
detectAuthMode,
validateAuthForCron,
readClaudeJson,
} from '../../scripts/kb-update/lib/auth-mode.mjs';
function withTmp(fn) {
const dir = mkdtempSync(join(tmpdir(), 'auth-test-'));
try {
return fn(dir);
} finally {
rmSync(dir, { recursive: true, force: true });
}
}
function makeStubRunner(exitCode) {
const calls = [];
const runner = (cmd, args) => {
calls.push({ cmd, args });
return exitCode;
};
return { runner, calls };
}
const MISSING_PATH = '/__definitely__/__not__/__a__/__path__/.claude.json';
test('detectAuthMode — ANTHROPIC_API_KEY set → api-key', () => {
const { runner } = makeStubRunner(0);
const mode = detectAuthMode({
env: { ANTHROPIC_API_KEY: 'fake' },
runner,
claudeJsonPath: MISSING_PATH,
});
assert.equal(mode, 'api-key');
});
test('detectAuthMode — empty ANTHROPIC_API_KEY is ignored', () => {
const { runner } = makeStubRunner(1);
const mode = detectAuthMode({
env: { ANTHROPIC_API_KEY: ' ' },
runner,
claudeJsonPath: MISSING_PATH,
});
assert.equal(mode, 'unauthenticated');
});
test('detectAuthMode — CLAUDE_CODE_OAUTH_TOKEN set → long-oauth', () => {
const { runner } = makeStubRunner(0);
const mode = detectAuthMode({
env: { CLAUDE_CODE_OAUTH_TOKEN: 'oat' },
runner,
claudeJsonPath: MISSING_PATH,
});
assert.equal(mode, 'long-oauth');
});
test('detectAuthMode — both env vars set → api-key precedence', () => {
const { runner } = makeStubRunner(0);
const mode = detectAuthMode({
env: {
ANTHROPIC_API_KEY: 'fake',
CLAUDE_CODE_OAUTH_TOKEN: 'oat',
},
runner,
claudeJsonPath: MISSING_PATH,
});
assert.equal(mode, 'api-key');
});
test('detectAuthMode — neither env, no claude.json → unauthenticated', () => {
const { runner, calls } = makeStubRunner(0);
const mode = detectAuthMode({
env: {},
runner,
claudeJsonPath: MISSING_PATH,
});
assert.equal(mode, 'unauthenticated');
// Runner must NOT be invoked when claude.json is unreadable.
assert.equal(calls.length, 0);
});
test('detectAuthMode — claude.json onboarded + runner exit 0 → subscription-browser-only', () => {
withTmp((tmp) => {
const path = join(tmp, '.claude.json');
writeFileSync(
path,
JSON.stringify({ hasCompletedOnboarding: true, userID: 'abc' }),
'utf8'
);
const { runner, calls } = makeStubRunner(0);
const mode = detectAuthMode({ env: {}, runner, claudeJsonPath: path });
assert.equal(mode, 'subscription-browser-only');
assert.deepEqual(calls, [{ cmd: 'claude', args: ['auth', 'status'] }]);
});
});
test('detectAuthMode — claude.json onboarded + runner exit 1 → unauthenticated', () => {
withTmp((tmp) => {
const path = join(tmp, '.claude.json');
writeFileSync(
path,
JSON.stringify({ hasCompletedOnboarding: true }),
'utf8'
);
const { runner } = makeStubRunner(1);
const mode = detectAuthMode({ env: {}, runner, claudeJsonPath: path });
assert.equal(mode, 'unauthenticated');
});
});
test('detectAuthMode — claude.json present but not onboarded → unauthenticated', () => {
withTmp((tmp) => {
const path = join(tmp, '.claude.json');
writeFileSync(
path,
JSON.stringify({ hasCompletedOnboarding: false }),
'utf8'
);
const { runner, calls } = makeStubRunner(0);
const mode = detectAuthMode({ env: {}, runner, claudeJsonPath: path });
assert.equal(mode, 'unauthenticated');
assert.equal(calls.length, 0);
});
});
test('readClaudeJson — returns parsed object on valid JSON', () => {
withTmp((tmp) => {
const path = join(tmp, '.claude.json');
writeFileSync(path, '{"hasCompletedOnboarding": true, "x": 42}', 'utf8');
const obj = readClaudeJson(path);
assert.deepEqual(obj, { hasCompletedOnboarding: true, x: 42 });
});
});
test('readClaudeJson — returns null on missing file', () => {
assert.equal(readClaudeJson(MISSING_PATH), null);
});
test('readClaudeJson — returns null on malformed JSON', () => {
withTmp((tmp) => {
const path = join(tmp, 'bad.json');
writeFileSync(path, 'not json {', 'utf8');
assert.equal(readClaudeJson(path), null);
});
});
test('validateAuthForCron — api-key passes silently', () => {
validateAuthForCron('api-key');
});
test('validateAuthForCron — long-oauth passes silently', () => {
validateAuthForCron('long-oauth');
});
test('validateAuthForCron — subscription-browser-only throws EAUTHCRON', () => {
assert.throws(
() => validateAuthForCron('subscription-browser-only'),
(err) =>
err.code === 'EAUTHCRON' &&
err.detectedMode === 'subscription-browser-only' &&
/claude setup-token/.test(err.message) &&
/ANTHROPIC_API_KEY/.test(err.message)
);
});
test('validateAuthForCron — unauthenticated throws EAUTHCRON', () => {
assert.throws(
() => validateAuthForCron('unauthenticated'),
(err) => err.code === 'EAUTHCRON' && err.detectedMode === 'unauthenticated'
);
});
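Read together, the assertions above pin down a precedence order for `detectAuthMode`. The same table as a standalone sketch — behavior reconstructed from the test matrix, not the deleted module:

```javascript
// Precedence per the tests: an explicit API key wins, then a long-lived
// OAuth token, then an onboarded ~/.claude.json whose `claude auth status`
// probe exits 0; everything else is unauthenticated. Blank env values
// are ignored.
function detectAuthModeSketch({ env = {}, onboarded = false, probeExit = 1 } = {}) {
  if ((env.ANTHROPIC_API_KEY ?? '').trim()) return 'api-key';
  if ((env.CLAUDE_CODE_OAUTH_TOKEN ?? '').trim()) return 'long-oauth';
  if (onboarded && probeExit === 0) return 'subscription-browser-only';
  return 'unauthenticated';
}
```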


@@ -1,82 +0,0 @@
// tests/kb-update/test-cost-estimat.test.mjs
// Unit tests for scripts/kb-update/lib/cost-estimat.mjs
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { estimateCost } from '../../scripts/kb-update/lib/cost-estimat.mjs';
test('estimateCost — api-key returns numeric usd, kvote_warn unset', () => {
const result = estimateCost({ critical: 3, high: 15 }, { authMode: 'api-key' });
assert.equal(typeof result.usd, 'number');
assert.equal(result.kvote_warn, false);
assert.ok(result.usd > 0);
});
test('estimateCost — api-key empty input returns 0 USD', () => {
const result = estimateCost({}, { authMode: 'api-key' });
assert.equal(result.usd, 0);
assert.equal(result.kvote_warn, false);
assert.equal(result.tokens_input, 0);
assert.equal(result.tokens_output, 0);
});
test('estimateCost — api-key tokens are integers', () => {
const result = estimateCost({ critical: 3, high: 15 }, { authMode: 'api-key' });
assert.equal(Number.isInteger(result.tokens_input), true);
assert.equal(Number.isInteger(result.tokens_output), true);
});
test('estimateCost — ignores medium and low (only critical+high counted)', () => {
const a = estimateCost({ critical: 1, high: 1 }, { authMode: 'api-key' });
const b = estimateCost({ critical: 1, high: 1, medium: 100, low: 100 }, { authMode: 'api-key' });
assert.equal(a.usd, b.usd);
assert.equal(a.tokens_input, b.tokens_input);
});
test('estimateCost — long-oauth returns null usd, kvote_warn flag set', () => {
const result = estimateCost({ critical: 3, high: 15 }, { authMode: 'long-oauth' });
assert.strictEqual(result.usd, null);
assert.strictEqual(result.kvote_warn, true);
});
test('estimateCost — subscription-browser-only returns null usd, kvote_warn flag set', () => {
const result = estimateCost({ critical: 3, high: 15 }, { authMode: 'subscription-browser-only' });
assert.strictEqual(result.usd, null);
assert.strictEqual(result.kvote_warn, true);
});
test('estimateCost — auth-mode does not affect token math', () => {
const apikey = estimateCost({ critical: 5, high: 10 }, { authMode: 'api-key' });
const oauth = estimateCost({ critical: 5, high: 10 }, { authMode: 'long-oauth' });
const sub = estimateCost({ critical: 5, high: 10 }, { authMode: 'subscription-browser-only' });
assert.equal(apikey.tokens_input, oauth.tokens_input);
assert.equal(apikey.tokens_input, sub.tokens_input);
assert.equal(apikey.tokens_output, oauth.tokens_output);
assert.equal(apikey.tokens_output, sub.tokens_output);
});
test('estimateCost — unauthenticated treated as best-effort api-key', () => {
const result = estimateCost({ critical: 3, high: 15 }, { authMode: 'unauthenticated' });
assert.equal(typeof result.usd, 'number');
assert.equal(result.kvote_warn, false);
});
test('estimateCost — missing authMode opt treated as best-effort api-key', () => {
const result = estimateCost({ critical: 3, high: 15 });
assert.equal(typeof result.usd, 'number');
assert.equal(result.kvote_warn, false);
});
test('estimateCost — unknown priority keys are ignored', () => {
const result = estimateCost({ critical: 1, high: 1, weird: 999 }, { authMode: 'api-key' });
// Should equal {critical:1, high:1} alone
const baseline = estimateCost({ critical: 1, high: 1 }, { authMode: 'api-key' });
assert.equal(result.usd, baseline.usd);
});
test('estimateCost — fixture {critical: 3, high: 15} produces expected order of magnitude', () => {
// 18 files * (3000 in + 1500 out) tokens = 54k in, 27k out
// api-key cost: 54k * $3/M + 27k * $15/M = $0.162 + $0.405 = $0.567
const result = estimateCost({ critical: 3, high: 15 }, { authMode: 'api-key' });
assert.ok(result.usd > 0.4 && result.usd < 0.8, `expected ~$0.567, got $${result.usd}`);
});
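The fixture arithmetic in the last test's comment can be recomputed standalone — a sketch using the per-file token constants stated in that comment (3000 input / 1500 output tokens per file, $3/M input and $15/M output api-key pricing; these constants are taken from the comment, not read out of cost-estimat.mjs):

```javascript
// Recompute the expected {critical: 3, high: 15} fixture cost.
const files = 3 + 15;                              // critical + high only
const tokensIn = files * 3000;                     // 54_000
const tokensOut = files * 1500;                    // 27_000
const usd = (tokensIn * 3 + tokensOut * 15) / 1_000_000;
console.log(tokensIn, tokensOut, usd.toFixed(3));  // 54000 27000 0.567
```

which is why the assertion's 0.4–0.8 window brackets ~$0.567.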


@@ -1,207 +0,0 @@
// tests/kb-update/test-install-cron.test.mjs
// Subprocess + filesystem-snapshot tests for scripts/install-kb-cron.mjs
// (Step 11). Exercises --print-only across targets and verifies idempotent
// --uninstall. Never invokes launchctl/systemctl/Register-ScheduledTask
// against real schedulers; --print-only short-circuits before any
// side-effecting call, and --uninstall is exercised against an empty HOME
// where the install file simply does not exist.
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { spawnSync } from 'node:child_process';
import { mkdtempSync, rmSync, readdirSync, existsSync } from 'node:fs';
import { tmpdir, platform as osPlatform } from 'node:os';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
const __dirname = dirname(fileURLToPath(import.meta.url));
const SCRIPT = join(__dirname, '..', '..', 'scripts', 'install-kb-cron.mjs');
function mkSandbox() {
return mkdtempSync(join(tmpdir(), 'install-kb-cron-test-'));
}
function runInstall(args, env = {}) {
return spawnSync('node', [SCRIPT, ...args], {
env: { PATH: process.env.PATH, ...env },
encoding: 'utf8',
timeout: 30_000,
});
}
function snapshotDir(dir) {
const out = [];
function walk(d) {
if (!existsSync(d)) return;
let entries;
try {
entries = readdirSync(d, { withFileTypes: true });
} catch {
return;
}
for (const entry of entries) {
const p = join(d, entry.name);
out.push(p);
if (entry.isDirectory()) walk(p);
}
}
walk(dir);
return out.sort();
}
function hostTarget() {
const p = osPlatform();
if (p === 'darwin') return 'macos';
if (p === 'linux') return 'linux';
if (p === 'win32') return 'windows';
return null;
}
test('--print-only --target macos: substituted plist with no unsubstituted placeholders', () => {
const home = mkSandbox();
try {
const r = runInstall(['--print-only', '--target', 'macos'], { HOME: home });
assert.equal(r.status, 0, `stderr: ${r.stderr}\nstdout: ${r.stdout}`);
assert.match(r.stdout, /<key>Label<\/key>/);
assert.match(r.stdout, /<key>StartCalendarInterval<\/key>/);
assert.match(r.stdout, /<integer>3<\/integer>/, 'default day-of-week=3 (Wednesday)');
assert.match(r.stdout, /<integer>4<\/integer>/, 'default hour=4');
assert.match(r.stdout, /<integer>23<\/integer>/, 'default minute=23');
assert.doesNotMatch(r.stdout, /\{\{[A-Z_]+\}\}/, 'no unsubstituted {{...}} placeholders');
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('--print-only --target linux: filled service+timer with [Unit] and OnCalendar=Wed', () => {
const home = mkSandbox();
try {
const r = runInstall(['--print-only', '--target', 'linux'], { HOME: home });
assert.equal(r.status, 0, `stderr: ${r.stderr}`);
assert.match(r.stdout, /\[Unit\]/);
assert.match(r.stdout, /\[Service\]/);
assert.match(r.stdout, /\[Timer\]/);
assert.match(r.stdout, /OnCalendar=Wed/);
assert.match(r.stdout, /ExecStart=/);
assert.doesNotMatch(r.stdout, /\{\{[A-Z_]+\}\}/, 'no unsubstituted {{...}} placeholders');
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('--print-only --target windows: Register-ScheduledTask + InteractiveToken', () => {
const home = mkSandbox();
try {
const r = runInstall(['--print-only', '--target', 'windows'], { HOME: home });
assert.equal(r.status, 0, `stderr: ${r.stderr}`);
assert.match(r.stdout, /Register-ScheduledTask/);
assert.match(r.stdout, /InteractiveToken/);
assert.match(r.stdout, /-DaysOfWeek\s+Wednesday/);
assert.doesNotMatch(r.stdout, /\{\{[A-Z_]+\}\}/, 'no unsubstituted {{...}} placeholders');
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('--print-only writes no files (HOME snapshot before/after equal)', () => {
const home = mkSandbox();
try {
const before = snapshotDir(home);
const r = runInstall(['--print-only', '--target', 'macos'], { HOME: home });
assert.equal(r.status, 0, `stderr: ${r.stderr}`);
const after = snapshotDir(home);
assert.deepEqual(after, before, 'HOME must not be touched in --print-only mode');
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('--print-only --target linux writes no files in HOME', () => {
const home = mkSandbox();
try {
const before = snapshotDir(home);
const r = runInstall(['--print-only', '--target', 'linux'], { HOME: home });
assert.equal(r.status, 0, `stderr: ${r.stderr}`);
const after = snapshotDir(home);
assert.deepEqual(after, before, 'HOME must not be touched in --print-only mode (linux target)');
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('--uninstall is idempotent (exit 0 with nothing installed)', () => {
const home = mkSandbox();
try {
const target = hostTarget() || 'macos';
const r = runInstall(['--uninstall', '--target', target], { HOME: home });
assert.equal(r.status, 0, `stderr: ${r.stderr}\nstdout: ${r.stdout}`);
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('--uninstall --target macos on empty HOME: idempotent (no plist, no launchctl call)', () => {
const home = mkSandbox();
try {
const r = runInstall(['--uninstall', '--target', 'macos'], { HOME: home });
assert.equal(r.status, 0, `stderr: ${r.stderr}`);
assert.match((r.stdout || '') + (r.stderr || ''), /nothing to remove|not installed/i);
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('--schedule with custom cron expr substitutes correctly into plist', () => {
const home = mkSandbox();
try {
const r = runInstall(
['--print-only', '--target', 'macos', '--schedule', '15 7 * * 5'],
{ HOME: home },
);
assert.equal(r.status, 0, `stderr: ${r.stderr}`);
assert.match(r.stdout, /<integer>15<\/integer>/, 'minute=15');
assert.match(r.stdout, /<integer>7<\/integer>/, 'hour=7');
assert.match(r.stdout, /<integer>5<\/integer>/, 'day-of-week=5 (Friday)');
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('invalid --target rejects with non-zero exit', () => {
const home = mkSandbox();
try {
const r = runInstall(['--print-only', '--target', 'bogus'], { HOME: home });
assert.notEqual(r.status, 0);
assert.match((r.stderr || '') + (r.stdout || ''), /target|invalid|unsupported/i);
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('invalid --schedule rejects with non-zero exit', () => {
const home = mkSandbox();
try {
const r = runInstall(
['--print-only', '--target', 'macos', '--schedule', 'not a cron'],
{ HOME: home },
);
assert.notEqual(r.status, 0);
assert.match((r.stderr || '') + (r.stdout || ''), /schedule|cron|invalid/i);
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('--node-bin override appears in substituted plist', () => {
const home = mkSandbox();
try {
const r = runInstall(
['--print-only', '--target', 'macos', '--node-bin', '/opt/custom/bin/node'],
{ HOME: home },
);
assert.equal(r.status, 0, `stderr: ${r.stderr}`);
assert.match(r.stdout, /\/opt\/custom\/bin\/node/);
} finally {
rmSync(home, { recursive: true, force: true });
}
});


@@ -1,192 +0,0 @@
// tests/kb-update/test-lock-file.test.mjs
// Unit tests for scripts/kb-update/lib/lock-file.mjs
import { test } from 'node:test';
import assert from 'node:assert/strict';
import {
mkdtempSync,
rmSync,
writeFileSync,
readFileSync,
existsSync,
utimesSync,
} from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';
import {
acquireLock,
isPidAlive,
} from '../../scripts/kb-update/lib/lock-file.mjs';
const DEAD_PID = 99999999; // far above typical PID_MAX; reliably non-existent
function withTmp(fn) {
const dir = mkdtempSync(join(tmpdir(), 'lf-test-'));
try {
return fn(dir);
} finally {
rmSync(dir, { recursive: true, force: true });
}
}
function writeFakeLock(path, { pid, started, host = 'test-host', ageMs = 0 }) {
writeFileSync(
path,
JSON.stringify({
pid,
started: started ?? Date.now() - ageMs,
host,
version: 1,
}),
'utf8'
);
if (ageMs > 0) {
const past = new Date(Date.now() - ageMs);
utimesSync(path, past, past);
}
}
test('isPidAlive — current process is alive', () => {
assert.equal(isPidAlive(process.pid), true);
});
test('isPidAlive — non-existent PID is dead', () => {
assert.equal(isPidAlive(DEAD_PID), false);
});
test('isPidAlive — invalid input is dead', () => {
assert.equal(isPidAlive(0), false);
assert.equal(isPidAlive(-1), false);
assert.equal(isPidAlive(NaN), false);
assert.equal(isPidAlive(undefined), false);
});
test('acquireLock — creates lock file with current PID metadata', () => {
withTmp((dir) => {
const path = join(dir, 'test.lock');
const lock = acquireLock(path, { registerCleanup: false });
try {
assert.equal(lock.lockPath, path);
assert.equal(existsSync(path), true);
const data = JSON.parse(readFileSync(path, 'utf8'));
assert.equal(data.pid, process.pid);
assert.equal(data.version, 1);
assert.equal(typeof data.started, 'number');
assert.equal(typeof data.host, 'string');
} finally {
lock.release();
}
});
});
test('acquireLock — second call same process throws ELOCKED', () => {
withTmp((dir) => {
const path = join(dir, 'test.lock');
const lock = acquireLock(path, { registerCleanup: false });
try {
assert.throws(
() => acquireLock(path, { registerCleanup: false }),
(err) => err.code === 'ELOCKED' && err.holderPid === process.pid
);
} finally {
lock.release();
}
});
});
test('acquireLock — concurrent live holder (fixture lock file) throws ELOCKED', () => {
withTmp((dir) => {
const path = join(dir, 'test.lock');
// Pre-write a lock as if held by another live process (we use process.pid
// as a stand-in for "guaranteed alive" without forking).
writeFakeLock(path, { pid: process.pid, ageMs: 0 });
assert.throws(
() => acquireLock(path, { registerCleanup: false }),
(err) => err.code === 'ELOCKED'
);
});
});
test('acquireLock — release deletes the lock file', () => {
withTmp((dir) => {
const path = join(dir, 'test.lock');
const lock = acquireLock(path, { registerCleanup: false });
assert.equal(existsSync(path), true);
lock.release();
assert.equal(existsSync(path), false);
});
});
test('acquireLock — release on already-released lock is a no-op', () => {
withTmp((dir) => {
const path = join(dir, 'test.lock');
const lock = acquireLock(path, { registerCleanup: false });
lock.release();
// Second release must not throw.
lock.release();
assert.equal(existsSync(path), false);
});
});
test('acquireLock — stale lock with dead PID + old mtime is cleaned', () => {
withTmp((dir) => {
const path = join(dir, 'test.lock');
writeFakeLock(path, { pid: DEAD_PID, ageMs: 2 * 60 * 60 * 1000 });
const lock = acquireLock(path, { registerCleanup: false });
try {
const data = JSON.parse(readFileSync(path, 'utf8'));
assert.equal(data.pid, process.pid);
} finally {
lock.release();
}
});
});
test('acquireLock — stale lock with live PID but old mtime is also cleaned', () => {
withTmp((dir) => {
const path = join(dir, 'test.lock');
// Live PID (us) but mtime older than default 1h threshold.
writeFakeLock(path, { pid: process.pid, ageMs: 2 * 60 * 60 * 1000 });
const lock = acquireLock(path, { registerCleanup: false });
try {
const data = JSON.parse(readFileSync(path, 'utf8'));
assert.equal(data.pid, process.pid);
// started is rewritten to fresh wallclock
assert.ok(Date.now() - data.started < 5000);
} finally {
lock.release();
}
});
});
test('acquireLock — fresh lock with live PID is NOT cleaned', () => {
withTmp((dir) => {
const path = join(dir, 'test.lock');
writeFakeLock(path, { pid: process.pid, ageMs: 0 });
assert.throws(
() => acquireLock(path, { registerCleanup: false }),
(err) => err.code === 'ELOCKED' && err.holderPid === process.pid
);
});
});
test('acquireLock — staleThresholdMs is honored', () => {
withTmp((dir) => {
const path = join(dir, 'test.lock');
// 5s-old, live PID. Default 1h threshold → not stale → ELOCKED.
writeFakeLock(path, { pid: process.pid, ageMs: 5_000 });
assert.throws(
() => acquireLock(path, { registerCleanup: false }),
(err) => err.code === 'ELOCKED'
);
// Same fixture but threshold 1s → stale → cleaned.
writeFakeLock(path, { pid: process.pid, ageMs: 5_000 });
const lock = acquireLock(path, {
registerCleanup: false,
staleThresholdMs: 1_000,
});
lock.release();
assert.equal(existsSync(path), false);
});
});
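
The staleness contract these tests pin down can be sketched as follows. This is an illustrative reconstruction, not the deleted `lock-file.mjs` itself — the `isStale` helper name and its signature are assumptions; only the rule it encodes (dead PID is stale, and a live PID with an mtime older than the threshold is also stale) comes from the tests above:

```javascript
// Sketch of the stale-detection rule (hypothetical names, contract from tests):
// a lock is stale when its holder PID is dead, OR when the lock file's mtime
// is older than staleThresholdMs (default 1 hour) even if the PID is alive.
function isPidAlive(pid) {
  if (!Number.isInteger(pid) || pid <= 0) return false;
  try {
    process.kill(pid, 0); // signal 0 sends nothing; it only checks existence
    return true;
  } catch (err) {
    return err.code === 'EPERM'; // EPERM means the process exists but is not ours
  }
}

function isStale({ pid, mtimeMs }, now = Date.now(), staleThresholdMs = 60 * 60 * 1000) {
  if (!isPidAlive(pid)) return true;        // dead holder → always stale
  return now - mtimeMs > staleThresholdMs;  // live holder but ancient lock → stale
}
```

Note how this covers the "live PID but old mtime" case: a hung process that renewed nothing for over an hour loses its lock, which is exactly what the `staleThresholdMs is honored` test exercises.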


@@ -1,172 +0,0 @@
// tests/kb-update/test-session-start-status.test.mjs
// Verifies that hooks/scripts/session-start-context.mjs surfaces the
// KB-update status file correctly per Status File Schema (plan.md L122-153).
//
// Same fixture statuses as test-weekly-kb-cron-flags.test.mjs so producer/
// consumer divergence is caught at test time.
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { spawnSync } from 'node:child_process';
import { mkdtempSync, mkdirSync, rmSync, writeFileSync } from 'node:fs';
import { tmpdir, platform as osPlatform } from 'node:os';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
const __dirname = dirname(fileURLToPath(import.meta.url));
const HOOK = join(__dirname, '..', '..', 'hooks', 'scripts', 'session-start-context.mjs');
const PLUGIN_ROOT = join(__dirname, '..', '..');
function mkSandbox() {
return mkdtempSync(join(tmpdir(), 'sshook-test-'));
}
function cacheDirFor(home) {
if (osPlatform() === 'darwin') {
return join(home, 'Library', 'Caches', 'ms-ai-architect');
}
if (osPlatform() === 'win32') {
return join(home, 'AppData', 'Local', 'ms-ai-architect', 'Cache');
}
return join(home, '.cache', 'ms-ai-architect');
}
function writeStatus(home, status) {
const dir = cacheDirFor(home);
mkdirSync(dir, { recursive: true });
writeFileSync(join(dir, 'kb-update-status.json'), JSON.stringify(status, null, 2));
}
function runHook(home, extraEnv = {}) {
return spawnSync('node', [HOOK], {
env: {
PATH: process.env.PATH,
HOME: home,
CLAUDE_PLUGIN_ROOT: PLUGIN_ROOT,
...extraEnv,
},
encoding: 'utf8',
timeout: 10_000,
cwd: home, // not in plugin dir, so utredning/onboarding checks stay quiet
});
}
const baseStatus = {
schema_version: 1,
last_run_status: 'success',
last_run_ts: '2026-05-05T10:00:00Z',
duration_seconds: 412,
auth_mode: 'api-key',
log_file: '/tmp/kb-update.log',
files_planned: 18,
files_committed: 18,
session_id: 'sess_demo',
total_cost_usd: 1.42,
tokens_input: 54000,
tokens_output: 27000,
max_turns_hit: false,
diagnostic: null,
};
test('failure status surfaces "KB-update: failure" line', () => {
const home = mkSandbox();
try {
writeStatus(home, { ...baseStatus, last_run_status: 'failure', diagnostic: 'No commits produced' });
const result = runHook(home);
assert.equal(result.status, 0, `stderr: ${result.stderr}`);
assert.match(result.stdout, /KB-update: failure/);
assert.match(result.stdout, /2026-05-05T10:00:00Z/);
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('partial status surfaces "KB-update: partial" line', () => {
const home = mkSandbox();
try {
writeStatus(home, { ...baseStatus, last_run_status: 'partial', files_committed: 7, max_turns_hit: true });
const result = runHook(home);
assert.equal(result.status, 0);
assert.match(result.stdout, /KB-update: partial/);
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('budget_exceeded status surfaces a line', () => {
const home = mkSandbox();
try {
writeStatus(home, { ...baseStatus, last_run_status: 'budget_exceeded', files_committed: 0 });
const result = runHook(home);
assert.equal(result.status, 0);
assert.match(result.stdout, /KB-update: budget_exceeded/);
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('success status does NOT surface a KB-update line', () => {
const home = mkSandbox();
try {
writeStatus(home, { ...baseStatus, last_run_status: 'success' });
const result = runHook(home);
assert.equal(result.status, 0);
assert.doesNotMatch(result.stdout, /KB-update:/);
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('dry-run status does NOT surface a KB-update line', () => {
const home = mkSandbox();
try {
writeStatus(home, { ...baseStatus, last_run_status: 'dry-run' });
const result = runHook(home);
assert.equal(result.status, 0);
assert.doesNotMatch(result.stdout, /KB-update:/);
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('missing status file → hook still exits 0 with no KB-update line', () => {
const home = mkSandbox();
try {
// No status file written.
const result = runHook(home);
assert.equal(result.status, 0, `stderr: ${result.stderr}`);
assert.doesNotMatch(result.stdout, /KB-update:/);
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('malformed status file → hook tolerates and exits 0', () => {
const home = mkSandbox();
try {
const dir = cacheDirFor(home);
mkdirSync(dir, { recursive: true });
writeFileSync(join(dir, 'kb-update-status.json'), '{ this is: not, valid json');
const result = runHook(home);
assert.equal(result.status, 0);
assert.doesNotMatch(result.stdout, /KB-update:/);
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('hook completes in < 1 second on warm filesystem', () => {
const home = mkSandbox();
try {
writeStatus(home, { ...baseStatus, last_run_status: 'failure' });
// Warm-up.
runHook(home);
const start = Date.now();
const result = runHook(home);
const elapsed = Date.now() - start;
assert.equal(result.status, 0);
assert.ok(elapsed < 1000, `hook took ${elapsed}ms (>1s)`);
} finally {
rmSync(home, { recursive: true, force: true });
}
});
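
The surfacing rule these tests encode can be sketched in a few lines. This is an editorial reconstruction (the `kbUpdateLine` name is hypothetical, not the deleted hook's API): only statuses that need operator attention produce a line, while `success`, `dry-run`, a missing file, and malformed JSON all stay silent so the hook never disrupts session start:

```javascript
// Sketch of the hook's surfacing decision (hypothetical helper, contract from
// the tests above): returns the "KB-update: <status>" line, or null for quiet.
function kbUpdateLine(rawJson) {
  let status;
  try {
    status = JSON.parse(rawJson);
  } catch {
    return null; // malformed status file → tolerate silently, exit 0
  }
  const noisy = new Set(['failure', 'partial', 'budget_exceeded']);
  if (!status || !noisy.has(status.last_run_status)) return null;
  return `KB-update: ${status.last_run_status} (last run ${status.last_run_ts})`;
}
```

Keeping the noisy set explicit (rather than "anything that is not success") is what makes `dry-run` quiet by construction, matching the producer/consumer fixtures shared with the cron-flags tests.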


@@ -1,98 +0,0 @@
// tests/kb-update/test-template-generation.test.mjs
// Structural-regex tests for scripts/kb-update/templates/* (Step 8).
// Verifies that each template file exists, contains the documented sentinel
// strings, and exposes the documented placeholder set. No template execution
// or real scheduling occurs in this test — that lives in Wave 6 live-test.
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { readFileSync, existsSync } from 'node:fs';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
const __dirname = dirname(fileURLToPath(import.meta.url));
const TEMPLATES_DIR = join(__dirname, '..', '..', 'scripts', 'kb-update', 'templates');
const PLIST = join(TEMPLATES_DIR, 'com.fromaitochitta.ms-ai-architect.kb-update.plist');
const SERVICE = join(TEMPLATES_DIR, 'ms-ai-architect-kb-update.service');
const TIMER = join(TEMPLATES_DIR, 'ms-ai-architect-kb-update.timer');
const PS1 = join(TEMPLATES_DIR, 'ms-ai-architect-kb-update.ps1');
const README = join(TEMPLATES_DIR, 'README.md');
function readTpl(p) {
assert.equal(existsSync(p), true, `template missing: ${p}`);
return readFileSync(p, 'utf8');
}
test('plist — exists with required keys and placeholders', () => {
const content = readTpl(PLIST);
assert.match(content, /<key>Label<\/key>/);
assert.match(content, /<key>StartCalendarInterval<\/key>/);
assert.match(content, /<key>ProgramArguments<\/key>/);
assert.match(content, /<key>StandardOutPath<\/key>/);
assert.match(content, /<key>StandardErrorPath<\/key>/);
assert.match(content, /<key>EnvironmentVariables<\/key>/);
assert.match(content, /<key>RunAtLoad<\/key>\s*<false\/>/);
assert.match(content, /\{\{NODE_BIN\}\}/);
assert.match(content, /\{\{PLUGIN_ROOT\}\}/);
assert.match(content, /\{\{LOG_FILE\}\}/);
assert.match(content, /\{\{SCHEDULE_HOUR\}\}/);
assert.match(content, /\{\{SCHEDULE_MINUTE\}\}/);
assert.match(content, /\{\{SCHEDULE_DAY_OF_WEEK\}\}/);
});
test('systemd .timer — exists with OnCalendar and Persistent', () => {
const content = readTpl(TIMER);
assert.match(content, /\[Unit\]/);
assert.match(content, /\[Timer\]/);
assert.match(content, /\[Install\]/);
assert.match(content, /OnCalendar=Wed/);
assert.match(content, /Persistent=true/);
assert.match(content, /WantedBy=timers\.target/);
});
test('systemd .service — exists with [Unit], [Service] and ExecStart', () => {
const content = readTpl(SERVICE);
assert.match(content, /\[Unit\]/);
assert.match(content, /\[Service\]/);
assert.match(content, /ExecStart=/);
assert.match(content, /\{\{NODE_BIN\}\}/);
assert.match(content, /\{\{PLUGIN_ROOT\}\}/);
});
test('PowerShell ps1 — exists with Register-ScheduledTask and InteractiveToken', () => {
const content = readTpl(PS1);
assert.match(content, /Register-ScheduledTask/);
assert.match(content, /InteractiveToken/);
assert.match(content, /New-ScheduledTaskTrigger/);
assert.match(content, /-Weekly/);
assert.match(content, /-DaysOfWeek\s+Wednesday/);
assert.match(content, /\{\{NODE_BIN\}\}/);
assert.match(content, /\{\{PLUGIN_ROOT\}\}/);
});
test('README — exists and references each template by filename', () => {
const content = readTpl(README);
assert.match(content, /com\.fromaitochitta\.ms-ai-architect\.kb-update\.plist/);
assert.match(content, /ms-ai-architect-kb-update\.service/);
assert.match(content, /ms-ai-architect-kb-update\.timer/);
assert.match(content, /ms-ai-architect-kb-update\.ps1/);
});
test('plist + service + ps1 reference NODE_BIN and PLUGIN_ROOT', () => {
// The .timer is a pure trigger — it activates the .service, which is
// the only systemd unit that needs to know the binary + plugin root.
// launchd and Windows put the command directly in the trigger spec, so
// they need both placeholders themselves.
for (const tpl of [PLIST, SERVICE, PS1]) {
const content = readFileSync(tpl, 'utf8');
assert.match(content, /\{\{NODE_BIN\}\}/, `${tpl} missing NODE_BIN placeholder`);
assert.match(content, /\{\{PLUGIN_ROOT\}\}/, `${tpl} missing PLUGIN_ROOT placeholder`);
}
});
test('.timer is placeholder-free literal (Wed 04:23 hardcoded per plan)', () => {
const content = readFileSync(TIMER, 'utf8');
assert.match(content, /OnCalendar=Wed \*-\*-\* 04:23:00/);
assert.doesNotMatch(content, /\{\{[A-Z_]+\}\}/);
});
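
The `{{PLACEHOLDER}}` convention these structural tests assume can be sketched with a single regex-driven substitution. This is a minimal illustration, not the removed `install-kb-cron.mjs` implementation; note that unknown placeholders are deliberately left intact, which is what lets the installer tests' "no unsubstituted `{{...}}` placeholders" assertions catch a missing variable:

```javascript
// Sketch of template substitution (assumed shape, not the real installer):
// replaces every known {{UPPER_SNAKE}} placeholder, leaves unknown ones
// intact so downstream checks can flag them.
function renderTemplate(template, vars) {
  return template.replace(/\{\{([A-Z_]+)\}\}/g, (match, name) =>
    Object.prototype.hasOwnProperty.call(vars, name) ? String(vars[name]) : match
  );
}
```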


@@ -1,126 +0,0 @@
// tests/kb-update/test-weekly-kb-cron-flags.test.mjs
// Subprocess-based flag-parsing tests for scripts/kb-update/weekly-kb-cron.mjs
// (Step 9). Avoids real Claude spawn by exercising --dry-run + auth-failure
// fast-path. Full e2e is reserved for Wave 6 live-test.
//
// The cron writes its status file to <getCacheDir('ms-ai-architect')>, which
// on darwin resolves to $HOME/Library/Caches/ms-ai-architect/. Setting HOME
// in the subprocess env therefore points all path resolution at a tmp dir,
// keeping the test isolated from the real machine.
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { spawnSync } from 'node:child_process';
import { mkdtempSync, rmSync, existsSync, readFileSync } from 'node:fs';
import { tmpdir, platform as osPlatform } from 'node:os';
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
const __dirname = dirname(fileURLToPath(import.meta.url));
const CRON = join(__dirname, '..', '..', 'scripts', 'kb-update', 'weekly-kb-cron.mjs');
function mkSandbox() {
return mkdtempSync(join(tmpdir(), 'cron-test-'));
}
function runCron(extraArgs, env = {}) {
return spawnSync('node', [CRON, ...extraArgs], {
env: { PATH: process.env.PATH, ...env },
encoding: 'utf8',
timeout: 30_000,
});
}
function statusFilePath(home) {
if (osPlatform() === 'darwin') {
return join(home, 'Library', 'Caches', 'ms-ai-architect', 'kb-update-status.json');
}
if (osPlatform() === 'win32') {
return join(home, 'AppData', 'Local', 'ms-ai-architect', 'Cache', 'kb-update-status.json');
}
return join(home, '.cache', 'ms-ai-architect', 'kb-update-status.json');
}
test('--dry-run exits 0 with dry-run status, no Claude spawn', () => {
const home = mkSandbox();
try {
const result = runCron(['--dry-run'], {
HOME: home,
ANTHROPIC_API_KEY: '',
CLAUDE_CODE_OAUTH_TOKEN: '',
});
assert.equal(result.status, 0, `stderr: ${result.stderr}\nstdout: ${result.stdout}`);
assert.match(result.stdout, /DRY RUN/i);
const sf = statusFilePath(home);
assert.equal(existsSync(sf), true, `status file missing at ${sf}`);
const status = JSON.parse(readFileSync(sf, 'utf8'));
assert.equal(status.schema_version, 1);
assert.equal(status.last_run_status, 'dry-run');
assert.equal(typeof status.last_run_ts, 'string');
assert.match(status.last_run_ts, /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}/);
assert.equal(typeof status.auth_mode, 'string');
assert.equal(typeof status.log_file, 'string');
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('missing auth (no --dry-run) fails fast with auth-related error', () => {
const home = mkSandbox();
try {
const result = runCron([], {
HOME: home,
ANTHROPIC_API_KEY: '',
CLAUDE_CODE_OAUTH_TOKEN: '',
});
assert.notEqual(result.status, 0, 'cron should exit non-zero on missing auth');
const combined = (result.stdout || '') + '\n' + (result.stderr || '');
assert.match(
combined,
/not safe for cron|unauthenticated|EAUTHCRON|auth/i,
`expected auth error in output. stdout: ${result.stdout}\nstderr: ${result.stderr}`
);
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('--budget-usd flag parsed and reflected in dry-run plan', () => {
const home = mkSandbox();
try {
const result = runCron(['--dry-run', '--budget-usd=12.50'], {
HOME: home,
ANTHROPIC_API_KEY: '',
CLAUDE_CODE_OAUTH_TOKEN: '',
});
assert.equal(result.status, 0, `stderr: ${result.stderr}`);
assert.match(
result.stdout,
// Group the alternation: the earlier /...|12\.5/ form matched a bare
// "12.5" anywhere in stdout, bypassing the budget-line check entirely.
/(budget|Budget)[^\n]*12\.50?/,
`expected 12.50 in dry-run output: ${result.stdout}`
);
} finally {
rmSync(home, { recursive: true, force: true });
}
});
test('--dry-run writes status file even with no change-report present', () => {
const home = mkSandbox();
try {
const result = runCron(['--dry-run'], {
HOME: home,
ANTHROPIC_API_KEY: '',
CLAUDE_CODE_OAUTH_TOKEN: '',
});
assert.equal(result.status, 0);
const sf = statusFilePath(home);
const status = JSON.parse(readFileSync(sf, 'utf8'));
// Required fields per Status File Schema (plan.md L122-153)
for (const key of ['schema_version', 'last_run_status', 'last_run_ts', 'auth_mode', 'log_file', 'diagnostic']) {
assert.ok(Object.prototype.hasOwnProperty.call(status, key), `missing required field: ${key}`);
}
} finally {
rmSync(home, { recursive: true, force: true });
}
});


@@ -79,7 +79,14 @@ if $RUN_PLAYGROUND; then
 fi
 if $RUN_KB_UPDATE; then
-  bash "$SCRIPT_DIR/test-kb-update.sh" || FAILURES=$((FAILURES + 1))
+  echo -e "${CYAN}─── KB Update utilities ───${NC}"
+  PLUGIN_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
+  if (cd "$PLUGIN_ROOT" && node --test tests/kb-update/*.test.mjs); then
+    echo -e "${GREEN}KB Update: PASS${NC}"
+  else
+    echo -e "${RED}KB Update: FAIL${NC}"
+    FAILURES=$((FAILURES + 1))
+  fi
 fi
 echo -e "${CYAN}══════════════════════════════════════════════${NC}"


@@ -1,44 +0,0 @@
#!/bin/bash
# test-kb-update.sh — Run KB-update node:test suite via the E2E harness.
# Bash 3.2-compatible. Sources lib/e2e-helpers.sh, runs node --test against
# the kb-update glob (Node 25 rejects directory-form arguments to --test),
# and translates the result into the suite's pass/fail counters.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PLUGIN_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
source "$SCRIPT_DIR/lib/e2e-helpers.sh"
init_suite "KB Update"
cd "$PLUGIN_ROOT"
if ! compgen -G "tests/kb-update/*.test.mjs" >/dev/null 2>&1; then
fail "No test files matched tests/kb-update/*.test.mjs"
print_summary
exit 1
fi
TEST_LOG="$(mktemp -t kb-update-suite.XXXXXX)"
trap 'rm -f "$TEST_LOG"' EXIT
NODE_EXIT=0
node --test tests/kb-update/*.test.mjs >"$TEST_LOG" 2>&1 || NODE_EXIT=$?
cat "$TEST_LOG"
PASS_COUNT="$(awk '/^[^[:alnum:]]*pass[[:space:]]+[0-9]+/ { for (i=1;i<=NF;i++) if ($i=="pass") { print $(i+1); exit } }' "$TEST_LOG")"
FAIL_COUNT="$(awk '/^[^[:alnum:]]*fail[[:space:]]+[0-9]+/ { for (i=1;i<=NF;i++) if ($i=="fail") { print $(i+1); exit } }' "$TEST_LOG")"
TESTS_COUNT="$(awk '/^[^[:alnum:]]*tests[[:space:]]+[0-9]+/ { for (i=1;i<=NF;i++) if ($i=="tests") { print $(i+1); exit } }' "$TEST_LOG")"
PASS_COUNT="${PASS_COUNT:-0}"
FAIL_COUNT="${FAIL_COUNT:-0}"
TESTS_COUNT="${TESTS_COUNT:-0}"
if [ "$NODE_EXIT" -eq 0 ] && [ "$FAIL_COUNT" -eq 0 ]; then
pass "node --test tests/kb-update/*.test.mjs ($PASS_COUNT/$TESTS_COUNT pass)"
else
fail "node --test failed (pass=$PASS_COUNT, fail=$FAIL_COUNT, tests=$TESTS_COUNT, exit=$NODE_EXIT)"
fi
print_summary


@@ -151,10 +151,10 @@ for f in "$PLUGIN_ROOT"/commands/*.md; do
     fail "Command-ID '${cmd_id}' mangler i v3 HTML"
   fi
 done
-if [ "$cmd_count" -eq 24 ]; then
-  pass "24 command-filer funnet i commands/ (forventet 24)"
+if [ "$cmd_count" -eq 25 ]; then
+  pass "25 command-filer funnet i commands/ (forventet 25)"
 else
-  fail "Forventet 24 command-filer, fant $cmd_count"
+  fail "Forventet 25 command-filer, fant $cmd_count"
 fi
 # -------------------------------------------------------