AI Incident Response and Breach Handling Procedures
Last updated: 2026-05 | Verified: MCP 2026-05 | Status: Established Practice | Category: AI Security Engineering
Introduction
Effective handling of security breaches in AI systems requires specialized procedures that address both traditional cybersecurity threats and AI-specific vulnerabilities such as data poisoning, model inversion, and prompt injection. Modern AI systems operate in complex ecosystems where attacks can manifest across data layers, training infrastructure, inference endpoints, and integrations with business applications.
Microsoft Azure offers comprehensive tooling for incident response through Microsoft Defender XDR, Microsoft Sentinel, and Azure-native forensics capabilities. A systematic incident response process ensures rapid detection, effective containment, thorough forensic analysis, and learning that strengthens the organization's maturity over time.
Incident response for AI systems follows the NIST SP 800-61 framework with four main phases: (1) Preparation — establishing plans, tooling, and team structure before incidents occur, (2) Detection and Analysis — high-quality alerting and systematic investigation with AI-specific context, (3) Containment, Eradication, and Recovery — rapid isolation, threat removal, and system restoration, and (4) Post-Incident Activity — lessons learned and evidence preservation for compliance and future improvement.
Core Components
1. Incident Detection Triggers (AI-specific)
AI systems require specialized detection mechanisms beyond traditional SIEM monitoring:
| Trigger Type | Detection Method | Azure Tool |
|---|---|---|
| Data Poisoning | Anomaly detection in training data distribution, unexpected model accuracy drop | Azure AI Anomaly Detector, Microsoft Purview |
| Model Inversion | Unusual query patterns with high-confidence-score targeting, rate limit violations | Azure API Management analytics, Microsoft Sentinel |
| Prompt Injection | Malicious prompt patterns, jailbreak attempts, unauthorized system commands | Azure AI Content Safety, custom detection rules |
| Model Theft | Path-finding queries, equation-solving patterns, extremely high query volume | Azure Monitor Log Analytics, API request profiling |
| Adversarial Examples | Inputs with low confidence scores on known data, batch misclassifications | Model monitoring dashboards, drift detection |
| Backdoor Attacks | Targeted misclassification on specific input patterns, trojaned model artifacts | ML-BOM tracking (OWASP CycloneDX), supply chain audit |
Microsoft stack integration (Verified MCP 2026-04):
- Microsoft Defender for AI Services / AI Security Posture Management — Automatic detection and remediation of generative AI risk across the Azure environment (Microsoft Defender for Cloud)
- Microsoft Purview Insider Risk Management — Integrates with other security suites to assess enterprise-wide data risk and identify risky AI behavior patterns and prompt-based data exfiltration
- Microsoft Sentinel AI/ML Analytics — Custom KQL queries for detecting anomalous model behavior and data exfiltration patterns
- Azure API Management — Secures Model Context Protocol (MCP) server endpoints as part of AI communication channel security
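Most of the detection triggers above reduce to statistical baselines over telemetry. As a language-agnostic illustration of the logic a detection rule would encode for the model theft trigger (a Python sketch, not a Sentinel or Azure API; the function name and z-score threshold are made up here):

```python
from statistics import mean, stdev

def is_anomalous_volume(history: list[int], current: int, z_threshold: float = 3.0) -> bool:
    """Flag the current per-client query count if it deviates strongly
    from the rolling baseline (a crude stand-in for a SIEM analytics rule)."""
    if len(history) < 5:
        return False  # not enough baseline data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > mu * 2
    return (current - mu) / sigma > z_threshold

# A client that normally sends ~100 queries/hour suddenly sends 5 000:
baseline = [95, 102, 98, 110, 101, 99, 97]
print(is_anomalous_volume(baseline, 5000))  # → True
print(is_anomalous_volume(baseline, 104))   # → False
```

In production this logic would live in a Sentinel KQL rule over API Management logs, keyed per caller identity rather than globally.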
2. Response Playbooks (AI-Specific)
Automated response procedures tailored to AI incidents:
Playbook A: Data Poisoning Response
- Isolate the affected training data source (Azure Storage/Data Lake private endpoints)
- Snapshot the model before quarantine (Azure ML model registry versioning)
- Run data integrity validation on all training data (custom scripts + Purview DLP)
- Retrain the model from a validated clean backup
- Roll out via canary deployment with A/B testing before full release
Playbook B: Model Compromise Response
- Revoke API keys for the affected model (Azure Key Vault rotation)
- Enable model access audit logging (Azure Monitor + diagnostics)
- Forensic analysis of model artifacts (Azure Blob immutable storage inspection)
- Re-deploy the model from a verified source with a new endpoint
- Notify downstream consumers of the endpoint change
Playbook C: Prompt Injection Incident
- Block malicious user/IP via Azure API Management policy
- Enable enhanced input filtering (Azure AI Content Safety strict mode)
- Analyze attack patterns for detection rule tuning
- Implement guardrails: system message hardening, output sanitization
- Red team testing with PyRIT for validation
Playbook D: Insider Threat (Model/Data Exfiltration)
- Suspend user via Microsoft Entra ID Conditional Access
- Isolate the affected VM/container (NSG rule modification via automation)
- Forensic snapshot of the user workspace (Azure VM snapshot + memory dump)
- Audit all data access logs (Azure Monitor + Purview access audit)
- Legal hold on all artifacts (Azure Storage immutable policy)
3. Containment Strategies
AI-specific containment tactics require both traditional network isolation and ML pipeline isolation:
| Strategy | Implementation | Speed | Impact |
|---|---|---|---|
| Network Isolation | NSG rule modification, Azure Firewall block, VNET peering removal | Seconds | Full model unavailability |
| API Rate Limiting | Azure API Management throttling policies | Immediate | Degraded performance for legitimate users |
| Model Endpoint Disable | Azure ML endpoint deactivation, DNS record removal | Minutes | Complete service outage |
| Credential Revocation | Key Vault secret rotation, SAS token invalidation, MSI disable | Seconds | Re-authentication required |
| Training Pipeline Halt | Azure ML pipeline cancellation, compute cluster shutdown | Minutes | Stops active model updates |
| Read-Only Mode | Remove write permissions on ML workspace, lock ARM resources | Minutes | Prevents further model/data changes |
Automation via Azure Automation runbooks:
# Example: automated VM isolation on a high-severity alert (runbook sketch;
# inside a workflow, some Az cmdlet pipelines may need an InlineScript block)
workflow Isolate-CompromisedVM {
    param([string]$VMName, [string]$ResourceGroupName, [string]$IncidentId)
    # Find the NSG attached to the VM's primary NIC
    $vm  = Get-AzVM -Name $VMName -ResourceGroupName $ResourceGroupName
    $nic = Get-AzNetworkInterface -ResourceId $vm.NetworkProfile.NetworkInterfaces[0].Id
    $nsg = Get-AzNetworkSecurityGroup | Where-Object { $_.Id -eq $nic.NetworkSecurityGroup.Id }
    # Deny all inbound traffic at top priority
    Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
        -Name "Block-All-Incident-$IncidentId" `
        -Priority 100 -Access Deny -Protocol * -Direction Inbound `
        -SourceAddressPrefix * -SourcePortRange * `
        -DestinationAddressPrefix * -DestinationPortRange *
    Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg
    # Preserve forensic evidence before any remediation
    $disk = Get-AzDisk -ResourceGroupName $ResourceGroupName -DiskName $vm.StorageProfile.OsDisk.Name
    $snap = New-AzSnapshotConfig -SourceUri $disk.Id -Location $disk.Location -CreateOption Copy
    New-AzSnapshot -ResourceGroupName $ResourceGroupName -SnapshotName "Forensic-$IncidentId" -Snapshot $snap
}
4. Forensics and Logging
AI incident forensics requires collecting both traditional system logs and ML-specific artifacts:
Critical Evidence Sources:
- Model Artifacts: Trained model binaries, configuration files, hyperparameters (Azure ML model registry)
- Training Data Snapshots: Data used for training with version/timestamp (Azure Data Lake snapshots)
- Inference Logs: All prediction requests/responses with timestamps and user context (Azure Monitor Application Insights)
- API Access Logs: Full audit trail of API calls with IP, user, and query content (Azure API Management analytics)
- System Logs: Azure Activity Logs, NSG Flow Logs, Microsoft Entra ID sign-in/audit logs
- Memory Dumps: VM memory state on suspected compromise (Azure VM diagnostics extension)
- Network Packet Captures: Azure Network Watcher packet capture for lateral movement analysis
Immutable Evidence Storage:
{
"storageAccount": "forensicstorage",
"immutabilityPolicy": {
"immutabilityPeriodSinceCreationInDays": 2190,
"allowProtectedAppendWrites": false,
"state": "Locked"
},
"legalHold": {
"tags": ["incident-2026-02-001", "model-theft-investigation"],
"enabled": true
}
}
Chain of Custody Automation:
- Cryptographic hashing of all collected artifacts (SHA-256)
- Digital signatures with Azure Key Vault managed certificates
- Access logging with the Microsoft Entra ID audit trail
- Tamper-evident storage with Azure Blob versioning enabled
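The hashing step above can be sketched as a small helper that builds one chain-of-custody entry per artifact. This is an illustrative Python sketch (the function and field names are invented here, not a Purview or Key Vault API); the resulting record would be stored alongside the evidence in immutable blob storage:

```python
import hashlib
import json
from datetime import datetime, timezone

def custody_record(artifact_name: str, data: bytes, collected_by: str) -> dict:
    """Produce a tamper-evident chain-of-custody entry for one artifact."""
    return {
        "artifact": artifact_name,
        "sha256": hashlib.sha256(data).hexdigest(),  # content fingerprint
        "size_bytes": len(data),
        "collected_by": collected_by,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

record = custody_record("model-v3.onnx", b"...model bytes...", "responder@example.org")
print(json.dumps(record, indent=2))
```

Signing the record (Key Vault certificate) and writing it to a container with a locked immutability policy would complete the tamper-evidence chain.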
5. Post-Incident Analysis
A systematic lessons-learned process for continuous improvement:
Root Cause Analysis Framework:
- Timeline Reconstruction — Full incident timeline from initial access to containment
- Attack Vector Identification — How did the attacker get in? (MITRE ATT&CK for ML mapping)
- Control Gap Assessment — Which security controls failed or were missing?
- Impact Quantification — Business impact, data exposure, regulatory implications
- Improvement Recommendations — Concrete actions with owners and deadlines
Metrics to Track:
| Metric | Target | Measurement |
|---|---|---|
| Mean Time to Detect (MTTD) | < 15 min | Time from attack start to first alert |
| Mean Time to Respond (MTTR) | < 30 min | Time from alert to containment action |
| False Positive Rate | < 5% | Percentage of alerts requiring no action |
| Recurring Incident Rate | < 10% | Incidents with same root cause repeating |
| Evidence Preservation Success | 100% | Percentage of incidents with complete forensic evidence |
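For a single incident, the detection and response times that feed MTTD/MTTR can be computed directly from the reconstructed timeline. A minimal Python sketch (the function name is ours; true MTTD/MTTR are means across many incidents, so per-incident values like these would be averaged over a reporting period):

```python
from datetime import datetime

def response_metrics(attack_start: str, first_alert: str, containment: str) -> dict:
    """Detection and response times (minutes) for one incident,
    from ISO-8601 timestamps on its reconstructed timeline."""
    t0, t1, t2 = (datetime.fromisoformat(t) for t in (attack_start, first_alert, containment))
    return {
        "mttd_min": (t1 - t0).total_seconds() / 60,  # attack start → first alert
        "mttr_min": (t2 - t1).total_seconds() / 60,  # first alert → containment
    }

m = response_metrics("2026-02-01T10:00", "2026-02-01T10:12", "2026-02-01T10:35")
print(m)  # → {'mttd_min': 12.0, 'mttr_min': 23.0}, both inside the targets above
```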
Azure DevOps Integration:
- Automated work item creation for each improvement recommendation
- Tracking of remediation progress with burndown charts
- Integration with the security roadmap for strategic planning
Architecture Patterns
Pattern 1: Automated Response with Human Oversight (SOAR)
Scenario: High-volume alerts require fast automated containment, but critical decisions need human validation.
Architecture:
Microsoft Sentinel (SIEM)
→ Analytics Rules (AI-specific threat detection)
→ Automated Playbook (Logic Apps)
→ Containment Actions (automated: API block, rate limit)
→ Approval Workflow (Microsoft Teams Adaptive Card)
→ Human Decision (approve/reject/escalate)
→ Final Actions (VM isolation, model rollback)
→ Ticket Creation (Azure DevOps / ServiceNow)
Advantages:
- ⚡ Fast automated containment for well-known threats (seconds)
- 🛡️ Human oversight for business-critical decisions
- 📊 Complete audit trail with approval history
Disadvantages:
- ⏱️ Approval delays can give the attacker a window of opportunity
- 🧑💼 Requires 24/7 on-call human responders
- 💸 Logic Apps execution costs at high alert volume
Best practices:
- Pre-approve low-risk automated actions (API rate limiting)
- Timeout-based auto-approval for critical incidents (ransomware)
- Multi-factor approval for production model deletion
Pattern 2: Defense-in-Depth Forensics (Multi-Layer Evidence Collection)
Scenario: AI incidents require correlating data from the ML layer, the infrastructure layer, and the application layer.
Architecture:
Layer 1: ML Observability (Azure ML monitoring, model drift detection)
Layer 2: Application Layer (API Gateway logs, Application Insights traces)
Layer 3: Infrastructure (NSG flow logs, VM diagnostics, Azure Activity Logs)
Layer 4: Identity (Entra ID sign-in/audit logs, PIM activation logs)
Layer 5: Network (Network Watcher packet capture, ExpressRoute monitoring)
All layers → Azure Log Analytics → Microsoft Sentinel (unified investigation graph)
Advantages:
- 🔍 Complete attack visibility across all layers
- 🧩 Entity correlation (user → device → model → data)
- 📈 Timeline reconstruction med cross-layer event correlation
Disadvantages:
- 💾 Massive storage costs for comprehensive logging
- 🔧 Complex query-building for cross-layer investigation (KQL expertise required)
- ⚠️ Signal overload without proper alert tuning
Best practices:
- Tiered logging retention (hot: 30 days, warm: 90 days, cold: 1 year for compliance)
- Pre-built KQL queries for common AI incident scenarios
- Entity behavior analytics (UEBA) for automatic anomaly surfacing
Pattern 3: Immutable Infrastructure Response (Cattle, Not Pets)
Scenario: A suspected compromise calls for full system replacement rather than cleanup.
Architecture:
Detection → Incident Declared → Automated Actions:
1. Snapshot compromised resource (Azure VM snapshot / Container image save)
2. Deploy clean replacement from known-good image (Infrastructure-as-Code)
3. Redirect traffic via Azure Front Door / Traffic Manager
4. Forensic analysis on the isolated snapshot
5. Destroy the compromised resource after evidence collection
Advantages:
- 🚀 Fastest recovery time (minutes vs. hours of cleanup)
- 🛡️ Eliminates persistence risk (no hidden backdoors survive)
- 🔬 Pristine forensic environment (no contamination during analysis)
Disadvantages:
- 💸 Requires mature IaC practice and automated deployment pipelines
- 🗂️ Stateful data recovery complexity (databases, ML model state)
- 📋 May lose short-term data not committed to persistent storage
Best practices:
- Git-backed IaC for all infrastructure (Terraform/Bicep)
- Continuous backup of stateful components (Azure Backup, geo-redundant storage)
- Blue-green deployment for zero-downtime model replacement
Decision Guidance
Severity Assessment for AI Incidents
| Factor | Critical | High | Medium | Low |
|---|---|---|---|---|
| Data Exposure | PII/PHI breached | Proprietary training data accessed | Internal test data exposed | No sensitive data |
| Model Impact | Production model poisoned | Model theft confirmed | Model drift detected | Performance degradation |
| Service Availability | Complete service outage | Degraded performance | Intermittent errors | No user impact |
| Regulatory Implications | GDPR/HIPAA breach (72h notification) | PCI-DSS incident | Internal audit finding | No compliance impact |
| Attack Sophistication | Nation-state APT indicators | Organized crime patterns | Opportunistic attack | Script kiddie |
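One way to operationalize the severity table: rate each factor on the four-level scale, then let the overall incident severity be the worst factor. A minimal Python sketch (the rating keys and aggregation rule are illustrative choices, not from any standard):

```python
SEVERITY_ORDER = ["Low", "Medium", "High", "Critical"]

def overall_severity(factor_ratings: dict[str, str]) -> str:
    """Overall incident severity = the worst rating across the assessed factors."""
    return max(factor_ratings.values(), key=SEVERITY_ORDER.index)

incident = {
    "data_exposure": "Low",
    "model_impact": "High",        # model theft confirmed
    "service_availability": "Medium",
    "regulatory": "Critical",      # GDPR breach, 72h notification clock running
    "sophistication": "Medium",
}
print(overall_severity(incident))  # → Critical
```

Taking the maximum is deliberately conservative: one Critical factor (here the regulatory one) is enough to escalate the whole incident.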
Decision Tree: To Contain or Not To Contain?
Incident Detected
├─ Is it affecting production models?
│ ├─ YES → Immediate containment (isolate endpoint)
│ └─ NO → Continue to next check
│
├─ Is sensitive data at risk?
│ ├─ YES → Immediate containment (revoke access)
│ └─ NO → Continue to next check
│
├─ Is attack still active?
│ ├─ YES → Immediate containment (block attacker)
│ └─ NO → Forensic analysis first (don't contaminate evidence)
│
└─ Is containment reversible?
├─ YES → Contain and investigate
└─ NO → Seek approval before action (executive escalation)
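The decision tree above can be encoded directly so a playbook evaluates it consistently under pressure. A hedged Python sketch (the function name and action strings are ours, not from any Azure SDK):

```python
def containment_decision(affects_production: bool, sensitive_data_at_risk: bool,
                         attack_active: bool, reversible: bool) -> str:
    """Walk the containment decision tree and return the recommended action."""
    if affects_production:
        return "immediate containment: isolate endpoint"
    if sensitive_data_at_risk:
        return "immediate containment: revoke access"
    if attack_active:
        return "immediate containment: block attacker"
    if reversible:
        return "contain and investigate"
    return "escalate: seek executive approval before action"

# Inactive attack, no production or data impact, reversible containment:
print(containment_decision(False, False, False, True))  # → contain and investigate
```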
Common Mistakes
- Premature Evidence Destruction: Deleting logs or snapshots before forensic analysis is complete (Instead: always preserve first, analyze later)
- Over-Containment: Full production shutdown without weighing business impact (Instead: graded containment based on threat severity)
- Under-Notification: Failing to alert legal/compliance teams during a data breach (Instead: always notify stakeholders early)
- Ignoring the AI Supply Chain: Not checking third-party model providers when a backdoor is suspected (Instead: include an MLOps supply chain audit)
- Manual Response Only: No automated playbooks for well-known AI threats (Instead: automate repetitive tasks, reserve humans for complex decisions)
Red Flags (Immediate Escalation Required)
- 🚨 Model accuracy drop > 20% in production → Suspect data poisoning or adversarial attack
- 🚨 Unusual query patterns with 100% confidence targeting specific outputs → Model inversion attempt
- 🚨 API keys accessed from unknown geography → Credential theft, potential model theft in progress
- 🚨 Training pipeline triggered outside maintenance window → Unauthorized model retraining (possible backdoor injection)
- 🚨 Mass export of training data to external storage → Data exfiltration, insider threat
- 🚨 Prompt injection signatures detected in production logs → Active jailbreak attempt, potential service abuse
Integration with the Microsoft Stack
Azure-Native Incident Response Stack
| Capability | Azure Service | Key Feature for AI Incidents |
|---|---|---|
| Threat Detection | Microsoft Defender for AI Services | AI-specific threat patterns (MITRE ATLAS) |
| SIEM/SOAR | Microsoft Sentinel | Unified incident management, automated playbooks |
| XDR | Microsoft Defender XDR | Cross-platform signal correlation (M365, Azure, endpoints) |
| Forensics | Azure Monitor + Log Analytics | KQL-based investigation, 30-day hot retention |
| Evidence Preservation | Azure Blob Immutable Storage | Legal hold, time-based retention policies (6 years HIPAA) |
| Identity Response | Microsoft Entra ID + PIM | Conditional Access, automated account suspension |
| Network Isolation | Azure Firewall + NSG | Automated rule deployment via Logic Apps |
| Model Governance | Azure ML + Purview | Model lineage tracking, data classification |
Sample Integration: Sentinel Playbook for AI Model Poisoning
Trigger: Azure ML model drift alert (accuracy drop detected)
Automated Actions:
- Gather Context (HTTP action to Azure ML REST API for model metrics)
- Create Sentinel Incident (severity: High, type: Data Poisoning Suspected)
- Notify Stakeholders (Microsoft Teams adaptive card to ML engineers + security team)
- Isolate Model (Azure ML endpoint deactivation via ARM API)
- Snapshot Evidence (Azure Storage copy of model artifact to forensic container)
- Approval Workflow (Wait for ML engineer validation: false positive or genuine attack?)
- Rollback or Investigate (if genuine: rollback to previous model version + forensic deep-dive)
- Create Work Item (Azure DevOps task for root cause analysis + remediation)
Logic Apps Connector Usage:
- Azure Monitor (trigger condition)
- Azure ML (model metadata retrieval)
- Microsoft Sentinel (incident creation)
- Microsoft Teams (notifications)
- Azure Resource Manager (infrastructure actions)
- Azure DevOps (work tracking)
Microsoft Security Contact Configuration
Critical Step: Configure security contacts in Microsoft Defender for Cloud to receive incident notifications from Microsoft:
# PowerShell example
Set-AzSecurityContact -Name "default1" `
-Email "security-team@organization.com" `
-Phone "+47-555-12345" `
-AlertAdmin `
-NotifyOnAlert
Why It Matters: Microsoft will notify you directly about platform-level vulnerabilities or detected compromise patterns that require a coordinated response.
Microsoft Collaboration Procedures
When to Engage Microsoft Support:
- Azure platform-level incidents (service faults that affect security)
- Suspected compromise of Azure infrastructure itself (not just customer workloads)
- Zero-day vulnerabilities discovered in Azure AI Services
- Large-scale coordinated attacks affecting multiple tenants
Escalation Path:
- Azure Support Ticket (Severity A for active security incidents)
- Microsoft Security Response Center (MSRC) for vulnerability disclosure
- Azure Security Response Team for platform-level compromise coordination
- Microsoft Account Team (TAM/CSA) for strategic incident response planning
Public Sector (Norway)
Duty to Notify Datatilsynet (GDPR)
When to notify?
- Personal data breaches likely to result in a risk to the rights and freedoms of natural persons (GDPR Art. 33; the stricter "high risk" threshold triggers notification of the affected individuals as well)
- AI scenario: A model inversion attack that exposes training data containing personal data
Deadline: 72 hours from when the organization became aware of the breach
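The 72-hour clock is easy to miscount under incident stress. A trivial Python helper (illustrative, assuming ISO-8601 timestamps) that a notification playbook could use to surface the hard deadline:

```python
from datetime import datetime, timedelta

def datatilsynet_deadline(became_aware: str) -> str:
    """GDPR Art. 33: the notification deadline is 72 hours after the
    controller became aware of the breach."""
    return (datetime.fromisoformat(became_aware) + timedelta(hours=72)).isoformat()

print(datatilsynet_deadline("2026-02-01T09:30"))  # → 2026-02-04T09:30:00
```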
What must be reported:
- Description of the breach and its scope (number of affected persons, categories of personal data)
- Contact details for the data protection officer
- Likely consequences of the breach
- Measures taken or proposed to address the breach
Azure support:
- Microsoft Purview Compliance Manager — GDPR assessment templates and incident tracking
- Logic Apps automated notification — Pre-approved templates for reporting to Datatilsynet
- Azure Policy compliance reports — Documentation of security controls for regulatory audit
Reference: Datatilsynet — Meldeplikt ved personopplysningsbrudd (duty to report personal data breaches)
Notification to NSM (Norwegian National Security Authority)
When should NSM be notified?
- Serious ICT security incidents in critical infrastructure or at digital service providers
- AI scenario: Large-scale data poisoning attacks against AI systems serving critical societal functions (health, transport, finance)
Deadline: Without undue delay after the incident is discovered
What must be reported:
- Incident type and scope
- When the incident occurred and was discovered
- Consequences for service operations
- Measures taken
Reference: NSM — Varsle sikkerhetshendelser (report security incidents)
Sikkerhetsloven §§ 2-4 (Security-Threatening Incidents)
Scope: State and municipal bodies, as well as private enterprises that handle classified information
What must be reported: Security-threatening incidents that could harm national security interests
AI relevance: Model theft or data exfiltration of classified information used in AI training data
Reference: Lovdata — Sikkerhetsloven (the Norwegian Security Act)
Utredningsinstruksen (KMD, the official study instructions)
Relevance for AI projects: All central-government studies must include an assessment of security risk
Incident Response Implications:
- Lessons learned from AI incidents must feed into future studies
- Root cause analysis must be documented in a structured way
- Security control gaps must be reported to decision-makers
Reference: Regjeringen — Utredningsinstruksen
Norwegian Compliance Checklist for AI Incident Response
- GDPR: Notify Datatilsynet within 72 hours of a personal data breach
- NSM: Notify without undue delay for serious ICT incidents (critical infrastructure)
- Sikkerhetsloven: Report security-threatening incidents to NSM (classified information)
- Arkivloven: Preserve incident documentation for at least 10 years (state bodies)
- Forvaltningsloven: Ensure sound case handling in incident response (documentation requirements)
- Anskaffelsesforskriften: Assess supplier liability for third-party AI services
- Personopplysningsloven: Conduct a DPIA before restoring AI services whose risk profile has changed
Cost and Licensing
Azure Costs for Incident Response Infrastructure
| Service | Typical Monthly Cost (NOK) | Notes |
|---|---|---|
| Microsoft Sentinel | 15 000 - 150 000 | Pay-per-GB ingested (approx. 20 NOK/GB), 100 GB/day = ~60k/month |
| Microsoft Defender for Cloud | 1 500 - 15 000 per server | Defender for Servers Plan 2: ~150 NOK/server/month |
| Azure Monitor Log Analytics | 5 000 - 50 000 | Pay-per-GB retention, first 5 GB/day free, then ~7 NOK/GB |
| Azure Storage (Immutable) | 500 - 5 000 | Forensic evidence storage, LRS ~0.20 NOK/GB/month |
| Logic Apps (Playbooks) | 1 000 - 10 000 | Standard tier ~0.50 NOK per 1000 actions |
| Microsoft Defender XDR | Included in M365 E5 | Or add-on ~35 NOK/user/month |
Total Estimated Range: 23 000 - 230 000 NOK/month (depending on scale and log volume)
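The Sentinel row above can be sanity-checked with simple arithmetic. An illustrative Python sketch using the table's example rate (~20 NOK/GB is the figure quoted above, not an official price, and the function name is ours):

```python
def sentinel_monthly_cost_nok(gb_per_day: float, nok_per_gb: float = 20.0, days: int = 30) -> float:
    """Rough Sentinel ingestion cost: pay-per-GB rate × daily volume × days."""
    return gb_per_day * nok_per_gb * days

print(sentinel_monthly_cost_nok(100))  # → 60000.0, matching the ~60k/month estimate
```

Commitment tiers (see the optimization tips below the licensing table) would reduce the effective per-GB rate at higher volumes.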
Licensing Requirements
| Capability | Required License | Included in |
|---|---|---|
| Microsoft Sentinel | Sentinel standalone | Or Microsoft 365 E5 Security |
| Defender for Cloud | Pay-per-resource | Or Microsoft Defender for Cloud (standalone) |
| Defender XDR | M365 E5 Security or E5 | Includes Defender for Endpoint, Identity, M365 |
| Microsoft Entra ID P2 | Microsoft Entra ID P2 | Required for PIM, Conditional Access risk-based policies |
| Azure Monitor | Pay-per-GB | No upfront license, consumption-based |
| Azure Automation | Free for first 500 minutes/month | Then ~0.015 NOK/minute |
Optimization Tips:
- Commitment Tiers: Microsoft Sentinel offers commitment tiers (100/200/300 GB/day) with 15-50% discounts
- Data Retention: Use tiered storage (Archive to Azure Blob Cold after 90 days) for compliance retention
- Alert Tuning: Reduce false positives → lower analyst time costs (often larger than tool costs)
- Shared Sentinel Workspace: Multi-tenant scenario for managed service providers
TCO Consideration: Build vs. Buy
DIY Incident Response (open-source SIEM + manual playbooks):
- Lower tool costs (~50% of Azure stack)
- Higher operational costs (3-5 FTEs for 24/7 SOC)
- Longer MTTD/MTTR (no native Azure integration)
Azure-Native Stack:
- Higher tool costs (as above)
- Lower operational costs (automation reduces manual work by 60-80%)
- Faster MTTD/MTTR (native integration, XDR correlation)
Recommendation for the public sector: Azure-native stack for critical systems (health, finance), hybrid approach for less-critical workloads.
For the Architect (Cosmo)
Questions to Ask the Client
- Incident Response Maturity: "Do you have existing incident response plans, or are we building from scratch? Which systems are critical enough to require 24/7 monitoring?"
  Tip (per CAF Secure AI 2026-04): Use Azure Resource Graph to build a complete AI asset inventory as the basis for prioritizing monitoring scope. (Verified MCP 2026-04)
- Compliance Requirements: "Which regulatory requirements apply? GDPR (Datatilsynet, 72h)? NSM notification? Sikkerhetsloven? This shapes notification workflows and evidence retention."
- Current Detection Capabilities: "Which security tools are already in use? SIEM? EDR? Can we integrate, or do we need to deploy entirely new tooling?"
- AI-Specific Risks: "Which AI threats worry you most: data poisoning, model theft, prompt injection? This determines which detection rules we prioritize."
- Team Structure: "Who are the incident responders? Do you have an in-house SOC, or should we plan for managed detection and response (MDR)?"
- Automation Appetite: "How comfortable are you with automated containment? Can we auto-block API keys, or is human approval always required?"
- Budget and Licensing: "What is the budget for security tooling? Do you already have Microsoft 365 E5? This determines whether we can use Defender XDR or must build custom."
- Evidence Retention: "How long must you preserve incident evidence? 1 year? 6 years (HIPAA)? 10 years (Arkivloven)? This drives storage costs."
- Training and Tabletop Exercises: "When did the team last practice incident response? Do we need tabletop exercises for AI-specific scenarios?"
- Third-Party Dependencies: "Do you use third-party AI models (OpenAI, Hugging Face)? How do we handle incidents in vendor-supplied models?"
Pitfalls to Avoid
- "One-Size-Fits-All Playbooks": AI incidents require specialized playbooks (data poisoning ≠ ransomware response). DO NOT reuse traditional cybersecurity playbooks without AI-specific adaptation.
- "Alert Overload Day 1": DO NOT enable all Sentinel analytics rules at once without tuning. Start with high-fidelity AI-specific rules and tune for 2-4 weeks before adding broader coverage.
- "Forensics as an Afterthought": DO NOT implement detection without simultaneously provisioning immutable storage for evidence. Legal hold must be in place BEFORE the first incident.
- "Ignoring the ML Supply Chain": DO NOT forget to audit third-party models and training data providers. Backdoor attacks often arrive via the supply chain.
- "Manual-Only Response at Scale": DO NOT rely on manual procedures alone if you run more than 10 AI models in production. Automated playbooks are essential for scalability.
- "No Legal/Compliance Involvement": DO NOT design incident response without input from legal and compliance teams. The GDPR 72-hour notification must be baked in from the start.
- "Forgetting Cloud Shared Responsibility": DO NOT assume Microsoft handles all incident response. You are responsible for data, models, and applications; Microsoft for the platform. Clarify who does what.
- "Testing Only Happy Paths": DO NOT just test that playbooks run without errors. Also test edge cases: What if Azure Logic Apps is down? What if Key Vault is unavailable?
Recommendations for Different Scenarios
Scenario A: Startup with 1-2 ML models (pre-product/market fit)
- Recommendation: Microsoft Defender for Cloud (basic) + Azure Monitor alerts, manual response procedures, no SIEM yet
- Rationale: Keep costs low, focus on core product development, scale security as revenue arrives
- Investment: ~5 000 NOK/month
Scenario B: Scale-up with 10+ production models (Series A/B funded)
- Recommendation: Microsoft Sentinel + Defender XDR, automated playbooks for common threats, 24/7 on-call rotation (not a dedicated SOC)
- Rationale: A growing attack surface demands automation, but an in-house SOC is still too expensive
- Investment: ~50 000 NOK/month
Scenario C: Enterprise with critical AI infrastructure (finance, health, public sector)
- Recommendation: Full Azure-native incident response stack (Sentinel, Defender XDR, immutable storage, 24/7 SOC), quarterly red team exercises
- Rationale: Regulatory requirements, high business impact of downtime, zero tolerance for data breaches
- Investment: ~200 000 NOK/month + 3-5 FTEs (SOC team)
Scenario D: Public-sector body with a limited budget (municipality, smaller state agency)
- Recommendation: Shared Sentinel workspace (multi-tenant), Microsoft 365 E5 Security (includes Defender XDR), outsourced SOC (managed services)
- Rationale: Compliance-driven (NSM, Datatilsynet), cost-conscious, benefits from shared infrastructure
- Investment: ~30 000 NOK/month (tools) + managed SOC contract
Sources and Verification
Microsoft Learn Documentation (Verified via MCP)
Incident Response Framework:
- Security Control: Incident Response — NIST-aligned incident response controls with Azure implementation guidance
- Architecture Strategies for Security Incident Response — Design patterns for Azure-native incident response
- Microsoft Security Incident Management — Microsoft's internal federated security response model
AI-Specific Security:
- Secure AI — Detect AI Security Threats — AI-focused threat detection and incident response procedures. Covers: AI asset inventory (Azure Resource Graph), AI communication channel security (Managed Identities, Virtual Networks, APIM for MCP), data boundary definition (Microsoft Purview), DLP (Purview DLP + content filtering), and AI-specific incident response (Defender for Cloud AI posture management). (Verified MCP 2026-04)
- Threat Modeling AI/ML Systems — STRIDE + MITRE ATLAS mapping for AI threat landscape
- AI/ML Pivots to SDL Bug Bar — Severity classification for AI-specific threats (data poisoning, model inversion, etc.)
Azure Security Tools:
- Microsoft Sentinel Playbooks — Automated incident response orchestration
- Microsoft Defender for Cloud — Cloud-native threat detection and security posture management
- Azure Monitor Incident Investigation — Centralized logging and forensics platform
Evidence Preservation:
- Azure Immutable Storage for Blobs — Legal hold and time-based retention policies
- Azure VM Snapshots — Point-in-time forensic evidence capture
- Azure Backup Overview — Automated backup with long-term retention
Compliance and Regulatory Frameworks
Norwegian Regulations:
- GDPR: Datatilsynet — Meldeplikt ved personopplysningsbrudd
- NSM: NSM — Varsle sikkerhetshendelser
- Sikkerhetsloven: Lovdata — Lov om nasjonal sikkerhet
International Standards:
- NIST SP 800-61 Rev. 2: Computer Security Incident Handling Guide
- MITRE ATLAS: Adversarial Threat Landscape for AI Systems
- OWASP Top 10 for LLM: Generative AI Security Risks
Confidence Level
Verified (High Confidence) — All Azure-native tools, services, and incident response procedures are verified via Microsoft Learn MCP research (February 2026, re-verified April 2026). The CAF Secure AI document confirms: AI asset inventory via Azure Resource Graph, AI communication channel security (Managed Identities, Virtual Networks, APIM for MCP server endpoints), and Purview Insider Risk Management for prompt-based data exfiltration detection. Price estimates are based on official Azure pricing but may vary with currency fluctuation and regional pricing.
Baseline (Model Knowledge) — The general incident response framework (NIST SP 800-61), MITRE ATT&CK for ML, and best practices for forensics and chain of custody are based on industry standards. Norwegian regulatory requirements are verified via official sources (Datatilsynet, NSM, Lovdata).
Note: AI incident response is a rapidly evolving field. New attack methods (e.g. multimodal adversarial attacks, federated learning poisoning) may require adjusted detection rules and playbooks. Quarterly reviews of the threat landscape and tool capabilities are recommended.
For Cosmo: This is a complete starting point for discussing incident response strategy with clients. Start with a maturity assessment, map the client to one of the four scenarios (startup/scale-up/enterprise/public sector), and tailor playbooks to their AI-specific risk profile. Remember: incident response is not "set it and forget it"; continuous tuning and tabletop exercises are essential to keep the organization ready.