Last batch in HIGH bucket. Combined with 82bd665 (critical 9 + high batch 1, 21 files), this finishes the critical+high KB-refresh sweep for v1.12.0.
Substantive edits (3 files):
- security-copilot-integration.md: M365 E5/E7 inclusion auto-provisioning, agents-first landing experience, role-based onboarding (Verified MCP 2026-05)
- entra-agent-id-zero-trust.md: Ignite 2025 additions: Conditional Access for agents, Risky agents, 3 new Agent ID roles, Microsoft Agent Identity Platform, Copilot Studio blueprint principal
- ai-center-of-excellence-setup.md: New "Oppdateringer 2026-05" section: three-role model (platform/workload/CoE), agent skill areas, centralized-to-advisory operating model
Date-bump (20 files):
- HIGH-bucket files where the MCP fetch showed only cosmetic changes (replicating the lesson from the previous session)
Tests: validate-plugin.sh PASS 219.
Responsible AI Policy Development - Creating Organizational Standards
Last updated: 2026-05 | Status: GA | Category: Responsible AI & Governance
Introduction
Responsible AI policies are the foundation for ethical, transparent, and accountable AI implementation across organizations. These policies translate abstract principles into concrete requirements that development teams can implement, and they ensure that AI systems operate in line with the organization's values, regulatory obligations, and ethical standards.
Without clear Responsible AI policies, organizations face significant risk: reputational damage from biased or harmful AI outputs, regulatory fines for non-compliance with emerging AI laws, and erosion of stakeholder trust that undermines AI adoption efforts.
The Microsoft Responsible AI Standard defines how organizations can integrate responsible AI into engineering teams, the AI development cycle, and tooling. The standard covers six domains with 14 goals intended to reduce AI risk and the resulting harms. Policy development must reflect these domains and translate them into operational guidelines.
Confidence: Verified (MCP microsoft-learn 2026-02)
Core Components
1. Responsible AI principles as the foundation
All organizational AI policies should build on established frameworks:
| Principle | Definition | Policy implication |
|---|---|---|
| Accountability | The organization is accountable for how the technology operates | Clear role definitions, approval processes, incident response procedures |
| Transparency | Openness about how AI systems are built and make decisions | Documentation requirements, user disclosure, explainable models |
| Fairness | AI systems should treat everyone fairly | Bias testing, impact assessments, regular audits |
| Reliability & Safety | Systems should operate as designed and resist misuse | Testing requirements, safety mitigations, red teaming |
| Privacy & Security | Protection of data and privacy | Data governance, encryption, access controls |
| Inclusiveness | Include the full spectrum of communities | Diverse training data, accessibility requirements |
Microsoft reference: The Microsoft Responsible AI Standard implements these principles through concrete requirements per domain. Example: the Privacy & Security domain requires teams to implement differential privacy, data minimization, and secure model deployment.
2. Governance structure
Effective policy enforcement requires a clear organizational structure:
┌─────────────────────────────────────────┐
│ Executive Sponsorship │
│ (CEO, CTO, Board Committee) │
└──────────────┬──────────────────────────┘
│
┌──────────────┴──────────────────────────┐
│ Responsible AI Council/CoE │
│ (Cross-functional: Legal, Security, │
│ Engineering, Policy, Product) │
└──────────────┬──────────────────────────┘
│
┌───────┴───────┐
│ │
┌──────┴──────┐ ┌─────┴──────┐
│ Research │ │ Engineering│
│ Team │ │ Teams │
│ │ │ │
│ Risk │ │ Policy │
│ Discovery │ │ Implement- │
│ │ │ ation │
└─────────────┘ └────────────┘
Key roles:
- AI Center of Excellence (CoE): Centralizes responsibility for governance, defines standards, provides consultative support (not a gatekeeper)
- Research Team: Performs risk discovery based on organizational guidelines, industry standards, laws, and red-team tactics
- Policy Team: Develops workload-specific policies, incorporating parent organization guidelines and regulatory requirements
- Engineering Team: Implements the policies in processes and deliverables, validates and tests for adherence
Office of Responsible AI (ORA) - Microsoft's model:
- Sets company-wide internal policies
- Defines governance structures
- Provides resources for adopting AI practices
- Reviews sensitive use cases
- Helps shape public policy on AI
3. Policy categories and content
A complete Responsible AI policy should cover:
| Policy area | Key content | Example requirements |
|---|---|---|
| Model Selection & Onboarding | Criteria for model selection, vetting process, approval procedures | "All models must be assessed against risk tolerance before onboarding. Sandbox testing required. The production catalog must be approved by the CoE." |
| Third-party Tools & Data | Vetting of external tools, data privacy standards, data quality requirements | "External datasets must undergo privacy review. A golden dataset shall be established for testing. Sensitive and public data shall be separated." |
| Model Maintenance & Monitoring | Retraining frequency, performance monitoring, drift detection | "High-risk models: quarterly retraining. Performance degradation triggers mandatory review." |
| Regulatory Compliance | Regional requirements, compliance frameworks, audit procedures | "GDPR compliance required for EU data. ISO/IEC 42001 audit annually. Data residency per region." |
| User Conduct | Acceptable use policies, misuse detection, feedback mechanisms | "AI must identify itself as AI. Users can report concerns. Misuse triggers automatic review." |
| Integration & Lifecycle | Integration security, transition planning, decommissioning | "AI workloads must have documented integration points. Rollback procedures mandatory. Sunset plans required." |
Confidence: Verified (MCP microsoft-learn, NIST AI RMF alignment)
Architecture Patterns
Pattern 1: Centralized Standards, Distributed Implementation
Problem: How do you balance consistency with freedom to innovate?
Solution: The CoE defines minimum standards; business units implement them with contextual flexibility.
Policy Lifecycle:
1. CoE develops the baseline policy → 2. BU adapts it to its domain →
3. Implementation in workflows → 4. Continuous monitoring →
5. Feedback to the CoE for policy evolution
Example (Microsoft Foundry):
- CoE defines: "All production AI agents must have content safety filters"
- BU1 (Customer Service): Implements strict filters for customer-facing chatbots
- BU2 (Internal HR): Implements moderate filters for employee assistance
- Both report filter effectiveness to the CoE quarterly (see the configuration sketch below)
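A minimal sketch of how this baseline-plus-override model could be expressed in configuration code. The policy fields, tier names, and the rule that content safety cannot be disabled are illustrative assumptions, not an official Microsoft or Foundry schema:

```python
from dataclasses import dataclass, replace

# Hypothetical policy model: field names ("content_filter_level", etc.) are
# illustrative assumptions, not a Microsoft schema.
@dataclass(frozen=True)
class AgentPolicy:
    content_filter_level: str      # "strict" | "moderate" | "permissive"
    content_safety_required: bool  # CoE minimum standard; BUs may not relax it
    report_to_coe_quarterly: bool

COE_BASELINE = AgentPolicy(
    content_filter_level="strict",
    content_safety_required=True,
    report_to_coe_quarterly=True,
)

def bu_policy(baseline: AgentPolicy, **overrides) -> AgentPolicy:
    """Let a business unit adapt the baseline without weakening hard minimums."""
    candidate = replace(baseline, **overrides)
    if not candidate.content_safety_required:
        raise ValueError("Content safety is a CoE minimum standard and cannot be disabled")
    return candidate

customer_service = bu_policy(COE_BASELINE)                               # keeps strict filters
internal_hr = bu_policy(COE_BASELINE, content_filter_level="moderate")  # contextual flexibility
```

The point of the sketch is the shape of the pattern: the baseline is owned centrally, overrides are explicit, and hard minimums are enforced in one place rather than re-implemented per business unit.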
Pattern 2: Checkpoint-based Governance
Problem: How do you ensure compliance without slowing development velocity?
Solution: Embed governance checkpoints at critical milestones in the AI development lifecycle.
| Lifecycle Stage | Checkpoint | Required Artifacts | Approval Authority |
|---|---|---|---|
| Ideation | Responsible AI Impact Assessment | Risk assessment, ethical considerations | Project Lead |
| Design | Architecture review | Data sources, model selection, integration points | CoE Representative |
| Development | Bias & Safety testing | Test results, mitigation strategies | Security + CoE |
| Pre-launch | Compliance sign-off | Regulatory checklist, transparency materials | Legal + CoE |
| Post-deployment | Quarterly audit | Performance metrics, incident reports | CoE |
Automation: Scanning tools for biased training data, inappropriate content generation, and privacy violations run continuously, in parallel with manual reviews (see the CI gate sketch below).
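A minimal sketch of a pre-launch checkpoint implemented as a CI gate. The metric names, result file format, and thresholds are illustrative assumptions; in practice the results would come from your evaluation tooling and the thresholds from the policy itself:

```python
import json
import sys

# Illustrative thresholds per risk tier; real values belong in policy, not code.
THRESHOLDS = {
    "critical": {"max_bias_gap": 0.02, "max_unsafe_rate": 0.001},
    "high":     {"max_bias_gap": 0.05, "max_unsafe_rate": 0.005},
    "medium":   {"max_bias_gap": 0.10, "max_unsafe_rate": 0.01},
}

def gate(results_path: str, risk_tier: str) -> int:
    """Fail the pipeline (non-zero exit) if evaluation results breach policy thresholds."""
    with open(results_path) as f:
        results = json.load(f)  # expected keys are assumptions: {"bias_gap": ..., "unsafe_rate": ...}
    limits = THRESHOLDS[risk_tier]
    failures = []
    if results["bias_gap"] > limits["max_bias_gap"]:
        failures.append(f"bias_gap {results['bias_gap']} > {limits['max_bias_gap']}")
    if results["unsafe_rate"] > limits["max_unsafe_rate"]:
        failures.append(f"unsafe_rate {results['unsafe_rate']} > {limits['max_unsafe_rate']}")
    for failure in failures:
        print(f"POLICY CHECKPOINT FAILED: {failure}")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1], sys.argv[2]))
```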
Pattern 3: Risk-tiered Policy Enforcement
Problem: Not all AI systems require the same level of governance.
Solution: Classify AI workloads by risk and assign an enforcement level (a classification sketch follows below).
| Risk Tier | Characteristics | Policy Enforcement | Example Systems |
|---|---|---|---|
| Critical | Customer-facing, consequential decisions, regulated domains | Full CoE review, external audit, mandatory red teaming | Credit scoring, medical diagnosis |
| High | Internal decisions, sensitive data, significant impact | CoE sign-off, internal audit, bias testing | HR recruitment, employee performance |
| Medium | Automation, limited impact, supervised operation | Automated checks, spot audits | Document classification, translation |
| Low | Personal productivity, sandboxed, no external impact | Self-certification, annual review | Code completion, personal assistants |
Microsoft Enterprise AI Services Code of Conduct: Defines mandatory requirements for all applications built with Microsoft AI Services, including fraud detection, input/output controls, AI disclosure, watermarking for video, testing, feedback channels, and human oversight.
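A minimal sketch of how the tiering in the table above could be encoded so other tooling (CI gates, review routing) can consume it. The workload attributes and the mapping rules are illustrative assumptions, not a Microsoft classification scheme:

```python
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    customer_facing: bool
    consequential_decisions: bool  # credit, health, employment, or similar effects
    regulated_domain: bool
    sensitive_data: bool
    external_impact: bool

def risk_tier(w: WorkloadProfile) -> str:
    """Map a workload profile to the enforcement tiers in the table above."""
    if w.regulated_domain or (w.customer_facing and w.consequential_decisions):
        return "critical"  # full CoE review, external audit, mandatory red teaming
    if w.consequential_decisions or w.sensitive_data:
        return "high"      # CoE sign-off, internal audit, bias testing
    if w.external_impact:
        return "medium"    # automated checks, spot audits
    return "low"           # self-certification, annual review

# Example: an internal HR screening assistant lands in the "high" tier.
print(risk_tier(WorkloadProfile(False, True, False, True, False)))
```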
Pattern 4: Ethical by Design
Problem: How do you ensure ethical considerations from day one?
Solution: Integrate ethical assessments into development tools and workflows.
Toolkit elements:
- AI Impact Assessment Template: Structured evaluation of fairness, privacy, safety, and inclusiveness
- Bias Testing Checklist: Per Microsoft Responsible AI Dashboard (Azure Machine Learning)
- Transparency Feature Library: Code templates for explainability, audit logging, user disclosure
- Training Programs: Mandatory for developers, covering both the technical implementation and the "why" behind the requirements
Microsoft tools:
- Responsible AI Dashboard (Azure ML): Fairness assessment, bias detection, model explainability
- Azure AI Foundry evaluation tools: Safety assessment, hallucination detection, bias pre-deployment
- Azure AI Content Safety: Harmful text/image filtering
- PyRIT (Python Risk Identification Toolkit): Red teaming for adversarial scenarios
Confidence: Verified (MCP microsoft-learn)
Decision Guidance
Decision Tree: When do you need new policies?
Start: New AI initiative or capability?
│
├─ Yes → Is it covered by an existing policy?
│        │
│        ├─ Yes → Apply existing policy + document deviation if needed
│        │
│        └─ No → Is the risk assessment high or medium?
│                 │
│                 ├─ Yes → Develop new policy (full CoE process)
│                 │
│                 └─ No → Extend existing policy (lightweight review)
│
└─ No → Regular policy review cycle (quarterly high-risk, annual low-risk)
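The same decision logic, encoded as a small function so it can be embedded in intake tooling. The parameter names and the string outcomes are illustrative; they simply mirror the tree above:

```python
def policy_action(new_initiative: bool, covered_by_existing: bool, risk: str) -> str:
    """Encode the decision tree above; `risk` is the outcome of the initiative's risk assessment."""
    if not new_initiative:
        return "Regular policy review cycle (quarterly high-risk, annual low-risk)"
    if covered_by_existing:
        return "Apply existing policy + document deviation if needed"
    if risk in ("high", "medium"):
        return "Develop new policy (full CoE process)"
    return "Extend existing policy (lightweight review)"

# Example: an uncovered, medium-risk initiative triggers full policy development.
print(policy_action(new_initiative=True, covered_by_existing=False, risk="medium"))
```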
Choosing a Framework
| Scenario | Framework recommendation | Rationale |
|---|---|---|
| New to AI governance | Microsoft Responsible AI Standard + NIST AI RMF | Comprehensive, aligned with enterprise IT practices, regulatory recognition |
| Regulated industry (finance, healthcare) | NIST AI RMF + ISO/IEC 42001 | Audit-ready, compliance-focused, industry standard |
| EU operations | EU AI Act compliance framework + Microsoft Standard | Regulatory requirement, risk classification alignment |
| Public sector (Norway) | NIST AI RMF + Microsoft Standard + national guidelines | Public trust requirement, transparency emphasis |
| Rapid deployment | Microsoft Foundry built-in governance + lightweight internal policy | Accelerates time-to-value, reduces policy overhead |
Policy Enforcement Strategy
| Enforcement Method | When to Use | Microsoft Tools |
|---|---|---|
| Automated | Repeatable checks (bias, content safety, compliance rules) | Azure Policy, Microsoft Purview, built-in filters |
| Manual | Complex scenarios requiring judgment, high-risk approvals | CoE reviews, ethics committee sign-offs |
| Hybrid | Most enterprise scenarios | Automated screening + human review for flagged cases |
Azure Policy Initiatives for AI:
- Azure OpenAI: Guardrails initiative
- Azure Machine Learning: ML guardrails
- Azure AI Search: Cognitive Services guardrails
- Azure AI Bot Service: Bot guardrails
Confidence: Verified (MCP microsoft-learn)
Integration with the Microsoft Stack
Azure AI Foundry
Built-in Governance Capabilities:
| Feature | Policy Support | Configuration |
|---|---|---|
| Content Safety | Harmful content filtering (text, image, multimodal) | Azure AI Content Safety - configurable severity thresholds |
| Evaluation Tools | Pre-deployment safety, hallucination, and bias testing | Foundry evaluation SDK - integrates into CI/CD |
| Model Registry | Versioning, approval workflows, provenance tracking | Azure ML Model Registry - RBAC-controlled |
| Monitoring | Model drift, performance degradation, quality metrics | Foundry Agent Service metrics - alert rules |
| Data Governance | Data lineage, sensitivity labels, DLP policies | Microsoft Purview integration |
Policy Implementation Example (Foundry):
# Policy: All production models must have content safety filters
Implementation:
- Step 1: Enable Azure AI Content Safety service
- Step 2: Configure content filters per risk tier (strict/moderate/permissive)
- Step 3: Integrate filter API in application code
- Step 4: Log all filter events to Azure Monitor
- Step 5: Alert on high-severity content attempts
- Step 6: Quarterly review of filter effectiveness
Enforcement:
- Azure Policy: Deny deployment without content safety integration
- CI/CD gate: Require content safety tests to pass
- Runtime: Automatic filtering + logging
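A minimal sketch of Steps 3-5 using the Azure AI Content Safety Python SDK (`azure-ai-contentsafety`). The endpoint, key handling, severity threshold, and logging destination are illustrative assumptions; the threshold should come from the risk-tier policy, and production logging would flow to Azure Monitor rather than a local logger:

```python
import logging
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

logger = logging.getLogger("content_safety_policy")  # route to Azure Monitor via your logging pipeline

# Placeholder endpoint/key: use your Content Safety resource and a secret store in practice.
client = ContentSafetyClient(
    endpoint="https://<your-content-safety-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<api-key-from-key-vault>"),
)

SEVERITY_THRESHOLD = 2  # assumed "strict" tier threshold; set per risk tier in policy

def check_text(text: str) -> bool:
    """Return True if the text passes the content safety filter; log every decision (Step 4)."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    violations = [
        (item.category, item.severity)
        for item in result.categories_analysis
        if item.severity is not None and item.severity >= SEVERITY_THRESHOLD
    ]
    if violations:
        logger.warning("Blocked content: %s", violations)  # Step 5: alert on high-severity attempts
        return False
    logger.info("Content passed safety filter")
    return True
```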
Copilot Studio
Governance Features:
- Data location controls: Respect data sovereignty requirements
- Compliance certifications: ISO, SOC, HIPAA
- Analytics dashboard: Monitor token usage, identify high-cost skills
- Security & governance best practices: Copilot Studio guidance
Policy Implementation Example (Copilot Studio):
Policy: Customer service copilots must comply with GDPR
Implementation:
- Data location: EU regions only
- Data retention: 30 days max for conversation logs
- User rights: Support deletion requests via API
- Transparency: Copilot identifies as AI in first message
- Audit: Log all data access events to Azure Monitor
Enforcement:
- Configuration: Set data location to EU in Copilot Studio settings
- Code: Implement deletion API in backend
- Testing: Verify GDPR compliance in pre-production
- Monitoring: Alert on data location policy violations
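One possible shape for the "support deletion requests via API" requirement, sketched with FastAPI. The route, the in-memory store, and the retention job are entirely hypothetical stand-ins for whatever backend stores your conversation logs; Copilot Studio's own conversation data is governed by its data retention settings:

```python
from datetime import datetime, timedelta, timezone
from fastapi import FastAPI, HTTPException

app = FastAPI()

# Hypothetical in-memory store standing in for your own conversation-log storage.
conversation_logs: dict[str, list[dict]] = {}

RETENTION = timedelta(days=30)  # policy: 30 days max for conversation logs

@app.delete("/users/{user_id}/conversations")
def delete_user_conversations(user_id: str) -> dict:
    """GDPR right to erasure: remove all stored conversation logs for a user."""
    if user_id not in conversation_logs:
        raise HTTPException(status_code=404, detail="No conversation data for this user")
    removed = len(conversation_logs.pop(user_id))
    # In production, also emit an audit event to Azure Monitor here.
    return {"user_id": user_id, "deleted_conversations": removed}

def purge_expired() -> None:
    """Retention job: drop conversation entries older than the 30-day policy limit."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    for user_id, logs in list(conversation_logs.items()):
        conversation_logs[user_id] = [e for e in logs if e["timestamp"] >= cutoff]
        if not conversation_logs[user_id]:
            del conversation_logs[user_id]
```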
Microsoft Purview
AI Governance Capabilities:
- Compliance Manager: Translate regulations (EU AI Act, etc.) into controls, assess compliance posture
- Purview APIs: Integrate compliance automation into agent workflows
- Data classification: Sensitivity labels, data loss prevention
- Unified governance: Catalog AI-related data assets
Integration Pattern:
AI Workload → Microsoft Purview → Compliance Dashboard
│ │ │
│ ├─ Data classification
│ ├─ Policy enforcement
│ └─ Audit logging
│
└─ Purview API → Automated compliance checks in CI/CD
Policy Enforcement with Azure Policy
Example: Restrict AI model deployments to an approved registry (simplified, illustrative definition; a real Azure Policy definition expresses this as a policyRule with an if/then block)
{
"policyName": "Require approved AI models",
"effect": "Deny",
"scope": "Production subscriptions",
"rule": {
"allowedPublishers": ["Microsoft", "Internal CoE"],
"approvedAssetIds": ["model-id-1", "model-id-2"],
"requireSecurityScan": true,
"requireCoeApproval": true
}
}
Enforcement flow:
- Developer attempts model deployment
- Azure Policy evaluates against approved list
- If not approved: Deployment blocked, alert sent to CoE
- If approved: Deployment proceeds, logged for audit
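A small sketch of how the CoE could verify, as part of an audit, that the deny policy is actually assigned in the production subscription, using the azure-identity and azure-mgmt-resource packages. The subscription ID placeholder and the policy display name are assumptions taken from the example above:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

SUBSCRIPTION_ID = "<production-subscription-id>"   # placeholder
EXPECTED_POLICY = "Require approved AI models"     # assumed display name from the example above

def verify_policy_assigned() -> bool:
    """Audit check: confirm the deny policy from the example is assigned in the subscription."""
    client = PolicyClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
    for assignment in client.policy_assignments.list():
        if assignment.display_name == EXPECTED_POLICY:
            print(f"Found assignment: {assignment.name} at scope {assignment.scope}")
            return True
    print("Policy not assigned; alert the CoE")
    return False

if __name__ == "__main__":
    verify_policy_assigned()
```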
Confidence: Verified (MCP microsoft-learn)
Public Sector (Norway)
Specific considerations for the Norwegian public sector
The public sector in Norway faces stricter requirements for transparency, equal treatment, and public trust than the private sector. Responsible AI policies must reflect this.
| Principle | Public-sector adaptation | Policy requirement |
|---|---|---|
| Transparency | Right of access to public decisions (Offentlighetsloven) | AI decisions must be explainable to the public. Document model selection, training data sources, and decision logic. |
| Fairness | The principle of equal treatment | Mandatory bias testing before production. Regular audits for differential treatment based on gender, age, geography, etc. |
| Accountability | Administrative-law requirements to justify decisions | Humans must have the final say in consequential decisions. AI is decision support, not the decision maker. |
| Privacy | Personopplysningsloven (GDPR + national rules) | Data minimization, purpose limitation, storage limitation. Special protection for sensitive personal data. |
| Inclusiveness | Universal design (Diskriminerings- og tilgjengelighetsloven) | AI solutions must be accessible to everyone, including people with disabilities. |
| Security | Sikkerhetsloven, the NIS2 directive | Specific information security requirements for critical infrastructure and public services. |
Policy template for the public sector
Minimum requirements for AI systems in Norwegian public administration:
- Before implementation:
  - Data protection impact assessment (DPIA) if high risk
  - Ethical assessment (Responsible AI Impact Assessment)
  - Legal assessment (compliance with forvaltningsloven and personopplysningsloven)
  - Universal design check
- During implementation:
  - Testing for bias against different population groups
  - Security testing (penetration testing, red teaming)
  - Documentation of model selection and training data
  - Establishment of human oversight procedures
- After implementation:
  - Continuous monitoring of bias and performance
  - Complaint mechanism for AI-based decisions
  - Regular audits (at least annually)
  - Transparency reporting to the public
- Decommissioning:
  - Secure deletion of personal data
  - Documentation of the system lifecycle for archiving
  - Evaluation of lessons learned
Collaboration with Digdir and DFØ
Relevant national frameworks:
- Digdir's guidance on artificial intelligence in the public sector
- DFØ's recommendations for procuring AI solutions
- Guidance on AI security from NSM (the Norwegian National Security Authority)
Recommendation: Policy development should be coordinated with national authorities to ensure alignment with emerging national standards.
Confidence: Baseline (model knowledge of Norwegian law) + Verified (Microsoft frameworks)
Cost and Licensing
Cost components for a policy program
| Component | Estimate (annual) | Notes |
|---|---|---|
| Governance Team (CoE) | 3-8 FTE (NOK 2.5M - 6M) | Depends on organization size. Includes policy experts, legal, security, and engineering representatives. |
| Training Program | NOK 500K - 2M | Mandatory training for developers, testing/certification, ongoing workshops. |
| Tools & Platform | NOK 300K - 1.5M | Microsoft Purview, Azure Policy, monitoring tools, third-party audit tools. |
| External Audits | NOK 500K - 2M | Annual compliance audits, specialized red teaming, ethical reviews. |
| Documentation & Compliance | NOK 200K - 800K | Technical writing, legal documentation, transparency reporting. |
| Total (medium org) | NOK 4M - 12M | Typical range for an organization with 500-2,000 employees. |
ROI considerations:
- Risk mitigation: A single regulatory penalty can cost NOK 10M+ (GDPR fines of up to 4% of global revenue)
- Reputation protection: Reputational damage from an AI incident can affect customer trust and revenue
- Operational efficiency: Automated governance reduces manual review overhead over time
- Competitive advantage: A strong responsible AI posture can be a differentiator in regulated markets
Licensing for Microsoft Governance Tools
| Tool | Licensing model | Relevance for policy |
|---|---|---|
| Azure Policy | Included in Azure subscription | Policy enforcement, compliance monitoring |
| Microsoft Purview | Per GB data + per user | Data governance, compliance manager, sensitivity labeling |
| Azure AI Foundry | Pay-as-you-go (compute, storage, API calls) | Evaluation tools, content safety, model registry |
| Copilot Studio | Per user/month or per session | Copilot governance features |
| Azure Monitor | Per GB ingested + retention | Logging, alerting for policy violations |
| Microsoft Defender for Cloud | Per resource | Security posture, AI threat protection |
Optimization:
- Start with built-in Azure Policy and the free tier of Purview
- Scale up Purview as data governance maturity increases
- Use reservations for Azure compute for AI workloads (savings of up to 72%)
- Consolidate logging in Azure Monitor for cost efficiency
Confidence: Baseline (typical costs) + Verified (licensing models)
For the Architect (Cosmo)
When to recommend policy development?
Strong signals:
- Customer mentions "compliance", "regulatory requirements", "audit", or "governance"
- Multiple AI initiatives across business units (shadow AI risk)
- Regulated industry (finance, healthcare, public sector)
- Customer-facing AI with consequential decisions
- Existing data governance program being extended to AI
Weak signals:
- A single low-risk internal AI pilot
- Organization has fewer than 50 employees (can start with a lightweight policy)
- Proof-of-concept phase (too early for a comprehensive policy)
Conversation Flow
1. Understand context:
   - "Do you have an existing data governance or compliance program?"
   - "Which regulatory requirements are you subject to?"
   - "How many AI initiatives are you planning over the next 12 months?"
2. Assess maturity:
   - Level 1 (Ad hoc): No formal policy, developers make their own rules → Recommend a starter policy based on the Microsoft Standard
   - Level 2 (Repeatable): Some per-project policies, inconsistent enforcement → Recommend a centralized CoE
   - Level 3 (Defined): Formal policy exists but is not integrated into workflows → Recommend checkpoint-based governance
   - Level 4 (Managed): Policy enforced and measured regularly → Recommend continuous improvement + automation
   - Level 5 (Optimizing): Automated enforcement, predictive risk management → Recommend an industry leadership role
3. Recommend an approach:
   - Quick start (1-3 months): Adopt the Microsoft Responsible AI Standard as a baseline, create a lightweight policy document, establish a CoE (2-3 people)
   - Full program (6-12 months): Comprehensive policy development, training program, tool integration, pilot + scale
   - Ongoing (annual): Policy review cycle, external audits, continuous improvement
Red Flags
- Customer wants to "skip governance to move fast" → Regulatory penalty risk; explain the business case for policy
- "Our developers will handle it" → Shadow AI risk; explain the need for centralized standards
- "We'll do policy after deployment" → Rearchitecture risk; explain the cost of retrofitting compliance
- "We don't need external audits" → Bias blindness risk; explain the value of independent review
Integration Points
Connect to other skills:
- Security Assessment: Policy enforcement is a prerequisite for security controls
- Cost Estimation: Include governance costs in the TCO
- ADR: Policy decisions should be documented as ADRs
- Migration Planning: Policy compliance can affect the migration strategy
Elevate to a specialist when:
- The customer needs a legal opinion on regulatory compliance (legal counsel)
- A deep dive into a specific compliance framework is required (ISO/IEC 42001 auditor)
- Advanced governance patterns need technical implementation (Azure Policy specialist)
Output Format for Policy Recommendations
## Responsible AI Policy Recommendation
**Organization Profile:**
- Size: [employees]
- Industry: [regulated/non-regulated]
- AI Maturity: [Level 1-5]
- Current Governance: [none/basic/advanced]
**Recommended Approach:**
[Quick start / Full program / Custom]
**Key Policy Areas:**
1. [Policy area 1] - Priority: [High/Medium/Low]
2. [Policy area 2] - Priority: [High/Medium/Low]
...
**Implementation Roadmap:**
- Month 1-3: [activities]
- Month 4-6: [activities]
- Month 7-12: [activities]
**Estimated Investment:**
- Team: [FTE]
- Tools: [NOK]
- External: [NOK]
- Total Year 1: [NOK]
**Microsoft Tools Recommended:**
- [Tool 1]: [purpose]
- [Tool 2]: [purpose]
**Success Metrics:**
- [Metric 1]: [target]
- [Metric 2]: [target]
**Next Steps:**
1. [Actionable step 1]
2. [Actionable step 2]
Confidence signaling:
- Policy frameworks from Microsoft/NIST: "Verified"
- Implementation patterns: "Verified"
- Cost estimates: "Baseline (typical ranges)"
- Norwegian public sector adaptations: "Baseline (general compliance knowledge) + Verified (Microsoft frameworks)"
Sources and Verification
Verified (MCP microsoft-learn 2026-02):
- Establishing responsible AI policies for AI agents across organizations
- Govern AI
- Microsoft Responsible AI Standard
- Artificial Intelligence overview - Microsoft Compliance
- Microsoft Enterprise AI Services Code of Conduct
- Governance and security for AI agents across the organization
- Create your AI strategy - Responsible AI
- Responsible AI in Azure workloads
- Govern Azure platform services (PaaS) for AI
Baseline (model knowledge):
- NIST AI Risk Management Framework (AI RMF)
- ISO/IEC 42001 AI Management System
- EU AI Act compliance framework
- Norwegian public sector regulations (Offentlighetsloven, Personopplysningsloven, Forvaltningsloven)
MCP Calls: 4 (microsoft_docs_search x3, microsoft_docs_fetch x2) Unique Sources: 9 Microsoft Learn URLs Research Date: 2026-02-04