# Norwegian Regulatory Context for AI Security

Reference material for compliance-aware scanning and CAISS presentations. Maps Norwegian regulatory bodies, frameworks, and guidance to plugin capabilities.

**Last verified:** 2026-04-10

---

## Datatilsynet (Norwegian Data Protection Authority)

### AI Regulatory Sandbox

Datatilsynet has operated a regulatory sandbox for AI since 2020, now in its fifth round (2025). Focus: GDPR compliance in AI systems, including generative AI projects.

**Relevance to plugin:**

- Sandbox projects produce public reports with privacy-by-design requirements
- Plugin's posture scanner evaluates credential protection and secrets management — directly relevant to GDPR data protection obligations
- Plugin's audit trail capability (v6.0) provides the record-keeping evidence sandbox evaluations require

**Participation criteria:** AI-based project, specific privacy question, Norwegian-based organization, societal purpose beyond the developer.

**Source:** https://www.datatilsynet.no/en/regulations-and-tools/sandbox-for-artificial-intelligence/
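The audit-trail capability mentioned above stores events as JSONL. A minimal sketch of why that format suits record-keeping evidence (append-only, replayable with stdlib tools alone); the field names (`ts`, `event`, `detail`), event types, and the `audit-trail.jsonl` path are hypothetical, not the plugin's actual schema:

```python
import json
import time
from pathlib import Path

def append_audit_event(log_path: Path, event_type: str, detail: dict) -> None:
    """Append one audit event as a single JSON line (append-only evidence)."""
    record = {"ts": time.time(), "event": event_type, "detail": detail}
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def read_audit_events(log_path: Path) -> list[dict]:
    """Replay the trail: one parsed record per non-empty line."""
    with log_path.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

log = Path("audit-trail.jsonl")  # hypothetical location
log.unlink(missing_ok=True)      # start fresh for this demo
append_audit_event(log, "scan.completed", {"findings": 3, "scanner": "posture"})
append_audit_event(log, "hook.blocked", {"rule": "secrets-protection"})
events = read_audit_events(log)
```

Because each line is an independent record, the trail can be tailed, grepped, and handed to an evaluator without a parser for the whole file.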

### GDPR + AI Act Intersection

The AI Act supplements GDPR — both apply simultaneously to AI systems processing personal data. Datatilsynet is designated as market surveillance authority for certain AI uses (e.g., law enforcement).

---

## NSM (Nasjonal Sikkerhetsmyndighet)

### Grunnprinsipper for IKT-sikkerhet (ICT Security Principles)

NSM's ICT security principles (v2.1) provide a comprehensive framework for securing information systems, applicable to all public and private organizations. Four main principle areas:

1. Identify and map
2. Protect and maintain
3. Detect
4. Respond and recover

**Relevance to plugin:**

- **Identify and map:** Plugin's posture scanner identifies AI-specific security gaps; AI-BOM generator maps AI components (models, MCP servers, plugins, knowledge bases)
- **Protect and maintain:** 8 runtime hooks provide automated protection; policy-as-code enables distributable security configuration
- **Detect:** Prompt injection scanning, trifecta detection, behavioral drift monitoring, supply chain checks — all contribute to NSM's detect principle
- **Respond and recover:** Clean command provides remediation; baseline diff tracks security drift over time

**Source:** https://nsm.no/regelverk-og-hjelp/rad-og-anbefalinger/grunnprinsipper-for-ikt-sikkerhet/
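The baseline-diff idea behind the "Respond and recover" bullet can be sketched as a set comparison over posture categories: a stored baseline against a new scan, reporting what appeared, disappeared, or changed status. This is an illustrative sketch, not the plugin's implementation; the category names and pass/fail statuses are invented:

```python
def diff_posture(baseline: dict[str, str], current: dict[str, str]) -> dict[str, list]:
    """Compare two posture snapshots and report security drift."""
    added = sorted(set(current) - set(baseline))       # new categories
    removed = sorted(set(baseline) - set(current))     # dropped categories
    changed = sorted(k for k in set(baseline) & set(current)
                     if baseline[k] != current[k])     # status flipped
    return {"added": added, "removed": removed, "changed": changed}

# Invented example data: one category regressed, one appeared.
baseline = {"secrets-management": "pass", "human-review": "fail"}
current = {"secrets-management": "fail", "human-review": "fail",
           "supply-chain": "pass"}
drift = diff_posture(baseline, current)
```

An empty result on all three keys would mean no drift since the baseline was recorded.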

### AI-Specific Guidance

NSM has not yet published dedicated AI security guidelines (as of April 2026). The ICT security principles are technology-neutral and apply to AI systems through their general security requirements. NSM's annual threat assessment (Risiko) covers emerging technology threats, including AI.

---

## Digdir (Digitaliseringsdirektoratet)

### AI Guidance for Public Sector

Digdir provides guidance on the responsible development and use of AI in the public sector:

- Principles: transparency, explainability, accountability, human oversight, privacy, equal treatment
- Aligned with EU AI Act requirements
- Government target: 80% of public entities adopt AI by 2026

**Relevance to plugin:**

- **Transparency:** Posture reports, scan results, and AI-BOM provide transparency tooling
- **Human oversight:** Human Review Requirements category (posture scanner ID 7) directly measures human oversight controls; Rule of Two enforces human-in-the-loop for dangerous patterns
- **Accountability:** Audit trail provides event-level accountability; SARIF output enables CI/CD integration for automated compliance checking

**Source:** https://www.digdir.no/kunstig-intelligens/veiledning-ki-i-offentlig-sektor/4132
**Source:** https://www.digdir.no/kunstig-intelligens/rad-ansvarlig-utvikling-og-bruk-av-kunstig-intelligens-i-offentlig-sektor/4272

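The SARIF output mentioned under accountability can be illustrated with a minimal SARIF 2.1.0 document, the shape CI/CD code-scanning tools generally ingest. A sketch under stated assumptions: the tool name, rule ID, file, and message below are invented, and the plugin's actual SARIF fields may differ:

```python
import json

def to_sarif(tool_name: str, findings: list[dict]) -> dict:
    """Wrap scan findings in a minimal SARIF 2.1.0 envelope."""
    return {
        "version": "2.1.0",
        "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
        "runs": [{
            "tool": {"driver": {"name": tool_name}},
            "results": [{
                "ruleId": f["rule_id"],
                "level": f["level"],  # "error", "warning", or "note"
                "message": {"text": f["message"]},
                "locations": [{
                    "physicalLocation": {
                        "artifactLocation": {"uri": f["file"]},
                        "region": {"startLine": f["line"]},
                    }
                }],
            } for f in findings],
        }],
    }

# Hypothetical finding feeding a compliance gate in CI.
report = to_sarif("ai-security-plugin", [
    {"rule_id": "PST-07", "level": "warning", "file": "config/agent.yaml",
     "line": 12, "message": "No human review gate configured"},
])
sarif_text = json.dumps(report, indent=2)
```

A pipeline can then fail the build on any `error`-level result, turning the compliance check into an automated gate.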
### KI Norge (AI Norway)

An expanded expert environment within Digdir that serves as a driving force, advisory service, and connector between AI players in the public sector, industry, research, and academia. It will host a national regulatory sandbox for controlled testing under the AI Act.

---

## Norwegian AI Act Implementation

### Timeline

- **June 2025:** The Ministry of Digitalisation published a draft Artificial Intelligence Act
- **September 2025:** Public consultation deadline
- **August 2026 (expected):** The Norwegian AI Act enters into force

### Supervisory Structure

- **Nkom (Nasjonal kommunikasjonsmyndighet):** National coordinating market surveillance authority and EU contact point
- **Sectoral authorities:** Domain-specific market surveillance for high-risk AI
- **Datatilsynet:** Market surveillance for certain uses (law enforcement)
- **Digdir/KI Norge:** Guidance, capacity building, regulatory sandbox

**Source:** https://regulations.ai/regulations/norway-ai-act-2026
**Source:** https://www.regjeringen.no/en/whats-new/gjor-norge-klar-for-trygg-og-innovativ-ki-bruk/id3093081/

---

## Plugin Capability Mapping to Norwegian Requirements

| Norwegian Requirement | Regulatory Source | Plugin Capability | Coverage |
|----------------------|-------------------|-------------------|----------|
| Risk management for AI systems | AI Act Art. 9, NSM grunnprinsipper | Posture scanner (13+3 categories), threat-model command | Partial |
| Data protection in AI | GDPR, Datatilsynet sandbox | Secrets protection hooks, path guarding, credential scanning | Full |
| Transparency and explainability | Digdir principles, AI Act Art. 13 | Scan reports, posture reports, AI-BOM | Partial |
| Human oversight | Digdir principles, AI Act Art. 14 | Human Review Requirements (PST-07), Rule of Two, deny-first config | Full |
| Cybersecurity | AI Act Art. 15, NSM grunnprinsipper | All 8 hooks, 10 scanners, prompt injection hardening | Full |
| Record-keeping | AI Act Art. 12, NSM detect principle | Audit trail (JSONL), session logging, baseline diffs | Full (v6.0) |
| Quality management | AI Act Art. 17 | Test suite (1147+ tests), posture scanner, scan-orchestrator | Partial |
| Supply chain integrity | AI Act Art. 15, NSM identify principle | Supply chain hooks, dep audit scanner, AI-BOM | Full |
| Incident response | NSM respond principle | Clean command, baseline diff, watch/cron monitoring | Partial |

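The AI-BOM that backs the supply-chain and transparency rows can be sketched in CycloneDX terms; CycloneDX 1.5 defines `machine-learning-model` and `data` component types suited to AI inventories. The component names and versions below are invented, and the plugin's actual BOM format may differ:

```python
def make_ai_bom(components: list[dict]) -> dict:
    """Build a minimal CycloneDX-style BOM from an AI component inventory."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": [
            {"type": c["type"], "name": c["name"],
             "version": c.get("version", "unknown")}  # flag unpinned components
            for c in components
        ],
    }

# Hypothetical inventory: a model, an MCP server, a knowledge base.
bom = make_ai_bom([
    {"type": "machine-learning-model", "name": "local-llm", "version": "7b-q4"},
    {"type": "application", "name": "mcp-filesystem-server", "version": "0.3.1"},
    {"type": "data", "name": "internal-kb-index"},
])
```

Components that fall back to `"unknown"` versions are exactly the unpinned dependencies a supply-chain audit would flag first.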
---

## Verification Log

| Claim | Source | URL |
|-------|--------|-----|
| Datatilsynet sandbox since 2020, fifth round 2025 | Datatilsynet website | https://www.datatilsynet.no/en/regulations-and-tools/sandbox-for-artificial-intelligence/ |
| NSM Grunnprinsipper v2.1 | NSM website | https://nsm.no/regelverk-og-hjelp/rad-og-anbefalinger/grunnprinsipper-for-ikt-sikkerhet/ |
| Digdir AI guidance for public sector | Digdir website | https://www.digdir.no/kunstig-intelligens/veiledning-ki-i-offentlig-sektor/4132 |
| 80% public sector AI adoption target by 2026 | Shifter (citing government plan) | https://www.shifter.no/nyheter/regjeringen-80-prosent-av-offentlige-virksomheter-skal-bruke-ai/443164 |
| Norwegian AI Act draft June 2025, expected August 2026 | Regulations.AI | https://regulations.ai/regulations/norway-ai-act-2026 |
| Nkom as coordinating authority | Government press release | https://www.regjeringen.no/en/whats-new/gjor-norge-klar-for-trygg-og-innovativ-ki-bruk/id3093081/ |
| NSM has no dedicated AI security guidelines (April 2026) | NSM website review — no AI-specific publication found | https://nsm.no/ |