docs(architect): KB follow-up — batch 3 content updates

Additional factual updates from batch 3 research:

- responsible-ai-training-awareness.md: module renamed
  "Azure AI Studio" → "Microsoft Foundry" (3 occurrences)
- transparency-documentation-standards.md: ISO/IEC 42001 scope expanded
  to include Copilot Studio, Microsoft Foundry, Security Copilot,
  GitHub Copilot, Dragon Copilot
- ai-act-compliance-guide.md: same ISO 42001 scope expansion
- human-in-the-loop-oversight.md: AI approval stages in Copilot Studio
  (GPT-o3 as AI approver, new Human in the loop connector)
- continuous-improvement-feedback-loops.md: MLflow 3 Feedback vs
  Expectation assessment types, Genie Code trace analysis

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Kjell Tore Guttormsen 2026-04-09 22:43:12 +02:00
commit 2dc825b3cb
5 changed files with 12 additions and 7 deletions


@@ -70,7 +70,7 @@ Providers of high-risk systems (those who develop/market them) must fulfill **
| **Transparency** | Users must be able to understand the system's capabilities and limitations | Transparency notes, model cards |
| **Human Oversight** | Mechanisms for human-in-the-loop in critical decisions | Azure Logic Apps, Power Automate approval workflows |
| **Accuracy, Robustness, Security** | High accuracy, resilience against failures, cybersecurity | Azure AI Content Safety, adversarial testing (PyRIT) |
-| **Quality Management System** | ISO-style quality management across the entire development lifecycle | ISO 42001:2023 (Microsoft certified for M365 Copilot) |
+| **Quality Management System** | ISO-style quality management across the entire development lifecycle | ISO 42001:2023 (Microsoft certified for M365 Copilot, Copilot Studio, Microsoft Foundry, Security Copilot, GitHub Copilot, Dragon Copilot) *(Verified MCP 2026-04)* |
| **Conformity Assessment** | Pre-deployment assessment (internal or external) | Azure AI Foundry evaluation metrics, Compliance Manager |
| **CE marking** | Registration in the EU database before going to market | (Does not apply to SaaS services from Microsoft) |
| **Post-market Monitoring** | Continuous monitoring of performance in production | Microsoft Defender for Cloud AI threat protection |


@@ -30,8 +30,12 @@ Microsoft implements feedback loops throughout the entire AI lifecycle, from dev
**Tracing and logging:**
- **MLflow Traces** / **MLflow 3 GenAI**: Captures detailed execution traces with inputs, outputs, and all intermediate steps for every interaction. *(Verified MCP 2026-04)*
-- MLflow 3 GenAI introduces a new **Feedback/Expectation data model** for structured storage of human feedback
+- MLflow 3 GenAI introduces a new **Assessment data model** with two types:
+  - **Feedback** assessments: evaluate the actual output (ratings, comments: "Was the agent's answer good?")
+  - **Expectation** assessments: define the desired/correct output (ground truth: "What should have been produced"); used to build evaluation data
- Three collection sources: developer (dev), domain expert (via the Review App), end user (production)
- `mlflow.log_feedback()` API for attaching user ratings and comments to specific traces
+- New capability: **Genie Code** for natural-language analysis of trace data
- Integrated tracing for Databricks agentic applications
- **Azure Monitor & Application Insights**: Logs operational metrics, latency, error rates
- **Model Data Collector**: Automatic collection of production data for ML models
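The Feedback/Expectation split described above can be sketched as a tiny data model. This is an illustrative Python sketch only, not MLflow's actual schema or API; the class and field names here are assumptions chosen to mirror the concepts.

```python
from dataclasses import dataclass
from typing import Any, Optional

# Illustrative data model (not MLflow's real schema): both assessment
# types attach to a specific execution trace via trace_id.
@dataclass
class Assessment:
    trace_id: str   # which trace this assessment belongs to
    name: str       # e.g. "answer_quality" or "expected_answer"
    value: Any

@dataclass
class Feedback(Assessment):
    # Evaluates the actual output: a rating plus an optional comment.
    rationale: Optional[str] = None

@dataclass
class Expectation(Assessment):
    # Defines the desired output (ground truth) for the same trace;
    # these pairs can be collected into an evaluation dataset.
    pass

def build_eval_rows(expectations: list[Expectation]) -> list[dict]:
    """Turn Expectation assessments into evaluation-dataset rows."""
    return [{"trace_id": e.trace_id, "expected": e.value} for e in expectations]

fb = Feedback(trace_id="tr-1", name="answer_quality", value=4,
              rationale="Mostly correct, slightly verbose")
ex = Expectation(trace_id="tr-1", name="expected_answer",
                 value="Reset the password via the self-service portal.")
rows = build_eval_rows([ex])
```

The key design point the sketch illustrates: Feedback grades what the agent actually produced, while Expectation records what it should have produced, which is why only Expectation assessments feed evaluation data.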


@@ -31,7 +31,7 @@ HITL implementations in the Microsoft stack consist of several interacting compo
| Platform | Mechanism | Use case |
|-----------|-----------|-------------|
-| **Power Automate** | Multistage Approvals (GA) | Structured approval flows with both AI and manual approval, escalation based on confidence levels |
+| **Power Automate / Copilot Studio** | Multistage and AI approvals (Preview) | Structured approval flows with an AI stage (GPT-o3 issues Approve/Reject with a rationale) and a manual stage; new 'Human in the loop' connector; conditions between stages for dynamic routing *(Verified MCP 2026-04)* |
| **Azure Logic Apps** | Human Approval Connectors | Pauses AI processes for human validation; integrates with Microsoft Teams, Outlook, or custom dashboards |
| **Copilot Studio** | Human Handoff Topic | Hands the conversation over from the agent to a human representative when the AI cannot resolve the task |
| **Microsoft Agent Framework** | HITL Orchestrations | Subworkflows that pause agent chains for human feedback/approval on agent output |
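The confidence-based routing between an AI stage and a manual stage described in the table above can be sketched as follows. This is a hypothetical sketch, not the actual Power Automate / Copilot Studio connector behavior; the threshold, stage names, and the `ai_review` stub are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    verdict: str     # "approve" | "reject" | "escalate"
    stage: str       # which stage produced the verdict
    rationale: str

def ai_review(request: str) -> tuple[str, float, str]:
    """Stand-in for the AI approval stage: returns verdict, confidence, rationale.
    A real AI stage would call a model; this stub is purely illustrative."""
    if "refund" in request:
        return "approve", 0.55, "Refund within policy, but amount unclear"
    return "approve", 0.95, "Routine request"

def multistage_approval(request: str, threshold: float = 0.8) -> Decision:
    verdict, confidence, rationale = ai_review(request)
    if confidence >= threshold:
        # High confidence: the AI stage's verdict stands.
        return Decision(verdict, "ai-stage", rationale)
    # Condition between stages: low confidence escalates to the human stage.
    return Decision("escalate", "human-stage",
                    f"AI confidence {confidence:.2f} below threshold {threshold}")

print(multistage_approval("routine access request").stage)  # ai-stage
print(multistage_approval("refund of 900 EUR").stage)       # human-stage
```

The condition between stages is the interesting part: rather than sending everything to a human, only low-confidence AI verdicts are routed onward, which is the escalation pattern the table attributes to multistage approvals.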


@@ -36,7 +36,7 @@ Microsoft defines a three-tier training program for Responsible AI:
**Verified:** Microsoft Learn offers these as structured learning paths:
- [Embrace Responsible AI Principles and Practices](https://learn.microsoft.com/training/modules/embrace-responsible-ai-principles-practices/)
- [Apply Responsible AI Principles in Learning Environments](https://learn.microsoft.com/training/modules/apply-responsible-ai-principles/)
-- [Implement a Responsible Generative AI Solution in Azure AI Foundry](https://learn.microsoft.com/training/modules/responsible-ai-studio/)
+- [Implement a Responsible Generative AI Solution in Microsoft Foundry](https://learn.microsoft.com/training/modules/responsible-ai-studio/) *(Verified MCP 2026-04: module name changed from 'Azure AI Studio' to 'Microsoft Foundry')*
### 2. Role-Specific Training
@@ -231,7 +231,7 @@ Are you in the public sector?
**Relevant content:**
- [Embrace Responsible AI Principles and Practices](https://learn.microsoft.com/training/modules/embrace-responsible-ai-principles-practices/) (9 units, 1 time)
- [AI Fluency: Explore Responsible AI](https://learn.microsoft.com/training/modules/responsible-ai/) (7 units, beginner)
-- [Implement a Responsible Generative AI Solution in Azure AI Foundry](https://learn.microsoft.com/training/modules/responsible-ai-studio/) (9 units, intermediate)
+- [Implement a Responsible Generative AI Solution in Microsoft Foundry](https://learn.microsoft.com/training/modules/responsible-ai-studio/) *(Verified MCP 2026-04: module name changed from 'Azure AI Studio' to 'Microsoft Foundry')* (9 units, intermediate)
**Best practice:** Require that everyone who is assigned Azure AI resources or an M365 Copilot license completes at least "Embrace Responsible AI Principles" before access is activated.
@@ -528,7 +528,7 @@ Is it custom AI (not just ready-made features)?
7. [Apply Responsible AI Principles in Learning Environments](https://learn.microsoft.com/training/modules/apply-responsible-ai-principles/) — Training module focused on educational contexts, applicable to organizational learning.
-8. [Implement a Responsible Generative AI Solution in Azure AI Foundry](https://learn.microsoft.com/training/modules/responsible-ai-studio/) — Technical module on RAI implementation in Azure AI Foundry (intermediate level).
+8. [Implement a Responsible Generative AI Solution in Microsoft Foundry](https://learn.microsoft.com/training/modules/responsible-ai-studio/) *(Verified MCP 2026-04: module name changed from 'Azure AI Studio' to 'Microsoft Foundry')* — Technical module on RAI implementation in Azure AI Foundry (intermediate level).
9. [Scale AI in Your Organization](https://learn.microsoft.com/training/modules/scale-ai/) — Module covering organizational roles, responsibilities, and empowerment through AI.


@@ -726,8 +726,9 @@ Return on investment: transparency is cheaper than cleanup. Should we prioritize
https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf
(Status: Baseline — Impact Assessment framework, June 2022)
-8. **ISO/IEC 42001:2023 overview**
+8. **ISO/IEC 42001:2023 overview** *(Verified MCP 2026-04)*
https://learn.microsoft.com/en-us/compliance/regulatory/offering-iso-42001
+   Microsoft certification now covers: M365 Copilot, Copilot Studio, Microsoft Foundry, Security Copilot, GitHub Copilot, and Dragon Copilot (expanded from M365 Copilot only).
(Status: Verified 2026-02 — AI management system standard)
9. **Govern AI (Cloud Adoption Framework)**