From 2dc825b3cb24d0f22a2965bef63dbf63b05082e3 Mon Sep 17 00:00:00 2001 From: Kjell Tore Guttormsen Date: Thu, 9 Apr 2026 22:43:12 +0200 Subject: [PATCH] =?UTF-8?q?docs(architect):=20KB=20follow-up=20=E2=80=94?= =?UTF-8?q?=20batch=203=20content=20updates?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Additional factual updates from batch 3 research: - responsible-ai-training-awareness.md: module renamed "Azure AI Studio" → "Microsoft Foundry" (3 occurrences) - transparency-documentation-standards.md: ISO/IEC 42001 scope expanded to include Copilot Studio, Microsoft Foundry, Security Copilot, GitHub Copilot, Dragon Copilot - ai-act-compliance-guide.md: same ISO 42001 scope expansion - human-in-the-loop-oversight.md: AI approval stages in Copilot Studio (GPT-o3 as AI approver, new Human in the loop connector) - continuous-improvement-feedback-loops.md: MLflow 3 Feedback vs Expectation assessment types, Genie Code trace analysis Co-Authored-By: Claude Sonnet 4.6 --- .../references/responsible-ai/ai-act-compliance-guide.md | 2 +- .../responsible-ai/continuous-improvement-feedback-loops.md | 6 +++++- .../responsible-ai/human-in-the-loop-oversight.md | 2 +- .../responsible-ai/responsible-ai-training-awareness.md | 6 +++--- .../responsible-ai/transparency-documentation-standards.md | 3 ++- 5 files changed, 12 insertions(+), 7 deletions(-) diff --git a/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/ai-act-compliance-guide.md b/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/ai-act-compliance-guide.md index 9e7c355..824c51b 100644 --- a/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/ai-act-compliance-guide.md +++ b/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/ai-act-compliance-guide.md @@ -70,7 +70,7 @@ Providers av høyrisiko-systemer (de som utvikler/markedsfører) må oppfylle ** | **Transparency** | Brukere skal forstå 
systemets kapabiliteter og begrensninger | Transparency notes, model cards | | **Human Oversight** | Mekanismer for human-in-the-loop i kritiske beslutninger | Azure Logic Apps, Power Automate approval workflows | | **Accuracy, Robustness, Security** | Høy presisjon, resiliens mot feil, cybersecurity | Azure AI Content Safety, adversarial testing (PyRIT) | -| **Quality Management System** | ISO-lignende kvalitetsstyring for hele utviklingsløpet | ISO 42001:2023 (Microsoft sertifisert for M365 Copilot) | +| **Quality Management System** | ISO-lignende kvalitetsstyring for hele utviklingsløpet | ISO 42001:2023 (Microsoft sertifisert for M365 Copilot, Copilot Studio, Microsoft Foundry, Security Copilot, GitHub Copilot, Dragon Copilot) *(Verified MCP 2026-04)* | | **Conformity Assessment** | Pre-deployment vurdering (intern eller ekstern) | Azure AI Foundry evaluation metrics, Compliance Manager | | **CE-merking** | Registrering i EU database før markedsføring | (Gjelder ikke SaaS-tjenester fra Microsoft) | | **Post-market Monitoring** | Kontinuerlig overvåking av performance i produksjon | Microsoft Defender for Cloud AI threat protection | diff --git a/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/continuous-improvement-feedback-loops.md b/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/continuous-improvement-feedback-loops.md index 42ebed1..b444666 100644 --- a/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/continuous-improvement-feedback-loops.md +++ b/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/continuous-improvement-feedback-loops.md @@ -30,8 +30,12 @@ Microsoft implementerer feedback loops gjennom hele AI-livssyklusen – fra utvi **Tracing og logging:** - **MLflow Traces** / **MLflow 3 GenAI**: Fanger detaljerte execution traces med inputs, outputs og alle mellomsteg for hver interaksjon. 
*(Verified MCP 2026-04)* - - MLflow 3 GenAI introduserer ny **Feedback/Expectation-datamodell** for strukturert lagring av human feedback + - MLflow 3 GenAI introduserer ny **Assessment-datamodell** med to typer: + - **Feedback** assessments: evaluerer faktisk output (ratings, kommentarer — "Var agentens svar bra?") + - **Expectation** assessments: definerer ønsket/korrekt output (ground truth — "Hva burde ha blitt produsert"); brukes til å bygge evalueringsdata + - Tre innsamlingskilder: utvikler (dev), domeneekspert (via Review App), sluttbruker (produksjon) - `mlflow.log_feedback()` API for å knytte bruker-rating og kommentarer til spesifikke traces + - Ny kapabilitet: **Genie Code** for naturspråk-analyse av trace-data - Integrert tracing for Databricks agentic applikasjoner - **Azure Monitor & Application Insights**: Logger operational metrics, latency, error rates - **Model Data Collector**: Automatisk innsamling av production data for ML-modeller diff --git a/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/human-in-the-loop-oversight.md b/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/human-in-the-loop-oversight.md index 6bd8574..e900873 100644 --- a/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/human-in-the-loop-oversight.md +++ b/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/human-in-the-loop-oversight.md @@ -31,7 +31,7 @@ HITL-implementasjoner i Microsoft-stakken består av flere samvirkende komponent | Plattform | Mekanisme | Bruksområde | |-----------|-----------|-------------| -| **Power Automate** | Multistage Approvals (GA) | Strukturerte godkjenningsflyter med både AI- og manuell-godkjenning, eskalering basert på konfidensgrader | +| **Power Automate / Copilot Studio** | Multistage og AI-approvals (Preview) | Strukturerte godkjenningsflyter med AI-stage (GPT-o3 gjør Approve/Reject med begrunnelse) og manuell-stage; ny 'Human in the 
loop'-kobling; conditions mellom stages for dynamisk routing *(Verified MCP 2026-04)* | | **Azure Logic Apps** | Human Approval Connectors | Pauser AI-prosesser for menneskelig validering, integreres med Microsoft Teams, Outlook, eller egne dashboards | | **Copilot Studio** | Human Handoff Topic | Overfører samtale fra agent til menneskelig representant når AI ikke kan løse oppgaven | | **Microsoft Agent Framework** | HITL Orchestrations | Subworkflows som pauseer agent-kjeder for menneskelig feedback/approval på agentoutput | diff --git a/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/responsible-ai-training-awareness.md b/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/responsible-ai-training-awareness.md index 06c3cb4..f8cba3a 100644 --- a/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/responsible-ai-training-awareness.md +++ b/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/responsible-ai-training-awareness.md @@ -36,7 +36,7 @@ Microsoft definerer et trelagsopplæringsopplegg for Responsible AI: **Verified:** Microsoft Learn tilbyr disse som strukturerte learning paths: - [Embrace Responsible AI Principles and Practices](https://learn.microsoft.com/training/modules/embrace-responsible-ai-principles-practices/) - [Apply Responsible AI Principles in Learning Environments](https://learn.microsoft.com/training/modules/apply-responsible-ai-principles/) -- [Implement a Responsible Generative AI Solution in Azure AI Foundry](https://learn.microsoft.com/training/modules/responsible-ai-studio/) +- [Implement a Responsible Generative AI Solution in Microsoft Foundry](https://learn.microsoft.com/training/modules/responsible-ai-studio/) *(Verified MCP 2026-04 — modulnavn endret fra 'Azure AI Studio' til 'Microsoft Foundry')* ### 2. Role-Specific Training @@ -231,7 +231,7 @@ Er du offentlig sektor? 
**Relevant innhold:** - [Embrace Responsible AI Principles and Practices](https://learn.microsoft.com/training/modules/embrace-responsible-ai-principles-practices/) (9 units, 1 time) - [AI Fluency: Explore Responsible AI](https://learn.microsoft.com/training/modules/responsible-ai/) (7 units, beginner) -- [Implement a Responsible Generative AI Solution in Azure AI Foundry](https://learn.microsoft.com/training/modules/responsible-ai-studio/) (9 units, intermediate) +- [Implement a Responsible Generative AI Solution in Microsoft Foundry](https://learn.microsoft.com/training/modules/responsible-ai-studio/) *(Verified MCP 2026-04 — modulnavn endret fra 'Azure AI Studio' til 'Microsoft Foundry')* (9 units, intermediate) **Best practice:** Krev at alle som får tildelt Azure AI-ressurser eller M365 Copilot-lisens må fullføre minimum "Embrace Responsible AI Principles" før tilgang aktiveres. @@ -528,7 +528,7 @@ Er det custom AI (ikke bare ferdiglagde features)? 7. [Apply Responsible AI Principles in Learning Environments](https://learn.microsoft.com/training/modules/apply-responsible-ai-principles/) — Training module focused on educational contexts, applicable to organizational learning. -8. [Implement a Responsible Generative AI Solution in Azure AI Foundry](https://learn.microsoft.com/training/modules/responsible-ai-studio/) — Technical module on RAI implementation in Azure AI Foundry (intermediate level). +8. [Implement a Responsible Generative AI Solution in Microsoft Foundry](https://learn.microsoft.com/training/modules/responsible-ai-studio/) *(Verified MCP 2026-04 — modulnavn endret fra 'Azure AI Studio' til 'Microsoft Foundry')* — Technical module on RAI implementation in Microsoft Foundry (intermediate level). 9. [Scale AI in Your Organization](https://learn.microsoft.com/training/modules/scale-ai/) — Module covering organizational roles, responsibilities, and empowerment through AI. 
diff --git a/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/transparency-documentation-standards.md b/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/transparency-documentation-standards.md index d4c57bf..489bb31 100644 --- a/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/transparency-documentation-standards.md +++ b/plugins/ms-ai-architect/skills/ms-ai-governance/references/responsible-ai/transparency-documentation-standards.md @@ -726,8 +726,9 @@ Return on investment: Transparency er billigere enn cleanup. Skal vi prioritere https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf (Status: Baseline — Impact Assessment framework, June 2022) -8. **ISO/IEC 42001:2023 overview** +8. **ISO/IEC 42001:2023 overview** *(Verified MCP 2026-04)* https://learn.microsoft.com/en-us/compliance/regulatory/offering-iso-42001 + Microsoft-sertifisering dekker nå: M365 Copilot, Copilot Studio, Microsoft Foundry, Security Copilot, GitHub Copilot og Dragon Copilot (utvidet fra kun M365 Copilot). (Status: Verified 2026-02 — AI management system standard) 9. **Govern AI (Cloud Adoption Framework)**