Governance Frameworks Overview
What is AI Governance?
AI Governance is the set of policies, processes, and controls that ensure AI systems are developed, deployed, and operated responsibly, ethically, and in compliance with applicable laws and standards.
SignalBreak helps organisations meet governance requirements by:
- Tracking AI dependencies across workflows and providers
- Monitoring provider risks through continuous signal detection
- Generating compliance evidence via automated evidence packs
- Mapping to frameworks including ISO 42001, NIST AI RMF, and EU AI Act
- Quantifying business impact to support risk-based decision making
Supported Frameworks
SignalBreak provides direct support for 3 primary frameworks and tracks 13+ additional regulations:
Primary Frameworks (with Evidence Pack integration)
| Framework | Type | Jurisdiction | Status | SignalBreak Support |
|---|---|---|---|---|
| ISO/IEC 42001:2023 | International Standard | Global | Published | Full compliance mapping, evidence pack integration |
| NIST AI Risk Management Framework (AI RMF) | Voluntary Framework | US (global adoption) | Published | Risk assessment alignment, control mapping |
| EU Artificial Intelligence Act | Regulation | European Union | Enacted (Aug 2024) | Risk classification, GPAI compliance tracking |
Additional Tracked Regulations
SignalBreak's governance module also tracks these regulations:
| Regulation | Jurisdiction | Status | Effective Date |
|---|---|---|---|
| California SB 1047 (Safe AI Models) | US/California | Proposed | TBD |
| Colorado AI Act | US/Colorado | Enacted | May 2024 |
| NYC Local Law 144 (Employment AI) | US/New York City | Enacted | July 2023 |
| Illinois BIPA (Biometric Privacy) | US/Illinois | Enacted | October 2008 |
| China AI Algorithm Regulation | China | Enacted | March 2022 |
| China Generative AI Measures | China | Enacted | August 2023 |
| Canada AIDA (Artificial Intelligence & Data Act) | Canada | Proposed | TBD |
| Brazilian AI Framework Bill | Brazil | Proposed | TBD |
| Japanese AI Utilization Guidelines | Japan | Voluntary | August 2019 |
| ISO/IEC 23894:2023 (AI Risk Management) | Global | Published | N/A |
| NIST AI 600-1 (Generative AI Profile) | US | Published | N/A |
Access Full List: Dashboard → Governance → Frameworks
How SignalBreak Supports Governance
1. Continuous Monitoring
Traditional governance relies on point-in-time assessments. SignalBreak provides continuous governance by monitoring AI providers 24/7:
| Traditional Approach | SignalBreak Approach |
|---|---|
| Quarterly AI inventory updates | Real-time workflow tracking |
| Manual vendor risk assessments | Automated provider health monitoring |
| Annual compliance audits | Continuous evidence generation |
| Static risk registers | Dynamic risk scoring (0-100 scale) |
| Spreadsheet-based tracking | API-driven compliance data |
Result: Your governance posture is always current, not 3-6 months out of date.
2. Evidence Pack Generation
SignalBreak automatically generates consulting-grade evidence packs (PDF reports) that:
- Demonstrate compliance with ISO 42001, NIST AI RMF, and EU AI Act
- Provide auditable evidence for internal/external audits
- Quantify business impact of AI risks
- Track compliance maturity over time
- Include 90-day remediation roadmaps
Generated Evidence Includes:
- AI system inventory (ISO 42001 Clause 6.2.2)
- Risk assessment methodology (ISO 42001 Clause 6.1.3)
- Third-party monitoring (ISO 42001 Clause 8.3)
- Impact assessments (ISO 42001 Clause 8.4)
- Continuous monitoring (ISO 42001 Clause 9.1)
See: Evidence Packs Guide
3. Risk-Based Decision Making
All three primary frameworks (ISO 42001, NIST AI RMF, EU AI Act) require risk-based approaches. SignalBreak provides:
Risk Scoring (0-100 scale)
- Weighted calculation based on scenario impacts
- RAG status (Red/Amber/Green) thresholds
- Historical trend tracking
- Projected improvement scores
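To make the scoring mechanics concrete, here is a minimal sketch of a weighted 0-100 risk score with RAG thresholds. The weights, example impacts, and threshold cut-offs are illustrative assumptions, not SignalBreak's actual formula:

```python
# Illustrative: weighted risk score from scenario impacts, with RAG status.
# Weights and thresholds are assumptions for this sketch.

scenario_impacts = [
    # (impact score 0-100, weight reflecting workflow criticality)
    (80, 0.5),   # critical workflow outage scenario
    (40, 0.3),   # medium-criticality degradation
    (20, 0.2),   # low-criticality rate limiting
]

# Weighted sum stays on the 0-100 scale because the weights sum to 1.0
score = sum(impact * weight for impact, weight in scenario_impacts)

def rag_status(score: float) -> str:
    """Map a 0-100 risk score to Red/Amber/Green (assumed cut-offs)."""
    if score >= 70:
        return "Red"
    if score >= 40:
        return "Amber"
    return "Green"

print(f"Risk score: {score:.0f} -> {rag_status(score)}")
```

Historical trend tracking then reduces to storing this score per reporting period and comparing successive values.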
Provider Concentration Analysis
- Percentage of workflows per provider
- Single-point-of-failure identification
- Concentration risk alerts (>35% threshold)
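The concentration check above can be sketched in a few lines. The workflow-to-provider bindings here are hypothetical examples; only the 35% threshold comes from the text:

```python
from collections import Counter

# Hypothetical workflow-to-provider bindings (illustrative names).
workflow_providers = {
    "invoice-extraction": "OpenAI",
    "support-triage": "OpenAI",
    "contract-review": "Anthropic",
    "search-rerank": "OpenAI",
    "translation": "Google",
}

CONCENTRATION_THRESHOLD = 0.35  # alert when one provider exceeds 35% of workflows

counts = Counter(workflow_providers.values())
total = len(workflow_providers)

for provider, n in counts.items():
    share = n / total
    flag = "ALERT" if share > CONCENTRATION_THRESHOLD else "ok"
    print(f"{provider}: {share:.0%} of workflows ({flag})")
```

A provider flagged here is also a single-point-of-failure candidate: if every binding points at it, one outage takes down every dependent workflow.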
Scenario Impact Assessment
- Business continuity impact quantification
- Estimated downtime (hours/days)
- Cost range estimates (£)
- Likelihood × Impact matrices
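A likelihood × impact matrix can be reduced to an index lookup. The five-band labels below are a common convention and an assumption of this sketch, not SignalBreak's fixed vocabulary:

```python
# Illustrative 5x5 Likelihood x Impact matrix; band labels are assumptions.
LIKELIHOOD = ["Rare", "Unlikely", "Possible", "Likely", "Almost certain"]
IMPACT = ["Negligible", "Minor", "Moderate", "Major", "Severe"]

def risk_rating(likelihood: str, impact: str) -> int:
    """Return a 1-25 rating: (likelihood band) x (impact band)."""
    return (LIKELIHOOD.index(likelihood) + 1) * (IMPACT.index(impact) + 1)

print(risk_rating("Likely", "Major"))  # 4 * 4 = 16
```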
4. Framework-Specific Compliance
Each framework has unique requirements. SignalBreak provides tailored compliance support:
ISO 42001: AI Management System
Focus: Systematic management of AI risks across lifecycle
SignalBreak provides:
- AI system inventory (automated workflow tracking)
- Risk assessment process (transparent, documented methodology)
- Third-party relationship monitoring (provider health & signals)
- Impact assessment (business impact quantification)
- Monitoring & measurement (continuous signal detection)
- Evidence for internal audits (evidence packs)
See: ISO 42001 Guide
NIST AI RMF: Risk Management Framework
Focus: Identify, assess, and manage AI risks
SignalBreak provides:
- Govern: Policy-driven workflow classification (criticality levels)
- Map: AI risk domain classification (MIT taxonomy integration)
- Measure: Quantitative risk scoring + provider metrics
- Manage: Mitigation recommendations + 90-day roadmaps
See: NIST AI RMF Guide
EU AI Act: Regulation
Focus: Risk-based regulatory compliance for AI systems
SignalBreak provides:
- Risk classification (Unacceptable/High/Limited/Minimal)
- High-risk system identification (based on use case + impact)
- General Purpose AI (GPAI) provider compliance tracking
- Conformity assessment support (evidence generation)
- Post-market monitoring (continuous provider monitoring)
See: EU AI Act Guide
Getting Started with Governance
Step 1: Register Your AI Workflows
Before SignalBreak can provide governance support, you need an inventory of AI systems:
- Navigate to Workflows (Dashboard → AI Workflows)
- Add your workflows (start with at least 3-5 critical workflows)
- Configure provider bindings (primary + fallback)
- Set criticality levels (Critical, High, Medium, Low)
- Assign owners (for accountability)
Why this matters: Your workflow inventory is the foundation for all governance frameworks. ISO 42001 requires it (Clause 6.2.2), NIST AI RMF depends on it (Map function), and EU AI Act mandates it (risk classification).
Step 2: Execute Scenarios
SignalBreak calculates risk based on scenario impacts—what would happen if providers fail:
- Navigate to Scenarios (Dashboard → Scenarios)
- Create scenarios modelling potential disruptions
- Example: "OpenAI GPT-4 Outage", "Anthropic Rate Limiting"
- Execute scenarios to calculate workflow impacts
- Review risk score (Dashboard shows overall 0-100 score)
Why this matters: Risk assessment is mandatory in all three frameworks. Without executed scenarios, your risk score remains 0 and you have no compliance evidence.
Step 3: Generate Evidence Pack
Once you have workflows and scenarios, generate your first evidence pack:
- Navigate to Governance (Dashboard → Governance → Evidence Pack)
- Click "Generate Evidence Pack"
- Wait 30-60 seconds (generates consulting-grade PDF)
- Download PDF (typical size: 12-16 pages)
What's included:
- Executive summary
- Risk scorecard (current + trajectory)
- Provider concentration analysis
- Signal analysis (recent provider changes)
- Findings & recommendations (prioritised)
- ISO 42001 compliance mapping
- EU AI Act compliance status
- 90-day improvement roadmap
See: Evidence Packs Guide
Step 4: Review Compliance Gaps
The evidence pack identifies gaps in your governance maturity:
Common gaps for new users:
- Insufficient workflow coverage (need >80% of AI systems tracked)
- No fallback providers configured (increases risk scores)
- Missing workflow owners (reduces accountability)
- No risk treatment plans (required by ISO 42001 Clause 8.2)
Remediation: Follow the 90-day roadmap in your evidence pack to address gaps systematically.
Step 5: Establish Continuous Monitoring
Set up ongoing governance:
- Schedule monthly evidence packs (trend tracking)
- Configure signal alerts (Dashboard → Providers → Health)
- Assign workflow owners (notifications for relevant signals)
- Review risk score quarterly (board-level reporting)
- Update scenarios as business changes (M&A, new workflows, etc.)
Compliance Maturity Levels
SignalBreak assesses your governance maturity across multiple dimensions:
| Maturity Level | Characteristics | Evidence Pack Score | Framework Readiness |
|---|---|---|---|
| 1. Ad-Hoc | No systematic tracking, reactive only | 0-30 | Not audit-ready |
| 2. Developing | Basic inventory, manual processes | 31-50 | Partial evidence |
| 3. Defined | Documented processes, some automation | 51-70 | Most controls in place |
| 4. Managed | Systematic monitoring, metrics-driven | 71-85 | Audit-ready |
| 5. Optimising | Continuous improvement, predictive | 86-100 | Best-in-class |
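The score bands in the table map to levels with a simple threshold lookup; this sketch mirrors the table directly:

```python
# Map an evidence pack score (0-100) to the maturity levels in the table above.
MATURITY_BANDS = [
    (30, "1. Ad-Hoc"),
    (50, "2. Developing"),
    (70, "3. Defined"),
    (85, "4. Managed"),
    (100, "5. Optimising"),
]

def maturity_level(score: int) -> str:
    """Return the maturity level whose band contains the score."""
    for upper, label in MATURITY_BANDS:
        if score <= upper:
            return label
    raise ValueError("score must be between 0 and 100")

print(maturity_level(72))  # falls in the 71-85 band
```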
Target for Compliance:
- ISO 42001 certification: Level 4 (Managed) minimum
- NIST AI RMF adoption: Level 3 (Defined) minimum
- EU AI Act high-risk systems: Level 4 (Managed) minimum
Your Current Level: Check your latest evidence pack's "Decision Readiness Score"
Framework Comparison
Choose the right framework(s) for your organisation:
| Framework | Best For | Mandatory? | Certification Available? | Annual Cost (Estimate) |
|---|---|---|---|---|
| ISO 42001 | Organisations seeking third-party certification, global vendors | No | Yes (accredited bodies) | £15k-50k (cert + annual audits) |
| NIST AI RMF | US organisations, voluntary risk management, government contractors | No (some sectors) | No (self-assessment) | £0 (voluntary) |
| EU AI Act | Organisations deploying AI in EU, high-risk systems | Yes (for covered systems) | Via notified bodies (high-risk only) | Variable (depends on risk level) |
Can I use multiple frameworks?
Yes—and it's recommended. The frameworks are complementary:
- ISO 42001: Management system structure (HOW you govern AI)
- NIST AI RMF: Risk assessment methodology (HOW you assess risks)
- EU AI Act: Regulatory compliance (WHAT you must comply with)
SignalBreak supports all three simultaneously. Your evidence pack includes mappings for each.
Common Questions
Do I need to implement all three frameworks?
No. Start with one based on your needs:
- If you're in the EU with high-risk AI: EU AI Act is mandatory → add ISO 42001 for structure
- If you're seeking certification: ISO 42001 is the only certifiable option
- If you want voluntary best practices: NIST AI RMF is free and widely adopted
Can SignalBreak replace my governance consultant?
No. SignalBreak provides:
- ✅ Automated data collection & monitoring
- ✅ Evidence generation for audits
- ✅ Risk quantification & tracking
- ✅ Compliance gap identification
You still need expert judgment for:
- ❌ Policy development
- ❌ Audit preparation & strategy
- ❌ Legal interpretation
- ❌ Stakeholder engagement
Best Practice: Use SignalBreak to reduce consultant hours by 40-60% (data gathering, evidence generation, monitoring). Use consultants for high-value advisory.
How often should I generate evidence packs?
Minimum:
- Monthly (for continuous governance)
- Quarterly (for board reporting)
- Before audits (ISO 42001, EU AI Act conformity assessments)
- After major changes (new workflows, provider changes, incidents)
Best Practice: Monthly generation with quarterly deep reviews.
Can I share evidence packs with auditors?
Yes. Evidence packs are designed for external sharing:
- Professional formatting (consulting-grade quality)
- Methodology transparency (auditable scoring)
- Source attribution (MIT Risk Repository, AIID, etc.)
- Classification marking (CONFIDENTIAL by default, customisable)
Tip: Generate a pack 1-2 weeks before your audit so data is current.
Do evidence packs expire?
Data freshness matters for governance. Evidence packs include:
- Data as of: Date data was collected
- Report period: Time range covered (typically previous month)
- Next assessment: Recommended date for next pack
Recommendation: Evidence >90 days old should not be used for audit evidence. Generate fresh packs quarterly at minimum.
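The 90-day freshness rule is easy to enforce programmatically; this sketch assumes only the recommendation above:

```python
from datetime import date, timedelta

# Assumed 90-day freshness window, per the recommendation above.
MAX_EVIDENCE_AGE_DAYS = 90

def is_audit_ready(data_as_of: date, today: date) -> bool:
    """Evidence older than 90 days should not be used for audits."""
    return (today - data_as_of) <= timedelta(days=MAX_EVIDENCE_AGE_DAYS)

print(is_audit_ready(date(2025, 1, 10), date(2025, 3, 1)))   # 50 days old
print(is_audit_ready(date(2024, 10, 1), date(2025, 3, 1)))   # 151 days old
```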
Next Steps
- Choose your framework(s) based on your regulatory requirements and business needs
- Read the framework-specific guides: ISO 42001, NIST AI RMF, EU AI Act
- Generate your first evidence pack (see the Evidence Packs Guide)
- Review how your 0-100 risk score is calculated