Governance Frameworks Overview

What is AI Governance?

AI Governance is the set of policies, processes, and controls that ensure AI systems are developed, deployed, and operated responsibly, ethically, and in compliance with applicable laws and standards.

SignalBreak helps organisations meet governance requirements by:

  1. Tracking AI dependencies across workflows and providers
  2. Monitoring provider risks through continuous signal detection
  3. Generating compliance evidence via automated evidence packs
  4. Mapping to frameworks including ISO 42001, NIST AI RMF, and EU AI Act
  5. Quantifying business impact to support risk-based decision making

Supported Frameworks

SignalBreak provides direct support for 3 primary frameworks and tracks 13+ additional regulations:

Primary Frameworks (with Evidence Pack integration)

| Framework | Type | Jurisdiction | Status | SignalBreak Support |
|---|---|---|---|---|
| ISO/IEC 42001:2023 | International Standard | Global | Published | Full compliance mapping, evidence pack integration |
| NIST AI Risk Management Framework (AI RMF) | Voluntary Framework | US (global adoption) | Published | Risk assessment alignment, control mapping |
| EU Artificial Intelligence Act | Regulation | European Union | Enacted (Aug 2024) | Risk classification, GPAI compliance tracking |

Additional Tracked Regulations

SignalBreak's governance module also tracks these regulations:

| Regulation | Jurisdiction | Status | Effective Date |
|---|---|---|---|
| California SB 1047 (Safe AI Models) | US/California | Proposed | TBD |
| Colorado AI Act | US/Colorado | Enacted | May 2024 |
| NYC Local Law 144 (Employment AI) | US/New York City | Enacted | July 2023 |
| Illinois BIPA (Biometric Privacy) | US/Illinois | Enacted | October 2008 |
| China AI Algorithm Regulation | China | Enacted | March 2022 |
| China Generative AI Measures | China | Enacted | August 2023 |
| Canada AIDA (Artificial Intelligence & Data Act) | Canada | Proposed | TBD |
| Brazilian AI Framework Bill | Brazil | Proposed | TBD |
| Japanese AI Utilization Guidelines | Japan | Voluntary | August 2019 |
| ISO/IEC 23894:2023 (AI Risk Management) | Global | Published | N/A |
| NIST AI 600-1 (Generative AI Profile) | US | Published | N/A |

Access Full List: Dashboard → Governance → Frameworks


How SignalBreak Supports Governance

1. Continuous Monitoring

Traditional governance relies on point-in-time assessments. SignalBreak provides continuous governance by monitoring AI providers 24/7:

| Traditional Approach | SignalBreak Approach |
|---|---|
| Quarterly AI inventory updates | Real-time workflow tracking |
| Manual vendor risk assessments | Automated provider health monitoring |
| Annual compliance audits | Continuous evidence generation |
| Static risk registers | Dynamic risk scoring (0-100 scale) |
| Spreadsheet-based tracking | API-driven compliance data |

Result: Your governance posture is always current, not 3-6 months out of date.

2. Evidence Pack Generation

SignalBreak automatically generates consulting-grade evidence packs (PDF reports) that:

  • Demonstrate compliance with ISO 42001, NIST AI RMF, and EU AI Act
  • Provide auditable evidence for internal/external audits
  • Quantify business impact of AI risks
  • Track compliance maturity over time
  • Include 90-day remediation roadmaps

Generated Evidence Includes:

  • AI system inventory (ISO 42001 Clause 6.2.2)
  • Risk assessment methodology (ISO 42001 Clause 6.1.3)
  • Third-party monitoring (ISO 42001 Clause 8.3)
  • Impact assessments (ISO 42001 Clause 8.4)
  • Continuous monitoring (ISO 42001 Clause 9.1)

See: Evidence Packs Guide

3. Risk-Based Decision Making

All three primary frameworks (ISO 42001, NIST AI RMF, EU AI Act) require risk-based approaches. SignalBreak provides:

Risk Scoring (0-100 scale)

  • Weighted calculation based on scenario impacts
  • RAG status (Red/Amber/Green) thresholds
  • Historical trend tracking
  • Projected improvement scores
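The mapping from a 0-100 risk score to a RAG status can be pictured with a short sketch. The cut-off values below are illustrative assumptions for demonstration only, not SignalBreak's published thresholds:

```python
def rag_status(risk_score: float) -> str:
    """Map a 0-100 risk score to a Red/Amber/Green status.

    Threshold values are assumed for illustration; the product's
    actual RAG boundaries may differ.
    """
    if not 0 <= risk_score <= 100:
        raise ValueError("risk score must be between 0 and 100")
    if risk_score >= 70:   # assumed Red threshold
        return "Red"
    if risk_score >= 40:   # assumed Amber threshold
        return "Amber"
    return "Green"
```

A score of 85 would report as Red, 55 as Amber, and 10 as Green under these assumed bands.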

Provider Concentration Analysis

  • Percentage of workflows per provider
  • Single-point-of-failure identification
  • Concentration risk alerts (>35% threshold)
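As a rough sketch of the concentration check, the share of workflows per provider can be computed and compared against the 35% threshold named above (the function name is hypothetical, not a SignalBreak API):

```python
from collections import Counter

CONCENTRATION_THRESHOLD = 0.35  # the >35% alert threshold from the text


def concentration_alerts(workflow_providers: list[str]) -> dict[str, float]:
    """Return providers whose share of workflows exceeds the threshold.

    Each list entry is the primary provider bound to one workflow.
    """
    if not workflow_providers:
        return {}
    total = len(workflow_providers)
    shares = {p: n / total for p, n in Counter(workflow_providers).items()}
    return {p: s for p, s in shares.items() if s > CONCENTRATION_THRESHOLD}
```

For example, five of eight workflows on one provider gives that provider a 62.5% share, well past the alert threshold.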

Scenario Impact Assessment

  • Business continuity impact quantification
  • Estimated downtime (hours/days)
  • Cost range estimates (£)
  • Likelihood × Impact matrices
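A Likelihood × Impact calculation for a single scenario might look like this minimal sketch; the function and its inputs are illustrative, not the product's actual methodology:

```python
def expected_impact(
    likelihood: float, cost_low: float, cost_high: float
) -> tuple[float, float]:
    """Expected cost range (£) for one scenario: likelihood × impact.

    likelihood is a probability in [0, 1]; cost_low/cost_high bound the
    estimated business impact if the scenario occurs.
    """
    if not 0 <= likelihood <= 1:
        raise ValueError("likelihood must be a probability in [0, 1]")
    return likelihood * cost_low, likelihood * cost_high
```

A 20% likelihood applied to a £10k-50k impact range yields an expected £2k-10k exposure for that scenario.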

See: Risk Scoring Methodology

4. Framework-Specific Compliance

Each framework has unique requirements. SignalBreak provides tailored compliance support:

ISO 42001: AI Management System

Focus: Systematic management of AI risks across lifecycle

SignalBreak provides:

  • AI system inventory (automated workflow tracking)
  • Risk assessment process (transparent, documented methodology)
  • Third-party relationship monitoring (provider health & signals)
  • Impact assessment (business impact quantification)
  • Monitoring & measurement (continuous signal detection)
  • Evidence for internal audits (evidence packs)

See: ISO 42001 Guide

NIST AI RMF: Risk Management Framework

Focus: Identify, assess, and manage AI risks

SignalBreak provides:

  • Govern: Policy-driven workflow classification (criticality levels)
  • Map: AI risk domain classification (MIT taxonomy integration)
  • Measure: Quantitative risk scoring + provider metrics
  • Manage: Mitigation recommendations + 90-day roadmaps

See: NIST AI RMF Guide

EU AI Act: Regulation

Focus: Risk-based regulatory compliance for AI systems

SignalBreak provides:

  • Risk classification (Unacceptable/High/Limited/Minimal)
  • High-risk system identification (based on use case + impact)
  • General Purpose AI (GPAI) provider compliance tracking
  • Conformity assessment support (evidence generation)
  • Post-market monitoring (continuous provider monitoring)

See: EU AI Act Guide


Getting Started with Governance

Step 1: Register Your AI Workflows

Before SignalBreak can provide governance support, you need an inventory of AI systems:

  1. Navigate to Workflows (Dashboard → AI Workflows)
  2. Add your workflows (minimum 3-5 critical workflows recommended)
  3. Configure provider bindings (primary + fallback)
  4. Set criticality levels (Critical, High, Medium, Low)
  5. Assign owners (for accountability)

Why this matters: Your workflow inventory is the foundation for all governance frameworks. ISO 42001 requires it (Clause 6.2.2), NIST AI RMF depends on it (Map function), and EU AI Act mandates it (risk classification).

Step 2: Execute Scenarios

SignalBreak calculates risk based on scenario impacts—what would happen if providers fail:

  1. Navigate to Scenarios (Dashboard → Scenarios)
  2. Create scenarios modelling potential disruptions
    • Example: "OpenAI GPT-4 Outage", "Anthropic Rate Limiting"
  3. Execute scenarios to calculate workflow impacts
  4. Review risk score (Dashboard shows overall 0-100 score)

Why this matters: Risk assessment is mandatory in all three frameworks. Without executed scenarios, your risk score remains 0 and you have no compliance evidence.

Step 3: Generate Evidence Pack

Once you have workflows and scenarios, generate your first evidence pack:

  1. Navigate to Governance (Dashboard → Governance → Evidence Pack)
  2. Click "Generate Evidence Pack"
  3. Wait 30-60 seconds (generates consulting-grade PDF)
  4. Download PDF (typical size: 12-16 pages)

What's included:

  • Executive summary
  • Risk scorecard (current + trajectory)
  • Provider concentration analysis
  • Signal analysis (recent provider changes)
  • Findings & recommendations (prioritised)
  • ISO 42001 compliance mapping
  • EU AI Act compliance status
  • 90-day improvement roadmap

See: Evidence Packs Guide

Step 4: Review Compliance Gaps

The evidence pack identifies gaps in your governance maturity:

Common gaps for new users:

  • Insufficient workflow coverage (need >80% of AI systems tracked)
  • No fallback providers configured (increases risk scores)
  • Missing workflow owners (reduces accountability)
  • No risk treatment plans (required by ISO 42001 Clause 8.2)

Remediation: Follow the 90-day roadmap in your evidence pack to address gaps systematically.

Step 5: Establish Continuous Monitoring

Set up ongoing governance:

  1. Schedule monthly evidence packs (trend tracking)
  2. Configure signal alerts (Dashboard → Providers → Health)
  3. Assign workflow owners (notifications for relevant signals)
  4. Review risk score quarterly (board-level reporting)
  5. Update scenarios as business changes (M&A, new workflows, etc.)

Compliance Maturity Levels

SignalBreak assesses your governance maturity across multiple dimensions:

| Maturity Level | Characteristics | Evidence Pack Score | Framework Readiness |
|---|---|---|---|
| 1. Ad-Hoc | No systematic tracking, reactive only | 0-30 | Not audit-ready |
| 2. Developing | Basic inventory, manual processes | 31-50 | Partial evidence |
| 3. Defined | Documented processes, some automation | 51-70 | Most controls in place |
| 4. Managed | Systematic monitoring, metrics-driven | 71-85 | Audit-ready |
| 5. Optimising | Continuous improvement, predictive | 86-100 | Best-in-class |
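The score bands above map directly to a lookup, sketched here for illustration (band boundaries taken from the table):

```python
# (low, high, level name) bands from the maturity table
MATURITY_BANDS = [
    (0, 30, "Ad-Hoc"),
    (31, 50, "Developing"),
    (51, 70, "Defined"),
    (71, 85, "Managed"),
    (86, 100, "Optimising"),
]


def maturity_level(score: int) -> str:
    """Return the maturity level name for an evidence pack score (0-100)."""
    for low, high, name in MATURITY_BANDS:
        if low <= score <= high:
            return name
    raise ValueError("score must be between 0 and 100")
```

So a score of 75 sits in the "Managed" band, the minimum named below for ISO 42001 certification readiness.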

Target for Compliance:

  • ISO 42001 certification: Level 4 (Managed) minimum
  • NIST AI RMF adoption: Level 3 (Defined) minimum
  • EU AI Act high-risk systems: Level 4 (Managed) minimum

Your Current Level: Check your latest evidence pack's "Decision Readiness Score"


Framework Comparison

Choose the right framework(s) for your organisation:

| Framework | Best For | Mandatory? | Certification Available? | Annual Cost (Estimate) |
|---|---|---|---|---|
| ISO 42001 | Organisations seeking third-party certification, global vendors | No | Yes (accredited bodies) | £15k-50k (cert + annual audits) |
| NIST AI RMF | US organisations, voluntary risk management, government contractors | No (some sectors) | No (self-assessment) | £0 (voluntary) |
| EU AI Act | Organisations deploying AI in EU, high-risk systems | Yes (for covered systems) | Via notified bodies (high-risk only) | Variable (depends on risk level) |

Can I use multiple frameworks?

Yes—and it's recommended. The frameworks are complementary:

  • ISO 42001: Management system structure (HOW you govern AI)
  • NIST AI RMF: Risk assessment methodology (HOW you assess risks)
  • EU AI Act: Regulatory compliance (WHAT you must comply with)

SignalBreak supports all three simultaneously. Your evidence pack includes mappings for each.


Common Questions

Do I need to implement all three frameworks?

No. Start with one based on your needs:

  • If you're in the EU with high-risk AI: EU AI Act is mandatory → add ISO 42001 for structure
  • If you're seeking certification: ISO 42001 is the only certifiable option
  • If you want voluntary best practices: NIST AI RMF is free and widely adopted

Can SignalBreak replace my governance consultant?

No. SignalBreak provides:

  • ✅ Automated data collection & monitoring
  • ✅ Evidence generation for audits
  • ✅ Risk quantification & tracking
  • ✅ Compliance gap identification

You still need expert judgment for:

  • ❌ Policy development
  • ❌ Audit preparation & strategy
  • ❌ Legal interpretation
  • ❌ Stakeholder engagement

Best Practice: Use SignalBreak to reduce consultant hours by 40-60% (data gathering, evidence generation, monitoring). Use consultants for high-value advisory.

How often should I generate evidence packs?

Minimum:

  • Monthly (for continuous governance)
  • Quarterly (for board reporting)
  • Before audits (ISO 42001, EU AI Act conformity assessments)
  • After major changes (new workflows, provider changes, incidents)

Best Practice: Monthly generation with quarterly deep reviews.

Can I share evidence packs with auditors?

Yes. Evidence packs are designed for external sharing:

  • Professional formatting (consulting-grade quality)
  • Methodology transparency (auditable scoring)
  • Source attribution (MIT Risk Repository, AIID, etc.)
  • Classification marking (CONFIDENTIAL by default, customisable)

Tip: Generate a pack 1-2 weeks before your audit so data is current.

Do evidence packs expire?

Data freshness matters for governance. Evidence packs include:

  • Data as of: Date data was collected
  • Report period: Time range covered (typically previous month)
  • Next assessment: Recommended date for next pack

Recommendation: Evidence >90 days old should not be used for audit evidence. Generate fresh packs quarterly at minimum.


Next Steps

  1. Choose your framework(s) based on your regulatory requirements and business needs
  2. Read framework-specific guides: ISO 42001 Guide, NIST AI RMF Guide, EU AI Act Guide
  3. Generate your first evidence pack: Evidence Packs Guide
  4. Understand risk scoring: Risk Scoring Methodology