ISO/IEC 42001:2023 Guide
What is ISO 42001?
ISO/IEC 42001:2023 is the world's first international standard for AI Management Systems (AIMS). Published in December 2023, it provides a framework for organizations to develop, deploy, and continuously improve AI systems responsibly and ethically.
Official Name: ISO/IEC 42001:2023 - Information technology — Artificial intelligence — Management system
Developed By: ISO/IEC JTC 1/SC 42 (Artificial Intelligence subcommittee)
Publication Date: December 15, 2023
Standard Type: Certifiable management system standard (like ISO 9001, ISO 27001)
Why ISO 42001 Matters for AI Governance
1. The Only Certifiable AI Governance Standard
ISO 42001 is currently the only internationally recognised AI governance standard against which organisations can achieve formal certification.
| Framework | Certification Available? | Auditing Body |
|---|---|---|
| ISO 42001 | ✅ Yes | Accredited certification bodies (BSI, SGS, Bureau Veritas, etc.) |
| NIST AI RMF | ❌ No | Self-assessment only |
| EU AI Act | ⚠️ Partial | Conformity assessment (high-risk systems only) |
This matters because:
- Certification provides third-party validation of your AI governance practices
- Demonstrates due diligence to customers, regulators, and investors
- May become table stakes for enterprise AI procurement (similar to ISO 27001 for security)
2. Designed Specifically for AI
Unlike generic quality or IT management standards, ISO 42001 addresses AI-specific risks:
- Bias and fairness in AI decision-making
- Explainability and transparency requirements
- Data quality for training and operation
- Third-party AI components (vendor risk management)
- Model drift and degradation over time
- Incident response for AI failures
Comparison:
| Standard | Focus | AI-Specific? |
|---|---|---|
| ISO 9001 | Quality management | ❌ Generic (any industry) |
| ISO 27001 | Information security | ❌ IT security (not AI-specific) |
| ISO 42001 | AI management system | ✅ Purpose-built for AI |
3. Harmonised with Other Standards
ISO 42001 follows the ISO Harmonized Structure (Annex SL), making it compatible with other management systems:
- ISO 9001 (Quality)
- ISO 27001 (Information Security)
- ISO 27701 (Privacy)
- ISO 37001 (Anti-Bribery)
Benefit: If you already have ISO 9001 or ISO 27001, many processes (management review, internal audit, document control) can be shared across standards, reducing certification burden.
4. Regulatory Alignment
ISO 42001 is increasingly referenced in AI regulations:
| Regulation | ISO 42001 Status |
|---|---|
| EU AI Act | Harmonised standards can provide presumption of conformity (Article 40); ISO 42001 is a candidate for risk-management requirements |
| UK AI Regulation | Government encourages ISO 42001 for demonstrating compliance with AI principles |
| Singapore Model AI Governance Framework | ISO 42001 cited as implementation guidance |
Expected trend: Regulatory bodies will increasingly accept ISO 42001 certification as evidence of compliance with AI-specific legal requirements.
How SignalBreak Maps to ISO 42001
SignalBreak provides automated evidence generation for 10 key ISO 42001 clauses. Your evidence packs demonstrate compliance with:
| ISO Clause | Requirement | SignalBreak Evidence | Coverage |
|---|---|---|---|
| 4.1 | Understanding context | Provider landscape analysis showing AI ecosystem dependencies | 🟢 Full |
| 4.2 | Interested parties | Workflow owners and business context captured | 🟡 Partial |
| 6.1.3 | Risk assessment | Decision Readiness Scorecard with transparent methodology | 🟢 Full |
| 6.2.2 | AI system inventory | Workflow registry with complete AI system documentation | 🟢 Full |
| 8.2 | Risk treatment | Prioritised recommendations with risk mitigation strategies | 🟢 Full |
| 8.3 | Third-party relationships | Provider concentration analysis, continuous monitoring | 🟢 Full |
| 8.4 | Impact assessment | Business impact quantification per finding | 🟢 Full |
| 9.1 | Monitoring & measurement | Continuous signal monitoring (47 sources, 21 providers) | 🟢 Full |
| 9.2 | Internal audit | Evidence Pack generation as periodic self-assessment | 🟡 Partial |
| 10.2 | Continual improvement | Score trajectory, projected improvement from recommendations | 🟢 Full |
Coverage Summary:
- Full coverage: 8 of 10 clauses (80%)
- Partial coverage: 2 of 10 clauses (20%)
- No coverage (gaps): 0 clauses
ISO 42001 Clause Details
Clause 4.1: Understanding the Organization and Its Context
Requirement: Determine external and internal issues relevant to the AIMS, including the organization's purpose and how it affects achieving intended AI outcomes.
SignalBreak Evidence:
- Provider landscape analysis (Evidence Pack p.6)
- External context: AI provider ecosystem, industry dependencies, regulatory environment
- Internal context: Workflow dependencies, technical constraints, business criticality
Example from Evidence Pack:
"SignalBreak continuously monitors 47 sources across 21 AI providers, providing comprehensive visibility into the external AI ecosystem. Internal context is captured through workflow documentation including business criticality (4 Critical, 2 High, 2 Medium), ownership assignment, and technical dependencies."
Audit Readiness: ✅ Fully evidenced — Evidence Pack Section 3 (Provider Analysis) demonstrates both external (provider ecosystem) and internal (workflow criticality) context.
Clause 4.2: Understanding Needs and Expectations of Interested Parties
Requirement: Identify interested parties relevant to the AIMS, determine their requirements, and decide which requirements will be addressed.
SignalBreak Evidence:
- Workflow owners identified in registry (Evidence Pack Appendix B)
- Business stakeholders implied through criticality ratings
- Implicit interested parties: Customers (affected by workflow availability), compliance teams (governance requirements)
Gap: SignalBreak captures workflow owners but does not explicitly document all interested parties (e.g., external customers, regulators, partners) or their specific requirements.
Audit Readiness: 🟡 Partial — Workflow owners are documented, but full interested party registry with explicit requirements needs supplementary documentation.
Remediation for Certification:
- Create a stakeholder register listing all interested parties
- Document their specific AI-related requirements (e.g., customers require 99.5% uptime, regulators require explainability)
- Cross-reference to workflows (which workflows serve which stakeholders)
- Use SignalBreak workflow owner field as starting point
Estimated effort: 2-4 hours (small organisation), 1-2 weeks (enterprise)
Clause 6.1.3: Risk Assessment
Requirement: Define and apply an AI risk assessment process that identifies, analyses, and evaluates risks.
SignalBreak Evidence:
- Decision Readiness Scorecard (Evidence Pack p.2-3)
- Risk methodology documentation (Evidence Pack Appendix C)
- Five-dimension assessment framework:
- Workflow Coverage (25%)
- Fallback Readiness (25%)
- Provider Diversity (20%)
- Signal Response (15%)
- Governance Maturity (15%)
- Transparent scoring with weighted components
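The weighted five-dimension scoring above can be sketched as follows. The weights come from the list; the example dimension values and the RAG thresholds are illustrative assumptions (the source states only that 67/100 is Amber and that ~76 marks the Amber → Green threshold).

```python
# Sketch of a five-dimension weighted readiness score. Weights are from the
# Evidence Pack; dimension values and RAG band edges are assumptions.
WEIGHTS = {
    "workflow_coverage": 0.25,
    "fallback_readiness": 0.25,
    "provider_diversity": 0.20,
    "signal_response": 0.15,
    "governance_maturity": 0.15,
}

def readiness_score(dimensions: dict[str, float]) -> float:
    """Weighted sum of per-dimension scores (each 0-100) -> overall 0-100."""
    return round(sum(WEIGHTS[k] * dimensions[k] for k in WEIGHTS), 1)

def rag_status(score: float) -> str:
    """Map a score to a RAG band (thresholds assumed, not documented)."""
    if score >= 76:
        return "Green"
    if score >= 40:
        return "Amber"
    return "Red"

example = {
    "workflow_coverage": 70, "fallback_readiness": 55,
    "provider_diversity": 60, "signal_response": 80,
    "governance_maturity": 75,
}
print(readiness_score(example), rag_status(readiness_score(example)))
```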
Example from Evidence Pack:
"The SignalBreak Decision Readiness Framework v2.1 provides a structured risk assessment across five dimensions. Methodology is fully documented and reproducible. Current score: 67/100 (Amber), indicating moderate AI dependency risk requiring mitigation strategies."
Audit Readiness: ✅ Fully evidenced — ISO 42001 requires a defined and applied risk assessment process. SignalBreak's framework satisfies both requirements with documented methodology and regular application (evidence pack generation).
Key audit questions SignalBreak answers:
- ❓ "How do you identify AI risks?" → Scenario-based risk modelling with provider failure analysis
- ❓ "How do you analyse risks?" → Five-dimension weighted scoring with objective metrics
- ❓ "How do you evaluate risks?" → RAG status (Red/Amber/Green) with defined thresholds
Clause 6.2.2: AI System Inventory
Requirement: Establish and maintain an inventory of AI systems within the scope of the AIMS.
SignalBreak Evidence:
- Workflow registry with complete inventory (Evidence Pack Appendix B)
- Each workflow documented with:
- Unique ID
- Workflow name
- AI capability type (Text Generation, Image Analysis, etc.)
- Provider bindings (primary + fallbacks)
- Criticality level (Critical, High, Medium, Low)
- Assigned owner
- Dependency mapping
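A minimal sketch of an inventory record carrying the fields listed above, plus the two checks auditors care about most. The class, field names, and helper are illustrative assumptions, not SignalBreak's actual schema.

```python
from dataclasses import dataclass, field

# Illustrative Clause 6.2.2 inventory entry mirroring the fields listed above.
@dataclass
class WorkflowRecord:
    workflow_id: str
    name: str
    capability: str                # e.g. "Text Generation"
    primary_provider: str
    fallback_providers: list[str] = field(default_factory=list)
    criticality: str = "Medium"    # Critical / High / Medium / Low
    owner: str = ""
    dependencies: list[str] = field(default_factory=list)

def inventory_gaps(records: list[WorkflowRecord]) -> list[str]:
    """Flag records an auditor would question: no owner, or no fallback on Critical."""
    gaps = []
    for r in records:
        if not r.owner:
            gaps.append(f"{r.workflow_id}: no assigned owner")
        if r.criticality == "Critical" and not r.fallback_providers:
            gaps.append(f"{r.workflow_id}: Critical workflow without fallback")
    return gaps
```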
Example from Evidence Pack:
"SignalBreak maintains a comprehensive workflow registry documenting all AI systems in scope. The current inventory includes 8 AI workflows across 7 providers, with full dependency mapping, ownership assignment, and criticality classification."
Audit Readiness: ✅ Fully evidenced — Workflow registry satisfies ISO 42001 Clause 6.2.2 inventory requirements. Each workflow record includes mandatory fields for certification.
Best Practice: Ensure 100% of production AI systems are registered in SignalBreak. Unregistered workflows create inventory gaps that auditors will flag as non-conformities.
Verification steps for audit:
- Cross-reference workflow registry against production deployment manifests
- Confirm no "shadow AI" (undocumented AI usage)
- Verify all workflows have assigned owners (no orphaned systems)
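The cross-reference step reduces to a set comparison between registry IDs and production deployment manifests; the function and the workflow IDs below are hypothetical.

```python
# Compare the workflow registry against deployment manifests to surface
# "shadow AI" (deployed but unregistered) and stale registry entries.
def reconcile(registry_ids: set[str], deployed_ids: set[str]) -> dict[str, set[str]]:
    return {
        "shadow_ai": deployed_ids - registry_ids,      # running but undocumented
        "stale_entries": registry_ids - deployed_ids,  # registered but not deployed
    }

result = reconcile({"wf-1", "wf-2", "wf-3"}, {"wf-2", "wf-3", "wf-4"})
```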
Clause 8.2: AI Risk Treatment
Requirement: Define and apply an AI risk treatment process to select appropriate risk treatment options and determine controls.
SignalBreak Evidence:
- Prioritised recommendations (Evidence Pack p.5)
- Risk mitigation strategies per finding
- Owner assignment and timelines
- Success criteria for risk reduction
- Projected score improvement from recommended actions
Example from Evidence Pack:
"Each Evidence Pack includes prioritised recommendations with explicit risk reduction targets. Recommendations include owner assignment, timeline, success criteria, and estimated score improvement. Current recommendations project +18 point improvement if implemented within stated timelines."
Audit Readiness: ✅ Fully evidenced — Evidence Pack recommendations demonstrate risk treatment process with:
- Identified controls: Specific actions (e.g., "Add fallback provider to Customer Support Summarisation workflow")
- Treatment options: Accept, mitigate, transfer, or avoid (implied through criticality-based prioritisation)
- Implementation plan: Timeline, owner, success criteria
Key audit questions SignalBreak answers:
- ❓ "How do you select risk treatments?" → Prioritised by business impact and risk score contribution
- ❓ "Who is responsible for risk treatment?" → Owners assigned in recommendations
- ❓ "How do you verify effectiveness?" → Score trajectory tracking shows improvement from implemented recommendations
Clause 8.3: Third-Party Relationships
Requirement: Determine and apply criteria for the selection, evaluation, and monitoring of third parties providing AI components.
SignalBreak Evidence:
- Provider concentration analysis (Evidence Pack p.6)
- Provider health signals tracked continuously (Evidence Pack p.7)
- Risk-based provider assessment (Tier 1-4 classification)
- Continuous monitoring (5-minute polling cycle for status pages)
Example from Evidence Pack:
"SignalBreak monitors all third-party AI providers with continuous signal detection. Provider concentration analysis identifies supply chain concentration risks, with current maximum concentration at 25% (OpenAI). Elevated risk providers are flagged with specific recommendations."
Audit Readiness: ✅ Fully evidenced — SignalBreak demonstrates third-party management across:
- Selection criteria: Provider tiers (Tier 1 = enterprise-grade, 99.9%+ SLA)
- Evaluation: Health monitoring, incident tracking, availability metrics
- Monitoring: Continuous signal detection from 47 sources across 21 providers
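The concentration figure quoted above (25% maximum) can be reproduced as a share-of-workflows calculation. The formula is an assumption, consistent with 2 of 8 workflows bound to OpenAI; the provider list is illustrative.

```python
from collections import Counter

# Provider concentration as the share of workflows bound to each primary provider.
def concentration(primary_providers: list[str]) -> dict[str, float]:
    counts = Counter(primary_providers)
    total = len(primary_providers)
    return {p: round(100 * n / total, 1) for p, n in counts.items()}

# 8 workflows across 7 providers, matching the Evidence Pack example.
shares = concentration(["OpenAI", "OpenAI", "Anthropic", "Mistral",
                        "Google", "Cohere", "AWS", "Azure"])
max_share = max(shares.values())  # 25.0 -> OpenAI concentration
```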
ISO 42001 vs Traditional Vendor Management:
| Traditional Vendor Management | ISO 42001 AI Vendor Management | SignalBreak Evidence |
|---|---|---|
| Annual vendor review | Continuous monitoring | 5-minute status polling |
| Static risk assessment | Dynamic risk scoring | Real-time provider health |
| Contract-based SLAs | Actual availability tracking | Historical uptime metrics |
| Manual incident tracking | Automated signal detection | 47 sources monitored |
Certification tip: Combine SignalBreak evidence with contractual documentation (SOW, SLA, DPA) for complete third-party evidence package.
Clause 8.4: AI System Impact Assessment
Requirement: Determine whether an AI system impact assessment is required and conduct such assessment where required.
SignalBreak Evidence:
- Business impact quantification per finding (Evidence Pack p.4)
- Downtime hours and cost estimates
- Scenario-based impact modelling (what-if analysis for provider failures)
- Likelihood × Impact matrices
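A hedged sketch of the impact quantification: the source names the factors (downtime hours, transaction volume, average value, customer sensitivity) but not the formula, so the multiplicative form below is an assumption that happens to reproduce the £15k-50k range quoted in the example.

```python
# Illustrative outage-cost estimate; the formula and parameter values are
# assumptions, not SignalBreak's documented methodology.
def business_impact(downtime_hours: float, tx_per_hour: float,
                    avg_value: float, sensitivity: float = 1.0) -> float:
    """Estimated cost of an outage window; sensitivity scales customer exposure."""
    return round(downtime_hours * tx_per_hour * avg_value * sensitivity, 2)

low = business_impact(24, 50, 12.5)        # 24h outage  -> £15,000
high = business_impact(72, 50, 12.5, 1.1)  # 72h, sensitive customers -> £49,500
```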
Example from Evidence Pack:
"Each finding includes business impact quantification with transparent methodology. Impact estimates include downtime hours (24-72h), cost ranges (£15k-50k), and likelihood assessment (Medium: 2-4 incidents/year based on historical provider availability). Calculations based on workflow transaction volume, average value, and customer sensitivity factors."
Audit Readiness: ✅ Fully evidenced — Impact assessments satisfy ISO 42001 requirements for:
- Determining necessity: Critical workflows automatically flagged for impact assessment
- Conducting assessment: Quantified business impact (downtime, cost, customer satisfaction)
- Documentation: Evidence Pack findings section provides auditable impact records
ISO 42001 Impact Assessment Triggers:
| Trigger | SignalBreak Detection |
|---|---|
| High-risk AI system | Workflows with "Critical" or "High" criticality |
| New AI deployment | New workflow creation (recommended: generate evidence pack before production) |
| Significant changes | Provider binding changes, model upgrades (tracked in change log) |
| Incidents | Provider outages, model degradation (signals detected automatically) |
Clause 9.1: Monitoring, Measurement, Analysis and Evaluation
Requirement: Determine what needs to be monitored and measured, methods for monitoring and measurement, when monitoring and measurement shall be performed, and when results shall be analysed and evaluated.
SignalBreak Evidence:
- Continuous provider monitoring (47 sources, 21 providers)
- Signal detection with 7-day reporting cycle
- Trend analysis with historical comparison
- Defined metrics: Decision Readiness Score (0-100), provider availability %, incident count
Example from Evidence Pack:
"SignalBreak provides continuous monitoring across 47 provider sources covering 21 AI providers. Signals are detected, classified, and correlated to customer workflows automatically. Evidence Packs are generated on configurable cadence (recommended: monthly) with trend analysis comparing to previous assessment periods."
Audit Readiness: ✅ Fully evidenced — SignalBreak demonstrates systematic monitoring:
| ISO 42001 Question | SignalBreak Answer |
|---|---|
| What to monitor? | AI provider health, workflow availability, risk score |
| Methods for monitoring? | Statuspage.io polling, signal classification, risk calculation |
| When to monitor? | Continuous (5-minute polling), periodic (monthly evidence packs) |
| When to analyse? | Real-time (signal detection), monthly (evidence pack generation) |
Monitoring Cadence Recommendations:
| Organisation Size | Evidence Pack Frequency | Rationale |
|---|---|---|
| Startup (<50 employees) | Quarterly | Lower AI complexity, limited resources |
| SME (50-500 employees) | Monthly | Growing AI footprint, board reporting |
| Enterprise (500+ employees) | Monthly + on-demand | High AI dependency, regulatory requirements |
| Regulated (FS, Healthcare) | Monthly minimum | Compliance evidence, audit preparation |
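The cadence table can be encoded as a simple lookup; the band edges for the in-between sizes (exactly 50 or 500 employees) are assumptions, since the table leaves them ambiguous.

```python
# Evidence Pack cadence recommendation from the table above.
def evidence_pack_cadence(employees: int, regulated: bool = False) -> str:
    if regulated:                 # FS, Healthcare: compliance evidence
        return "monthly (minimum)"
    if employees < 50:            # startup
        return "quarterly"
    if employees <= 500:          # SME (boundary placement assumed)
        return "monthly"
    return "monthly + on-demand"  # enterprise
```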
Clause 9.2: Internal Audit
Requirement: Conduct internal audits at planned intervals to provide information on whether the AIMS conforms to requirements and is effectively implemented.
SignalBreak Evidence:
- Evidence Pack generation as periodic self-assessment
- Structured methodology ensures consistent evaluation
- Historical comparison shows compliance trends
Gap: SignalBreak provides assessment evidence but does not replace formal internal audit procedures. ISO 42001 requires:
- Audit plan (scope, schedule, audit criteria)
- Competent auditors (internal or external)
- Audit reports (findings, corrective actions)
- Management review of audit results
Audit Readiness: 🟡 Partial — Evidence Pack supports audit activities but is not itself an audit.
Remediation for Certification:
Establish formal audit schedule
- Example: Quarterly internal audits covering all ISO 42001 clauses
- Rotate focus areas (Q1: Risk Assessment, Q2: Third-Party, Q3: Monitoring, Q4: Improvement)
Use Evidence Pack as audit input
- Provide most recent Evidence Pack to internal auditor before audit
- Auditor uses SignalBreak data as objective evidence
- Auditor verifies processes beyond SignalBreak scope (policies, training, management review)
Document audit procedure
- Create "Internal Audit Procedure for AI Management System"
- Reference SignalBreak Evidence Pack as key input (but not sole evidence)
- Define auditor competence requirements
Estimated effort: 1-2 days to create audit procedure, 2-4 hours per audit execution
Tip: Many organisations use external consultants for internal audits (especially first-time ISO 42001). SignalBreak evidence packs reduce consultant hours by 40-60% (data gathering already done).
Clause 10.2: Continual Improvement
Requirement: Continually improve the suitability, adequacy, and effectiveness of the AIMS.
SignalBreak Evidence:
- Score trajectory showing improvement over time (Evidence Pack p.2)
- Recommendations with projected improvement
- Historical comparison to previous assessments
- Trend analysis (improving, stable, or declining)
Example from Evidence Pack:
"SignalBreak tracks Decision Readiness Score over time, providing visibility into governance improvement trajectory. Current score: 67/100 (up from 52 last month, +15 points). Each Evidence Pack includes recommendations with projected score improvement. Current recommendations estimate +18 point improvement if implemented within stated timelines."
Audit Readiness: ✅ Fully evidenced — Continual improvement demonstrated through:
- Measurement: Score tracking over time
- Analysis: Gap analysis (current vs. target state)
- Action: Prioritised recommendations
- Verification: Next evidence pack shows whether recommendations were implemented
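The trend analysis (improving, stable, or declining) might be classified like this over successive Evidence Pack scores; the ±2-point dead band is an assumed threshold, not documented.

```python
# Classify a sequence of Decision Readiness Scores into a trend label.
def trend(scores: list[int]) -> str:
    if len(scores) < 2:
        return "insufficient data"
    delta = scores[-1] - scores[0]
    if delta > 2:
        return "improving"
    if delta < -2:
        return "declining"
    return "stable"

print(trend([52, 67, 76]))  # prints "improving"
```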
Continual Improvement Cycle with SignalBreak:
Month 1: Generate Evidence Pack → Score: 52/100 (Red)
↓
Month 1-2: Implement top 3 recommendations
1. Add fallback to critical workflows (+8 points)
2. Reduce OpenAI concentration (+5 points)
3. Assign workflow owners (+2 points)
↓
Month 3: Generate Evidence Pack → Score: 67/100 (Amber)
↓
Month 3-4: Implement next 3 recommendations
4. Enable human-in-loop for high-risk workflows (+3 points)
5. Document incident response procedures (+4 points)
6. Expand monitoring coverage (+2 points)
↓
Month 5: Generate Evidence Pack → Score: 76/100 (Amber → Green threshold)
ISO 42001 Audit Evidence: Show the auditor 3-6 months of evidence packs demonstrating score improvement from implemented recommendations. This proves a systematic continual improvement process.
Scoring Methodology (ISO 42001 Perspective)
How SignalBreak Calculates Your ISO 42001 Readiness
SignalBreak does not assign a specific "ISO 42001 compliance score." Instead, it provides:
- Decision Readiness Score (0-100): Overall AI risk governance maturity
- Clause-by-clause evidence mapping: Shows which ISO 42001 requirements are satisfied
- Gap analysis: Identifies missing evidence for certification
Why no ISO 42001 score? ISO 42001 is a binary certification (you're either certified or not). There's no "75% compliant" status. SignalBreak instead focuses on evidence readiness — do you have the data and documentation an auditor needs?
Evidence Pack as Certification Input
Your evidence pack provides objective evidence for 8 of 10 key clauses. For certification, you'll also need:
| ISO 42001 Requirement | SignalBreak Evidence | Additional Documentation Needed |
|---|---|---|
| Context of organisation (4.1) | ✅ Provider landscape | Organisational strategic plan, SWOT analysis |
| Interested parties (4.2) | 🟡 Workflow owners | Full stakeholder register with requirements |
| Leadership (5.x) | ❌ Not in scope | Top management commitment, resource allocation |
| Planning (6.x) | ✅ Risk assessment, inventory | AI objectives, KPIs, risk treatment plan |
| Support (7.x) | ❌ Not in scope | Competence records, training logs, documented procedures |
| Operation (8.x) | ✅ Risk treatment, third-party, impact | Operational procedures, change management |
| Performance (9.x) | ✅ Monitoring, 🟡 audit | Formal audit plan, management review records |
| Improvement (10.x) | ✅ Continual improvement | Corrective action records, lessons learned |
SignalBreak provides ~60% of certification evidence (the data-intensive clauses). You'll need supplementary policy and procedural documentation for the remaining 40%.
Control Categories and What They Assess
ISO 42001 organises its requirements into seven main sections (Clauses 4-10), preceded by scope, normative references, and terms and definitions:
Section 4: Context of the Organisation
What it assesses: Does your organisation understand its internal and external context for AI use?
Key controls:
- External issues (regulatory environment, market conditions, technology trends)
- Internal issues (capabilities, resources, culture)
- Interested parties (stakeholders affected by AI systems)
- AIMS scope definition
SignalBreak evidence:
- ✅ External context via provider landscape
- 🟡 Internal context via workflow criticality
- 🟡 Interested parties via workflow owners (needs expansion)
Section 5: Leadership
What it assesses: Does top management demonstrate commitment to the AIMS?
Key controls:
- Management commitment and policy
- Organisational roles and responsibilities
- Communication of AIMS importance
SignalBreak evidence: ❌ Not in scope — requires executive-level documentation (policy statements, role definitions)
Recommendation: SignalBreak evidence packs can support management review meetings (provide data for decision-making), but you must document management's commitment separately.
Section 6: Planning
What it assesses: How does the organisation plan to address AI risks and opportunities?
Key controls:
- Risk and opportunity assessment (Clause 6.1.3)
- AI objectives and planning to achieve them
- AI system inventory (Clause 6.2.2)
SignalBreak evidence:
- ✅ Risk assessment (Decision Readiness Framework)
- ✅ AI inventory (Workflow registry)
- ❌ AI objectives (organisational goals like "reduce AI-related downtime by 50%")
Recommendation: Use the SignalBreak score as the basis for AI objectives. Example: "Achieve Green status within 12 months" or "Reduce provider concentration below 25% by Q3."
Section 7: Support
What it assesses: Does the organisation provide resources, competence, and awareness for the AIMS?
Key controls:
- Resources (people, infrastructure, budget)
- Competence (training, skills)
- Awareness (understanding of AIMS importance)
- Communication (internal and external)
- Documented information (document control)
SignalBreak evidence: ❌ Not in scope — requires HR and training documentation
Recommendation: Create AI governance training referencing SignalBreak processes (how to interpret evidence packs, how to respond to provider signals). Document training attendance for audit.
Section 8: Operation
What it assesses: How does the organisation implement and control AI-related processes?
Key controls:
- Operational planning and control
- AI risk treatment (Clause 8.2)
- Third-party relationships (Clause 8.3)
- Impact assessment (Clause 8.4)
- AI system development and use
- Human oversight
- Data management
- Information for AI system users
SignalBreak evidence:
- ✅ Risk treatment (recommendations with timelines)
- ✅ Third-party management (provider monitoring)
- ✅ Impact assessment (business impact quantification)
- ❌ Development processes, data management (needs separate documentation)
Recommendation: SignalBreak covers third-party AI (models as a service). For internally developed AI, you'll need separate documentation on development lifecycle, testing, validation, data governance.
Section 9: Performance Evaluation
What it assesses: How does the organisation monitor, measure, and improve the AIMS?
Key controls:
- Monitoring and measurement (Clause 9.1)
- Analysis and evaluation
- Internal audit (Clause 9.2)
- Management review
SignalBreak evidence:
- ✅ Monitoring (continuous signal detection)
- 🟡 Internal audit (evidence pack as audit input, not audit itself)
- ❌ Management review (executive-level meeting records)
Recommendation: Present evidence pack at quarterly management review meetings. Document review outcomes (decisions, resource allocation, corrective actions) separately.
Section 10: Improvement
What it assesses: How does the organisation drive continual improvement of the AIMS?
Key controls:
- Continual improvement (Clause 10.2)
- Nonconformity and corrective action
- Preventive action
SignalBreak evidence:
- ✅ Continual improvement (score trajectory)
- ❌ Nonconformity management (incident records, root cause analysis)
Recommendation: When AI-related incidents occur (e.g., provider outage affecting production), reference SignalBreak signals in incident reports. Show how SignalBreak detected the issue and what corrective actions were taken.
How to Improve Your ISO 42001 Readiness
Step 1: Achieve Green Status
Your Decision Readiness Score is a proxy for AI governance maturity. Auditors will ask: "How do you know your AI management system is effective?"
Evidence: Show Green status with stable or improving trend.
Actions to improve score:
- Add fallback providers to all Critical workflows (fastest path to score improvement)
- Reduce provider concentration below 25% (spreads risk)
- Assign workflow owners (demonstrates accountability)
- Enable human-in-loop for high-risk workflows (reduces estimated downtime)
See Risk Scoring Methodology for detailed improvement strategies.
Step 2: Close Clause Gaps
Address the 2 partial coverage clauses before certification audit:
Gap 1: Clause 4.2 — Interested Parties
Current state: 🟡 Workflow owners documented, but not full stakeholder registry
Remediation: Create a stakeholder register documenting:
- Customers affected by each workflow
- Internal teams (compliance, legal, IT)
- External partners (data processors, AI vendors)
- Regulators (FCA, ICO, etc. depending on industry)
- For each stakeholder: their AI-related requirements
Template:
| Stakeholder | Relationship | AI-Related Requirements | Addressed How |
|---|---|---|---|
| Enterprise customers | External | 99.5% SLA, explainability | Fallback providers, human-in-loop |
| Compliance team | Internal | Evidence for audits, risk reporting | Monthly evidence packs |
| OpenAI | Third-party | API terms compliance, data residency | Contract review, DPA |
| ICO (UK regulator) | External | GDPR compliance for AI processing | DPIA, transparency notices |
Estimated effort: 4-8 hours
Gap 2: Clause 9.2 — Internal Audit
Current state: 🟡 Evidence pack supports audits, but no formal audit procedure
Remediation: Create an Internal Audit Procedure for AI Management System covering:
- Audit schedule: Quarterly audits covering all AIMS clauses
- Audit scope: Which AI systems, processes, locations
- Auditor competence: Requirements (ISO 42001 knowledge, AI understanding)
- Audit process:
- Planning (review last evidence pack, identify focus areas)
- Execution (interviews, document review, observation)
- Reporting (findings, nonconformities, observations)
- Follow-up (corrective actions)
- SignalBreak integration: Evidence pack as key input to audit
Sample audit plan:
| Quarter | Focus Areas | SignalBreak Input |
|---|---|---|
| Q1 | Risk assessment, AI inventory | Evidence pack Section 2-3 |
| Q2 | Third-party management, monitoring | Evidence pack Section 3-4 |
| Q3 | Impact assessment, risk treatment | Evidence pack Section 2, Recommendations |
| Q4 | Continual improvement, management review | Score trajectory, trend analysis |
Estimated effort: 1-2 days to create procedure, 4-6 hours per audit execution
Step 3: Establish Management Review Process
ISO 42001 Clause 9.3 requires top management to review the AIMS at planned intervals.
Recommendation: Quarterly management review meetings covering:
AIMS performance:
- Decision Readiness Score trend (from evidence packs)
- Provider incidents and their business impact
- Compliance status (progress toward Green)
Internal audit results:
- Findings from last internal audit
- Corrective actions taken
Changes affecting AIMS:
- New AI systems deployed
- Provider changes (new contracts, terminations)
- Regulatory changes (new AI laws)
Resource needs:
- Budget for AI governance tools (SignalBreak subscription)
- Staff for AI risk management
- Training requirements
Improvement opportunities:
- Recommendations from evidence pack
- Lessons learned from incidents
Output: Management review minutes documenting decisions, actions, and resource allocation.
The SignalBreak evidence pack directly supports the AIMS performance, internal audit results (via audit input), and improvement opportunities agenda items.
Step 4: Document Policies and Procedures
ISO 42001 requires documented information for key processes. At minimum, create:
| Document | Purpose | SignalBreak Reference |
|---|---|---|
| AI Governance Policy | Top-level commitment to responsible AI | Reference Decision Readiness Framework as risk methodology |
| AI System Lifecycle Procedure | How AI systems are developed/procured | Reference workflow registration process |
| Third-Party AI Management Procedure | Vendor selection, monitoring, offboarding | Reference provider monitoring (Evidence Pack Section 3-4) |
| AI Risk Assessment Procedure | How AI risks are identified and assessed | Reference Decision Readiness Framework methodology |
| AI Incident Response Procedure | Responding to AI failures | Reference provider signal alerts |
| Internal Audit Procedure | Audit planning, execution, reporting | Reference evidence pack as audit input |
Estimated effort: 2-4 weeks (with consultant support) or 6-8 weeks (in-house)
Step 5: Run Pre-Certification Gap Assessment
Before engaging a certification body, conduct a gap assessment (pre-audit):
Option 1: Self-assessment
- Use ISO 42001 clause checklist (available from ISO or consultants)
- For each clause, assess: ✅ Implemented, 🟡 Partial, ❌ Gap
- Prioritise gaps by certification criticality
Option 2: External consultant
- Hire ISO 42001 consultant (1-2 days, £2k-5k)
- Provide recent SignalBreak evidence pack as input
- Consultant identifies gaps and remediation plan
SignalBreak reduces gap assessment time by ~50% (data-intensive clauses already evidenced).
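The Option 1 self-assessment tally can be sketched as follows; the clause list and assigned statuses are illustrative, mirroring the ✅/🟡/❌ scheme above.

```python
from collections import Counter

# Illustrative clause checklist: one status per clause, then tally and
# extract the gaps to prioritise for remediation.
checklist = {
    "4.1": "implemented", "4.2": "partial", "5.1": "gap",
    "6.1.3": "implemented", "6.2.2": "implemented",
    "9.1": "implemented", "9.2": "partial", "9.3": "gap",
}

summary = Counter(checklist.values())
gaps = sorted(c for c, s in checklist.items() if s == "gap")
print(summary, "highest-priority gaps:", gaps)
```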
Evidence Requirements for Certification
What Auditors Will Request
When you engage a certification body for ISO 42001 audit, expect requests for:
1. Management System Documentation
| Document | SignalBreak Provides? | What You Need |
|---|---|---|
| AIMS scope statement | ❌ | Define which AI systems, departments, locations |
| AI governance policy | ❌ | Top management commitment, principles |
| Organisational chart | ❌ | Roles and responsibilities for AIMS |
| Risk assessment methodology | ✅ | Evidence pack Appendix C |
| AI system inventory | ✅ | Workflow registry (Evidence Pack Appendix B) |
2. Operational Evidence
| Evidence Type | SignalBreak Provides? | Example |
|---|---|---|
| Risk assessments | ✅ | Decision Readiness Scorecard (Evidence Pack p.2-3) |
| Risk treatment records | ✅ | Recommendations with timelines (Evidence Pack p.5) |
| Third-party monitoring | ✅ | Provider concentration analysis (Evidence Pack p.6) |
| Impact assessments | ✅ | Business impact quantification (Evidence Pack p.4) |
| Monitoring records | ✅ | Signal monitoring (Evidence Pack p.7) |
| Incident records | 🟡 | Provider outages detected (needs incident response documentation) |
3. Performance Evaluation Evidence
| Evidence Type | SignalBreak Provides? | Example |
|---|---|---|
| Monitoring results | ✅ | Provider availability metrics, signal counts |
| Internal audit reports | 🟡 | Evidence pack as audit input (needs formal audit report) |
| Management review records | ❌ | Minutes from quarterly reviews |
| Corrective action records | ❌ | Nonconformity tracking, root cause analysis |
4. Improvement Evidence
| Evidence Type | SignalBreak Provides? | Example |
|---|---|---|
| Continual improvement | ✅ | Score trajectory (Evidence Pack p.2) |
| Lessons learned | ❌ | Post-incident reviews, process improvements |
| Effectiveness verification | ✅ | Next evidence pack shows score improvement from recommendations |
Audit Process Overview
Stage 1: Documentation Review (Off-site)
- Certification body reviews your AIMS documentation
- Provides gap report (missing documents, incomplete processes)
- SignalBreak evidence pack demonstrates operational maturity (reduces Stage 1 findings)
Stage 2: Implementation Audit (On-site or remote)
- Auditor interviews staff, observes processes
- Verifies documented procedures are followed in practice
- Checks records (evidence packs, audit reports, management reviews)
- SignalBreak evidence pack provides objective records for interviews
Certification Decision:
- No major nonconformities → Certificate issued (3-year validity)
- Major nonconformities → Remediation required, re-audit
- Minor nonconformities → Corrective actions, verified at surveillance audit
Surveillance Audits (Annual):
- Lighter-touch audits during 3-year certificate period
- SignalBreak evidence packs demonstrate continuous compliance
Evidence Pack Best Practices for Auditors
Generate evidence pack 1-2 weeks before audit
- Ensures data is current (auditors may question old data)
- Allows time to review and prepare explanations
Provide auditor with PDF copy in advance
- Professional formatting demonstrates maturity
- Allows auditor to prepare questions
Cross-reference evidence pack in documentation
- Example: Risk Assessment Procedure → "See SignalBreak Evidence Pack Section 2 for most recent assessment"
- Shows integration between process and evidence
Show trend over time
- Provide 3-6 months of evidence packs
- Demonstrates continual improvement (score trajectory)
Explain limitations transparently
- SignalBreak covers third-party AI (not internally developed models)
- Evidence pack supports audit but doesn't replace it
- Auditors appreciate honesty about scope
Certification Timeline and Costs
Typical ISO 42001 Certification Journey
| Phase | Duration | Estimated Cost | Key Activities |
|---|---|---|---|
| 0. Preparation | 3-6 months | £5k-15k (consultant) | Gap assessment, policy development, SignalBreak onboarding |
| 1. Documentation | 2-4 months | £3k-8k (templates, consultant review) | Create AIMS procedures, integrate SignalBreak evidence |
| 2. Implementation | 3-6 months | £0 (internal effort) | Run processes, generate evidence packs monthly |
| 3. Internal audit | 1-2 months | £2k-5k (external auditor) | Pre-certification check, corrective actions |
| 4. Stage 1 audit | 1-2 weeks | £8k-15k (cert body) | Documentation review, gap report |
| 5. Stage 2 audit | 1-2 weeks | Included in cert body fee | Implementation audit, certification decision |
| Total (to certification) | 12-18 months | £18k-43k | First-time certification (no existing AIMS) |
Recertification (every 3 years): £10k-20k + annual surveillance audits (£5k-8k/year)
How SignalBreak Reduces Certification Cost
| Cost Category | Without SignalBreak | With SignalBreak | Savings |
|---|---|---|---|
| Gap assessment | 5 days @ £1k/day = £5k | 2 days @ £1k/day = £2k | £3k (SignalBreak pre-evidences 60% of clauses) |
| Ongoing monitoring | Manual quarterly reviews (40h/year @ £100/h) = £4k/year | Automated evidence packs (4h/year) = £400/year | £3.6k/year |
| Audit preparation | 20h gathering evidence @ £100/h = £2k | 4h reviewing evidence pack = £400 | £1.6k per audit |
| Consultant hours | 30 days @ £1k/day = £30k | 15 days @ £1k/day = £15k | £15k (less data gathering, more strategy) |
Total estimated savings: approximately £23k over a 3-year certification cycle (summing each category's headline saving once; the recurring monitoring and audit-preparation savings add further value in years two and three)
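The £23k headline can be reproduced by summing each category's saving once, treating the monitoring figure as one year and the audit-prep figure as one audit. A rough sketch:

```python
# Rough reproduction of the savings table; all figures are this guide's
# estimates, counted once per category.
savings_gbp = {
    "gap_assessment": 5_000 - 2_000,             # 5 days vs 2 days at £1k/day
    "ongoing_monitoring_year1": 4_000 - 400,     # 40h vs 4h at £100/h
    "audit_prep_one_audit": 2_000 - 400,         # 20h vs 4h at £100/h
    "consultant": 30_000 - 15_000,               # 30 vs 15 days at £1k/day
}

total = sum(savings_gbp.values())
print(f"£{total:,}")  # prints £23,200
```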
Certification Body Selection
Choose an accredited certification body for ISO 42001. Major providers:
| Certification Body | Geographic Coverage | Typical Cost (SME) | Notes |
|---|---|---|---|
| BSI (British Standards Institution) | Global | £12k-20k | Strong UK presence, AI expertise |
| SGS | Global | £10k-18k | Large portfolio, multi-site experience |
| Bureau Veritas | Global | £10k-18k | Good for manufacturing/engineering |
| NQA | UK, Europe | £8k-15k | Cost-effective, good for SMEs |
| LRQA | Global | £12k-20k | Strong in regulated industries (FS, pharma) |
Selection criteria:
- Accreditation: Verify UKAS accreditation for ISO 42001 (check UKAS website)
- Industry experience: Do they audit organisations in your sector?
- AI expertise: Do auditors understand AI/ML technologies?
- Geographic presence: Can they audit all your sites (if multi-location)?
- Cost vs. value: Cheapest isn't always best (inexperienced auditors create risk)
ISO 42001 vs Other Frameworks
Complementary Use with NIST AI RMF and EU AI Act
ISO 42001 is not mutually exclusive with other frameworks. In fact, it is designed to complement them:
| Framework | Relationship to ISO 42001 | Use Both? |
|---|---|---|
| NIST AI RMF | Risk assessment methodology (feeds into ISO 42001 Clause 6.1.3) | ✅ Yes — NIST provides risk categories, ISO provides management system |
| EU AI Act | Legal compliance (ISO 42001 supports conformity assessment) | ✅ Yes — EU AI Act mandates risk management, ISO 42001 is recognised method |
| ISO 27001 | Information security (broader than AI) | ✅ Yes — Many clauses overlap (risk, monitoring, audit) |
SignalBreak supports all three simultaneously — evidence packs include:
- ISO 42001 clause mapping
- EU AI Act risk classification
- NIST AI RMF alignment (via MIT Risk Repository)
See Governance Overview for multi-framework strategy.
When to Choose ISO 42001
Choose ISO 42001 if:
- ✅ You need third-party certification for customer/regulator assurance
- ✅ You're in the EU and subject to AI Act high-risk requirements
- ✅ You already have ISO 9001/ISO 27001 (easier integration via Annex SL)
- ✅ You're a vendor selling AI products/services (certification is competitive advantage)
- ✅ You have internal AI development (ISO 42001 covers full AI lifecycle)
Don't choose ISO 42001 if:
- ❌ You only use third-party AI APIs with no customisation (NIST AI RMF may suffice)
- ❌ You're a small startup with <10 AI workflows (overhead may not justify certification)
- ❌ Your industry has sector-specific AI standards (e.g., medical devices have IEC 62304)
Hybrid approach: Many organisations use NIST AI RMF for initial risk assessment, then pursue ISO 42001 certification when they reach enterprise maturity or need regulatory evidence.
Common Questions
Do I need ISO 42001 if I'm already ISO 27001 certified?
Not necessarily, but highly recommended if AI is core to your business.
Overlap:
- Risk management processes (~40% overlap)
- Monitoring and measurement (~30% overlap)
- Audit and improvement (~50% overlap)
ISO 42001-specific additions:
- AI-specific risk categories (bias, explainability, model drift)
- Third-party AI management (API providers, pre-trained models)
- AI impact assessment (not required by ISO 27001)
- Data quality for AI (training data, data drift)
Integration strategy: If you have ISO 27001, you can integrate ISO 42001 with ~50% less effort (shared processes). Many organisations pursue dual certification using a single Integrated Management System (IMS).
Can SignalBreak alone get me ISO 42001 certified?
No. SignalBreak provides ~60% of certification evidence (the data-intensive clauses), but certification also requires the management system elements around that evidence:
What SignalBreak provides:
- ✅ Risk assessment records (Clause 6.1.3)
- ✅ AI system inventory (Clause 6.2.2)
- ✅ Third-party monitoring (Clause 8.3)
- ✅ Impact assessments (Clause 8.4)
- ✅ Monitoring data (Clause 9.1)
- ✅ Continual improvement evidence (Clause 10.2)
What you still need:
- ❌ Management commitment and policy (Clause 5.x)
- ❌ Competence and training records (Clause 7.x)
- ❌ Documented procedures (all clauses)
- ❌ Management review records (Clause 9.3)
- ❌ Corrective action records (Clause 10.x)
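The split above can be sketched as a simple partition (clause references as cited in this guide; the "documented procedures" item spans every clause, so it is left out of the count), which also recovers the rough 60% coverage figure:

```python
# Partition of AIMS evidence by source, using the clause mapping in this guide.
signalbreak_provides = {
    "6.1.3": "Risk assessment records",
    "6.2.2": "AI system inventory",
    "8.3": "Third-party monitoring",
    "8.4": "Impact assessments",
    "9.1": "Monitoring data",
    "10.2": "Continual improvement evidence",
}

still_needed = {
    "5.x": "Management commitment and policy",
    "7.x": "Competence and training records",
    "9.3": "Management review records",
    "10.x": "Corrective action records",
}

coverage = len(signalbreak_provides) / (len(signalbreak_provides) + len(still_needed))
print(f"Evidence coverage by clause count: {coverage:.0%}")  # prints 60%
```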
Think of SignalBreak as your "AI governance data platform" — it collects and analyses the data an AIMS needs. You still need the management system around it (policies, procedures, audits, reviews).
Analogy: SignalBreak is like a CRM for ISO 42001 — it manages your AI provider relationships and risks, but you still need a sales process. Similarly, SignalBreak manages AI governance data, but you still need governance processes.
How often should I generate evidence packs for ISO 42001?
- Minimum: Quarterly (sufficient for certification)
- Recommended: Monthly (better trend data, board reporting)
- Best practice: Monthly + on-demand (before audits, after major changes)
Why monthly?
- Trend analysis requires 3+ data points — quarterly packs only give 4 data points/year
- Auditors expect recent evidence — quarterly packs could be 90 days old
- Board reporting typically monthly — align evidence pack to existing governance cadence
- Continuous improvement requires timely feedback — quarterly is too slow to iterate
Cost-benefit:
- Monthly evidence packs = 12 data points/year
- Quarterly evidence packs = 4 data points/year
- Additional cost: Negligible (automated generation)
- Additional value: 3x better trend visibility, always audit-ready
Exception: If you have <5 AI workflows with stable providers, quarterly may suffice. For >10 workflows or high-risk use cases, monthly is essential.
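As a sketch of the rules of thumb above, a small helper can check whether a pack history is audit-ready. The 90-day recency window and 3-point trend minimum are this guide's heuristics, not ISO requirements:

```python
# Sketch: check whether an evidence-pack history is audit-ready, under the
# heuristics above (latest pack under 90 days old, 3+ packs for a trend).
from datetime import date


def audit_ready(pack_dates: list[date], today: date) -> dict[str, bool]:
    latest = max(pack_dates) if pack_dates else None
    return {
        "recent": latest is not None and (today - latest).days < 90,
        "trendable": len(pack_dates) >= 3,
    }


monthly = [date(2026, 1, 1), date(2026, 2, 1), date(2026, 3, 1)]
print(audit_ready(monthly, today=date(2026, 3, 15)))
# {'recent': True, 'trendable': True}
```

On a monthly cadence both checks pass within a quarter; a single stale pack fails both.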
Can I use SignalBreak evidence packs in my Stage 1 audit?
Yes — and you should.
Stage 1 auditors will request:
- Evidence that your risk assessment process is applied (not just documented)
- Proof your AI system inventory is current (not a one-time spreadsheet)
- Demonstration that third-party monitoring is continuous (not annual vendor review)
SignalBreak evidence pack provides all three:
- ✅ Risk assessment applied: Decision Readiness Scorecard with date stamp
- ✅ Inventory current: Workflow registry with "Last Updated" timestamps
- ✅ Monitoring continuous: Provider health logs, signal detection records
Best practice: Include your most recent evidence pack as Appendix A to your Stage 1 documentation submission. Reference it throughout your AIMS procedures:
- "See Evidence Pack Section 2 for risk assessment results"
- "See Evidence Pack Appendix B for current AI inventory"
- "See Evidence Pack Section 4 for third-party monitoring records"
Auditor perspective: Evidence packs demonstrate operational maturity — you're not just creating documents for certification, you're actually running an AI governance process. This significantly increases auditor confidence.
What happens if my score is Red (70-100) during certification audit?
Short answer: Red status alone won't prevent certification, but auditors will scrutinise your risk treatment plans.
Longer answer: ISO 42001 doesn't mandate specific risk thresholds (there's no "you must be Green to certify"). Instead, auditors assess:
- Do you know your risks? ✅ (Red score shows you've assessed and quantified risks)
- Do you have a risk treatment plan? ⚠️ (Must show actions to reduce Red status)
- Is the plan being implemented? ⚠️ (Must show progress, not just intentions)
- Are controls effective? ⚠️ (If Red persists for 6+ months with no improvement, auditors will question effectiveness)
Certification implications:
| Scenario | Likely Audit Outcome |
|---|---|
| Red score + recent (this month) | ✅ OK — you just implemented AI, risk is expected |
| Red score + action plan + timeline | ✅ OK — demonstrating risk treatment process |
| Red score + 3+ months + improving trend | ✅ OK — showing continual improvement |
| Red score + 6+ months + no action plan | ❌ Major nonconformity — risk treatment not applied (Clause 8.2) |
| Red score + 6+ months + flat/worsening | ❌ Major nonconformity — AIMS not effective (Clause 10.2) |
Recommendation: If your score is Red, delay certification until Amber (30-69) or demonstrate clear downward trajectory (e.g., Red 85 → 70 → 55 over 3 months).
Why? Your first audit sets a precedent. If you certify at Red 85, auditors will expect progress toward Green. Better to certify at Amber 65 with a clear improving trend (harder to achieve, but a stronger certification).
Does ISO 42001 require specific AI risk thresholds?
No. ISO 42001 is risk-based, not rule-based. It requires:
- Define your own risk criteria (what's acceptable risk for your organisation)
- Apply risk assessment against those criteria
- Treat unacceptable risks
- Monitor risk treatment effectiveness
This means:
- Startup with a low-risk AI chatbot: might define "acceptable" as any score below 90 (tolerating well into Red)
- Bank with credit-scoring AI: might define "acceptable" as below 20 (comfortably inside Green)
SignalBreak aligns with this philosophy:
- Provides objective risk score (0-100)
- You define your risk appetite (where's your Red/Amber/Green threshold?)
- Evidence pack shows whether you're meeting your own criteria
Best practice: Document your risk appetite in AI Governance Policy:
- Example: "Our target risk state is Green (<30). We accept Amber (30-69) with active risk treatment plans. Red (70+) requires board escalation and expedited risk reduction."
Auditors will verify you're consistent with your own policy, not against an absolute standard.
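The risk-based approach above can be sketched as a tiny classifier in which the organisation, not the standard, supplies the thresholds. The defaults below mirror this guide's example bands; any other cut-offs are equally valid under ISO 42001:

```python
# Minimal sketch of a configurable risk-appetite check; the default
# thresholds mirror this guide's example bands (Green <30, Amber 30-69,
# Red 70+), but each organisation defines its own.
def rag_status(score: int, green_below: int = 30, amber_below: int = 70) -> str:
    """Map a 0-100 risk score onto the organisation's RAG bands."""
    if not 0 <= score <= 100:
        raise ValueError("score must be 0-100")
    if score < green_below:
        return "Green"
    if score < amber_below:
        return "Amber"
    return "Red"


print(rag_status(25), rag_status(65), rag_status(70))  # prints Green Amber Red

# A stricter, bank-style appetite from the example above:
print(rag_status(25, green_below=20, amber_below=40))  # prints Amber
```

The same score (25) is Green under the default appetite and Amber under the stricter one, which is exactly the point: auditors check consistency with your documented criteria, not an absolute scale.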
Next Steps
Assess your current state:
- Generate your first SignalBreak evidence pack
- Review ISO 42001 clause mapping (p.X of evidence pack)
- Identify which clauses have full vs. partial coverage
Close certification gaps:
- Create stakeholder register (Clause 4.2 gap)
- Establish internal audit procedure (Clause 9.2 gap)
- Document policies and procedures (Section 5, 7 gaps)
Engage certification body:
- Request quotes from 2-3 accredited bodies
- Provide evidence pack as demonstration of maturity
- Expect 12-18 month timeline to certification
Establish governance rhythm:
- Monthly evidence pack generation
- Quarterly management reviews
- Annual internal audits
- 3-year certification cycle
Related Documentation
- Governance Overview — Comparison of ISO 42001, NIST AI RMF, EU AI Act
- NIST AI RMF Guide — Complementary risk framework
- EU AI Act Guide — Legal compliance requirements
- Evidence Packs Guide — How to generate and use evidence packs
- Risk Scoring Methodology — Understanding your score
External Resources
- ISO 42001 Standard: Purchase from ISO website (£150-200)
- BSI AI Governance Resources: https://www.bsigroup.com/en-GB/iso-42001-ai-management/
- UKAS Accredited Certification Bodies: https://www.ukas.com/find-an-organisation/
- ISO 42001 vs EU AI Act: European Commission guidance
Last updated: 2026-01-26
Based on: ISO/IEC 42001:2023, SignalBreak Evidence Pack Framework v2.1