ISO 42001 Compliance Checklist
Overview
ISO/IEC 42001:2023 is the first international standard for Artificial Intelligence Management Systems (AIMS). Published in December 2023, it provides a framework for organizations to manage AI risks, ensure responsible AI use, and demonstrate compliance with AI governance requirements.
ISO 42001 is designed for organizations that:
- Develop, provide, or use AI systems
- Need to demonstrate responsible AI governance to customers, regulators, or stakeholders
- Want to certify their AI management practices
- Operate in regulated industries (finance, healthcare, government)
- Face AI-related risks that require systematic management
What ISO 42001 covers:
- AI system lifecycle management (design, development, deployment, monitoring, retirement)
- Risk assessment and mitigation for AI systems
- Data governance for AI training and operation
- Third-party AI provider management
- Incident response and continuous improvement
- Documentation and audit requirements
How SignalBreak supports ISO 42001 compliance:
SignalBreak is designed to help organizations meet ISO 42001 requirements for AI system monitoring, third-party AI provider management, and documentation. This checklist shows how SignalBreak addresses specific ISO 42001 controls.
How to Use This Checklist
- Assess current state: Review each requirement and mark your organization's status (Not Started, In Progress, Completed)
- Identify gaps: Focus on requirements marked "Not Started" or "In Progress"
- Prioritize actions: Start with high-risk AI systems and mandatory requirements
- Use SignalBreak: Leverage SignalBreak features to meet requirements more efficiently
- Document evidence: Maintain audit trail of compliance activities
- Prepare for audit: Use completed checklist as basis for certification audit
Checklist symbols:
- ✅ SignalBreak helps directly: Feature or report supports this requirement
- 📋 SignalBreak provides evidence: Platform generates documentation for audit
- ⚙️ Manual policy required: Organization must establish the policy (SignalBreak can then help enforce it)
Clause 4: Context of the Organization
4.1 Understanding the Organization and Its Context
Requirement: Determine external and internal issues relevant to AI management.
What this means:
- Identify industries, regulations, and stakeholders affected by your AI use
- Understand organizational strategy, culture, and risk appetite for AI
- Document business context that shapes AI governance needs
Checklist:
[ ] External context documented:
- [ ] Industry and regulatory environment identified (e.g., healthcare → HIPAA, finance → SR 11-7)
- [ ] Customer and stakeholder expectations for AI understood
- [ ] Competitive landscape and AI adoption trends analyzed
- [ ] Legal and ethical frameworks applicable to AI identified
[ ] Internal context documented:
- [ ] Organizational AI strategy and objectives defined
- [ ] AI risk appetite and tolerance established
- [ ] Current AI capabilities and maturity assessed
- [ ] Resource constraints for AI governance identified
SignalBreak support:
- ✅ AI inventory: Discover all AI systems in use across the organization (provides evidence of current AI landscape)
- 📋 Dashboard reports: Show AI provider dependencies and concentration risks (informs risk assessment)
4.2 Understanding the Needs and Expectations of Interested Parties
Requirement: Determine interested parties relevant to AI management and their requirements.
What this means:
- Identify stakeholders affected by AI (customers, employees, regulators, partners, public)
- Understand their expectations, concerns, and requirements for AI
- Document how AI impacts each stakeholder group
Checklist:
[ ] Interested parties identified:
- [ ] Customers and end-users of AI systems
- [ ] Employees whose work involves or is affected by AI
- [ ] Regulators and oversight bodies
- [ ] AI vendors and technology partners
- [ ] Investors and board members
- [ ] Advocacy groups and civil society (for high-impact AI)
[ ] Requirements documented for each party:
- [ ] Legal and regulatory requirements
- [ ] Contractual obligations
- [ ] Ethical expectations
- [ ] Performance and reliability requirements
- [ ] Transparency and explainability needs
SignalBreak support:
- 📋 Governance reports: Provide evidence of AI monitoring for stakeholder reporting
- ✅ Alert configuration: Notify stakeholders when AI incidents affect them
4.3 Determining the Scope of the AI Management System
Requirement: Define boundaries and applicability of the AI management system.
What this means:
- Specify which AI systems are covered by the management system
- Identify geographic locations, business units, or products in scope
- Document any exclusions and justify them
Checklist:
[ ] Scope defined and documented:
- [ ] AI systems in scope identified (production AI, pilot AI, research AI)
- [ ] Business units, locations, or products covered specified
- [ ] Exclusions documented with justification
- [ ] Scope reviewed and approved by leadership
[ ] Scope communicated:
- [ ] Employees aware of which AI is governed
- [ ] Vendors informed of governance requirements
- [ ] Audit scope aligned with AIMS scope
SignalBreak support:
- ✅ Organization structure: Define scope using scenarios (in-scope AI) and providers
- 📋 AI inventory: Automatically maintained list of AI systems in scope
4.4 AI Management System
Requirement: Establish, implement, maintain, and continually improve an AI management system.
What this means:
- Create documented processes for managing AI throughout its lifecycle
- Assign roles and responsibilities for AI governance
- Implement controls to manage AI risks
- Continuously monitor and improve the management system
Checklist:
[ ] AIMS established:
- [ ] AI governance framework documented (policies, processes, controls)
- [ ] Roles and responsibilities assigned (AI governance committee, AI owners, risk managers)
- [ ] Risk assessment methodology defined for AI
- [ ] AI lifecycle processes documented (design, development, deployment, monitoring, retirement)
[ ] AIMS implemented:
- [ ] Governance processes operational
- [ ] Controls deployed for AI systems in scope
- [ ] Monitoring and alerting configured
- [ ] Incident response capability established
[ ] AIMS maintained:
- [ ] Regular reviews conducted (quarterly or annually)
- [ ] Documentation kept current
- [ ] Changes managed systematically
- [ ] Audit findings addressed
SignalBreak support:
- ✅ Centralized platform: SignalBreak serves as AI governance system of record
- ✅ Continuous monitoring: Automatic signal detection and alerting (implements ongoing oversight)
- 📋 Evidence packs: Document governance activities for audit
Clause 5: Leadership
5.1 Leadership and Commitment
Requirement: Top management shall demonstrate leadership and commitment to the AI management system.
What this means:
- Senior leadership actively supports AI governance
- Resources allocated for AIMS implementation
- AI governance integrated into business strategy
- Leadership accountable for AI outcomes
Checklist:
[ ] Leadership engagement demonstrated:
- [ ] Executive sponsor assigned for AI governance
- [ ] AI governance included in board/executive agendas
- [ ] Budget allocated for AI governance tools and resources
- [ ] Leadership communicates importance of responsible AI
[ ] Accountability established:
- [ ] Leadership accountable for AI-related incidents
- [ ] AI governance performance reviewed in executive meetings
- [ ] Leadership champions AI ethics and responsibility
SignalBreak support:
- ✅ Executive dashboards: Provide leadership visibility into AI risks and incidents
- ✅ Critical signal alerts: Escalate urgent AI issues to leadership immediately
- 📋 Monthly governance reports: Show leadership engagement and oversight
5.2 Policy
Requirement: Top management shall establish an AI policy.
What this means:
- Written policy defines organization's commitment to responsible AI
- Policy covers risk management, ethics, compliance, and continuous improvement
- Policy communicated to all relevant parties
Checklist:
[ ] ⚙️ AI policy established (must be written by organization):
- [ ] Purpose and scope defined
- [ ] Commitment to compliance with legal, regulatory, and ethical requirements
- [ ] Commitment to managing AI risks
- [ ] Commitment to continuous improvement
- [ ] Accountability for AI outcomes
- [ ] Policy approved by top management
[ ] Policy communicated:
- [ ] Published to employees, contractors, and relevant vendors
- [ ] Training provided on policy requirements
- [ ] Policy accessible and understood by AI system owners
[ ] Policy maintained:
- [ ] Reviewed annually or when significant changes occur
- [ ] Updated to reflect new regulations, risks, or lessons learned
SignalBreak support:
- ⚙️ Policy enforcement: SignalBreak enforces monitoring and alerting policies (once defined)
- 📋 Compliance evidence: Reports show policy implementation (e.g., "all AI systems monitored as required")
5.3 Organizational Roles, Responsibilities, and Authorities
Requirement: Top management shall assign responsibilities and authorities for AI management.
What this means:
- Clear roles defined for AI governance (who does what)
- Accountability assigned for AI risk management, compliance, and incidents
- Authorities granted to make decisions about AI systems
Checklist:
[ ] Roles defined:
- [ ] AI Governance Committee or equivalent (oversight body)
- [ ] AI Risk Manager or equivalent (risk assessment and mitigation)
- [ ] AI System Owners (accountable for specific AI systems)
- [ ] Data Stewards (manage data used by AI)
- [ ] Compliance Officer (ensure regulatory adherence)
- [ ] Incident Response Team (handle AI-related incidents)
[ ] Responsibilities documented:
- [ ] Role descriptions written and communicated
- [ ] RACI matrix created (Responsible, Accountable, Consulted, Informed)
- [ ] Escalation paths defined
[ ] Authorities granted:
- [ ] Authority to pause or disable AI systems when risks emerge
- [ ] Authority to allocate resources for AI risk mitigation
- [ ] Authority to approve AI system deployments
SignalBreak support:
- ✅ Role-based access control: Admins, Members, and Viewers map to organizational roles
- ✅ Alert routing: Signals escalate to appropriate stakeholders based on role
- 📋 Audit trail: Documents who took what action (accountability)
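Escalation paths like those above are often encoded as a simple severity-to-role mapping. The sketch below is hypothetical (SignalBreak's alert-routing configuration is not shown here); the role names and severity levels are illustrative placeholders for whatever your RACI matrix defines.

```python
# Hypothetical escalation-path sketch: route a detected signal to the
# organizational role accountable for it, based on severity.
# Role names and severity levels are illustrative, not a SignalBreak API.

ESCALATION = {
    "critical": "AI Governance Committee",
    "high": "AI Risk Manager",
    "medium": "AI System Owner",
    "low": "AI System Owner",
}

def route_signal(severity: str) -> str:
    """Return the role a signal of this severity escalates to."""
    # Unknown severities default to the risk manager rather than being dropped.
    return ESCALATION.get(severity, "AI Risk Manager")
```

The key design point is that every severity level resolves to a named, accountable role, so no signal can fall outside the documented escalation path.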
Clause 6: Planning
6.1 Actions to Address Risks and Opportunities
Requirement: Plan actions to address risks and opportunities related to AI management.
What this means:
- Identify risks that AI systems create (bias, errors, outages, vendor failures)
- Identify opportunities to improve AI governance or outcomes
- Plan actions to mitigate risks and seize opportunities
Checklist:
[ ] Risks identified:
- [ ] AI system failures (outages, errors, degraded performance)
- [ ] Bias and discrimination in AI outputs
- [ ] Third-party AI provider risks (vendor concentration, model updates, security breaches)
- [ ] Data quality issues affecting AI performance
- [ ] Regulatory non-compliance risks
- [ ] Reputational risks from AI incidents
[ ] Opportunities identified:
- [ ] Automation and efficiency gains from AI
- [ ] Competitive advantage from responsible AI governance
- [ ] Cost savings from proactive risk management
[ ] Actions planned and implemented:
- [ ] Risk mitigation controls deployed
- [ ] Monitoring and alerting configured
- [ ] Fallback providers configured for critical AI
- [ ] Continuous improvement initiatives launched
SignalBreak support:
- ✅ Risk detection: Signals automatically identify AI risks (outages, model updates, concentration)
- ✅ Fallback providers: Configure alternatives to mitigate vendor risk
- 📋 Risk reports: Document identified risks and mitigation status
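A fallback-provider arrangement like the one described above can be sketched as an ordered list of providers tried in turn. This is a minimal illustration of the pattern, not SignalBreak's implementation; the provider callables are stand-ins for real vendor clients.

```python
from typing import Callable

def call_with_fallback(providers: list[Callable[[str], str]], prompt: str) -> str:
    """Try each configured AI provider in order; return the first
    successful response. Mitigates single-vendor outage risk."""
    last_error: Exception | None = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # outage, rate limit, auth failure, etc.
            last_error = exc
    raise RuntimeError("all configured AI providers failed") from last_error
```

In practice each failover event should also be logged as an incident, since repeated fallbacks are themselves a risk signal worth reviewing.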
6.2 AI Objectives and Planning to Achieve Them
Requirement: Establish AI objectives and plan how to achieve them.
What this means:
- Define measurable goals for AI governance (e.g., "100% of production AI monitored")
- Create action plans with timelines, resources, and responsibilities
- Track progress toward objectives
Checklist:
[ ] Objectives defined:
- [ ] AI governance maturity goals (e.g., "achieve ISO 42001 certification by Q4")
- [ ] Risk reduction targets (e.g., "reduce AI vendor concentration to <50%")
- [ ] Compliance objectives (e.g., "meet all OMB AI requirements")
- [ ] Performance targets (e.g., "AI incident response time <1 hour")
[ ] Planning completed:
- [ ] Action plans created for each objective
- [ ] Resources allocated (budget, staff, tools)
- [ ] Timelines established with milestones
- [ ] Responsibilities assigned
[ ] Progress tracked:
- [ ] Objectives reviewed quarterly
- [ ] Metrics tracked and reported
- [ ] Adjustments made when objectives are at risk
SignalBreak support:
- 📋 Maturity tracking: Dashboard shows progress toward governance objectives
- ✅ Metrics: Provider concentration, signal response times, scenario coverage (track performance)
Clause 7: Support
7.1 Resources
Requirement: Determine and provide resources needed for the AI management system.
What this means:
- Allocate budget for AI governance tools, training, and personnel
- Ensure sufficient staff with appropriate skills
- Provide infrastructure (platforms, software, data) for AI governance
Checklist:
[ ] Resources allocated:
- [ ] Budget for AI governance platform (SignalBreak subscription)
- [ ] Staff assigned to AI governance roles (full-time or part-time)
- [ ] Infrastructure provisioned (cloud resources, data storage, integration tools)
[ ] Resource adequacy assessed:
- [ ] Resources sufficient to meet objectives
- [ ] Gaps identified and addressed
- [ ] Resource allocation reviewed annually
SignalBreak support:
- ✅ Affordable platform: SignalBreak provides AI governance infrastructure at a predictable cost
- ✅ Low maintenance: SaaS platform requires minimal IT resources to operate
7.2 Competence
Requirement: Ensure personnel are competent based on education, training, or experience.
What this means:
- AI governance staff have necessary skills (risk management, AI technology, compliance)
- Training provided to close competence gaps
- Competence documented and verified
Checklist:
[ ] Competence requirements defined:
- [ ] Skills needed for each AI governance role identified
- [ ] Minimum education, training, or experience specified
[ ] Competence assessed:
- [ ] Current competence evaluated for each person in AI governance roles
- [ ] Gaps identified
[ ] Training provided:
- [ ] AI governance training for all staff in scope
- [ ] Role-specific training (e.g., risk assessment for AI Risk Managers)
- [ ] External training or certifications provided when needed
[ ] Competence documented:
- [ ] Training records maintained
- [ ] Certifications tracked
SignalBreak support:
- ✅ User-friendly interface: Reduces training burden (intuitive dashboard and workflows)
- 📋 Built-in help: Documentation and tooltips guide users
7.3 Awareness
Requirement: Ensure personnel are aware of the AI policy, their role, and the consequences of non-conformity.
What this means:
- All staff involved with AI understand the organization's AI policy
- Staff know their responsibilities for AI governance
- Staff understand risks of AI misuse or governance failures
Checklist:
[ ] Awareness program established:
- [ ] AI policy communicated to all relevant staff
- [ ] Role-specific responsibilities explained
- [ ] Consequences of non-compliance communicated
- [ ] Examples of AI risks and incidents shared
[ ] Awareness maintained:
- [ ] New hires briefed on AI governance during onboarding
- [ ] Periodic refreshers provided (annually or after incidents)
- [ ] Awareness verified (quizzes, acknowledgments, assessments)
SignalBreak support:
- ✅ Signal alerts: Keep the team aware of AI risks in real time
- 📋 Incident reports: Share lessons learned from AI incidents across team
7.4 Communication
Requirement: Determine internal and external communications relevant to the AI management system.
What this means:
- Define what, when, how, and to whom AI governance information is communicated
- Establish channels for internal communication (team, leadership) and external (customers, regulators, public)
- Ensure timely communication during incidents
Checklist:
[ ] Communication plan established:
- [ ] Internal communication: Team updates, leadership briefings, incident escalation
- [ ] External communication: Customer notifications, regulatory reporting, public statements
- [ ] Timing defined (routine updates, incident-driven alerts)
- [ ] Channels identified (email, dashboards, reports, meetings)
[ ] Communication implemented:
- [ ] Routine AI governance updates provided (monthly reports, dashboards)
- [ ] Incident communication protocols activated when needed
- [ ] Stakeholder feedback solicited and incorporated
SignalBreak support:
- ✅ Email alerts: Automatically notify the team when critical signals are detected
- ✅ Dashboards: Provide real-time visibility into AI risks for internal stakeholders
- 📋 Reports and evidence packs: Support external communication (customers, regulators)
7.5 Documented Information
Requirement: The AI management system shall include documented information required by ISO 42001 and determined necessary by the organization.
What this means:
- Maintain policies, procedures, records, and evidence for AI governance
- Control documents (versioning, approval, distribution)
- Retain records for audit and compliance
Checklist:
[ ] Required documentation created:
- [ ] AI policy
- [ ] Scope of AIMS
- [ ] Risk assessment methodology
- [ ] AI system inventory
- [ ] Roles and responsibilities
- [ ] Objectives and plans
- [ ] Operational procedures (AI lifecycle, incident response)
- [ ] Monitoring and measurement processes
- [ ] Audit and review records
[ ] Document control implemented:
- [ ] Version control for policies and procedures
- [ ] Approval workflow before publication
- [ ] Distribution to relevant parties
- [ ] Obsolete documents removed
[ ] Records retained:
- [ ] Audit trails of AI governance activities
- [ ] Risk assessments and mitigation actions
- [ ] Incident reports and responses
- [ ] Training records
- [ ] Audit findings and corrective actions
- [ ] Retention periods defined and enforced
SignalBreak support:
- 📋 Automatic documentation: Platform generates AI inventory, signal logs, incident reports
- 📋 Evidence packs: Compile documentation for audit in one click
- ✅ Version tracking: Logs AI provider model updates (audit trail)
Clause 8: Operation
8.1 Operational Planning and Control
Requirement: Plan, implement, and control processes needed to meet AI management requirements.
What this means:
- Define processes for AI system lifecycle (design, development, deployment, monitoring, retirement)
- Establish criteria for AI system approval and changes
- Control outsourced processes (third-party AI providers)
Checklist:
[ ] Processes documented:
- [ ] AI system development lifecycle (requirements, design, testing, approval)
- [ ] AI system deployment process (production release, rollback)
- [ ] AI system monitoring (performance, bias, incidents)
- [ ] AI system change management (updates, model changes)
- [ ] AI system retirement (decommissioning, data disposal)
[ ] Criteria established:
- [ ] Approval criteria for new AI systems (risk level, testing, documentation)
- [ ] Change approval criteria (impact assessment, testing, rollback plan)
[ ] Outsourced processes controlled:
- [ ] Third-party AI providers identified and assessed
- [ ] Vendor contracts include governance requirements
- [ ] Vendor performance monitored
- [ ] Vendor incidents escalated and resolved
SignalBreak support:
- ✅ Scenario monitoring: Tracks AI systems through lifecycle (operational oversight)
- ✅ Provider management: Monitors third-party AI vendors (outsourced process control)
- 📋 Change tracking: Logs AI provider model updates (change management evidence)
8.2 AI System Impact Assessment
Requirement: Conduct impact assessments for AI systems before deployment.
What this means:
- Assess potential impacts of AI systems on individuals, organizations, and society
- Identify risks (bias, discrimination, privacy, safety, transparency)
- Document mitigation measures
- Reassess when AI systems change significantly
Checklist:
[ ] Impact assessment process defined:
- [ ] Template or framework for assessments (e.g., algorithmic impact assessment, DPIA)
- [ ] Trigger criteria (when assessments required)
- [ ] Roles and responsibilities (who conducts, reviews, approves)
[ ] Assessments conducted for in-scope AI:
- [ ] High-risk AI assessed before deployment
- [ ] Impacts on individuals documented (rights, fairness, privacy, safety)
- [ ] Impacts on organization documented (legal, financial, reputational)
- [ ] Impacts on society considered (ethics, equity, environment)
- [ ] Mitigation measures identified and implemented
[ ] Reassessments triggered when needed:
- [ ] Significant changes to AI system (model updates, data changes, new use cases)
- [ ] Incidents or near-misses
- [ ] Regulatory changes
- [ ] Periodic review (annually or as defined)
SignalBreak support:
- ✅ Change detection: Alerts when AI providers update models (triggers reassessment)
- 📋 Documentation: Evidence packs include AI provider info for impact assessments
- ✅ Risk signals: Detect performance degradation or incidents (trigger reassessment)
8.3 Data Management
Requirement: Manage data throughout the AI system lifecycle.
What this means:
- Ensure data used for AI is appropriate, lawful, and of sufficient quality
- Protect data privacy and security
- Document data sources, processing, and retention
Checklist:
[ ] Data governance established:
- [ ] Data sources identified for each AI system
- [ ] Data quality standards defined (accuracy, completeness, timeliness)
- [ ] Data privacy requirements documented (consent, lawful basis, minimization)
- [ ] Data security controls implemented (encryption, access control)
[ ] Data managed throughout lifecycle:
- [ ] Training data: Source, quality, bias assessment, retention
- [ ] Operational data: Inputs to AI, outputs from AI, user interactions
- [ ] Metadata: Data lineage, processing history, audit logs
[ ] Data issues addressed:
- [ ] Data quality monitoring (detect drift, errors, anomalies)
- [ ] Data privacy incidents handled (breach response, notification)
- [ ] Data bias identified and mitigated
SignalBreak support:
- ⚙️ Third-party data governance: SignalBreak helps monitor AI providers, but data governance for training/operational data is the organization's responsibility
- 📋 Documentation: Evidence packs show which AI providers process which data (data flow mapping)
8.4 AI System Development and Maintenance
Requirement: Control the development and maintenance of AI systems.
What this means:
- Apply secure development practices to AI systems
- Test AI for functionality, performance, bias, and security
- Manage AI system updates and changes
- Maintain documentation of AI system design and operation
Checklist:
[ ] Development controlled:
- [ ] Requirements defined for each AI system
- [ ] Design documented (architecture, data flows, algorithms)
- [ ] Testing performed (functional, performance, bias, security)
- [ ] Approval required before production deployment
[ ] Maintenance controlled:
- [ ] Change management process applied to AI updates
- [ ] Testing performed before deploying updates
- [ ] Rollback plan documented
- [ ] Changes logged and traceable
[ ] Documentation maintained:
- [ ] AI system documentation kept current
- [ ] Version history tracked
- [ ] Configuration management applied
SignalBreak support:
- ✅ Model update tracking: Logs when AI providers release new versions (change management)
- ✅ Alerts: Notify when AI providers update models (triggers testing and approval workflow)
- 📋 Audit trail: Documents which AI model versions were deployed when
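The change-management trigger described above amounts to diffing the currently observed model versions against the last approved snapshot. The sketch below is illustrative (field names are assumptions, not SignalBreak's data model): any provider whose version changed gets flagged for re-testing and approval.

```python
def detect_model_updates(approved: dict[str, str], observed: dict[str, str]) -> list[str]:
    """Return provider names whose model version differs from the last
    approved snapshot, flagging them for re-testing before continued use.
    New providers not in the approved snapshot are also flagged."""
    return [
        name for name, version in observed.items()
        if approved.get(name) != version
    ]
```

Feeding the flagged list into your change-approval workflow gives each model update a documented test-and-approve step, which is the evidence an auditor will ask for under 8.4.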
8.5 AI System Operation and Monitoring
Requirement: Operate and monitor AI systems to ensure they function as intended.
What this means:
- Monitor AI system performance, accuracy, and fairness in production
- Detect anomalies, errors, and degradation
- Respond to incidents promptly
- Collect feedback from users and stakeholders
Checklist:
[ ] Monitoring implemented:
- [ ] Performance monitoring (uptime, response time, throughput)
- [ ] Accuracy monitoring (output quality, error rates)
- [ ] Fairness monitoring (bias, disparate impact)
- [ ] Security monitoring (unauthorized access, data breaches)
[ ] Alerting configured:
- [ ] Thresholds defined for critical metrics
- [ ] Alerts sent to responsible parties
- [ ] Escalation procedures for urgent issues
[ ] Incident response capability:
- [ ] Incident response plan documented
- [ ] Roles and responsibilities assigned
- [ ] Incident logging and tracking
- [ ] Post-incident reviews conducted
[ ] Feedback collected:
- [ ] User feedback mechanisms (surveys, complaints, support tickets)
- [ ] Stakeholder feedback solicited (customers, regulators, advocates)
- [ ] Feedback analyzed and acted upon
SignalBreak support:
- ✅ Continuous monitoring: Automatic signal detection for AI provider issues
- ✅ Alerting: Email notifications for critical signals
- ✅ Incident tracking: Signal logs provide audit trail of incidents
- 📋 Reports: Monthly governance reports show monitoring coverage and signal trends
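Threshold-based alerting, as required by the checklist above, can be reduced to comparing monitored metrics against defined limits. The limits and metric names below are illustrative assumptions; your organization sets the real thresholds per AI system.

```python
# Illustrative thresholds; real values come from your monitoring policy.
THRESHOLDS = {"error_rate": 0.05, "p95_latency_ms": 2000}

def check_thresholds(metrics: dict[str, float]) -> list[str]:
    """Return an alert message for each monitored metric that
    exceeds its defined threshold."""
    return [
        f"{name} breached: {value} > {limit}"
        for name, limit in THRESHOLDS.items()
        if (value := metrics.get(name)) is not None and value > limit
    ]
```

Each returned message would then be routed through the escalation procedure to the responsible party, satisfying the "alerts sent to responsible parties" item.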
8.6 Human Oversight
Requirement: Implement human oversight of AI systems where appropriate.
What this means:
- AI decisions with significant impacts require meaningful human review
- Humans can understand, question, and override AI recommendations
- Clear escalation paths from AI to human decision-makers
Checklist:
[ ] Oversight requirements defined:
- [ ] Identify which AI systems require human oversight (high-risk, rights-impacting)
- [ ] Define level of oversight (human-in-the-loop, human-on-the-loop, human-in-command)
- [ ] Specify when human review is triggered
[ ] Oversight implemented:
- [ ] Human review workflows designed and deployed
- [ ] Humans trained to review AI outputs critically
- [ ] Humans empowered to override AI when appropriate
- [ ] Escalation paths documented
[ ] Oversight documented:
- [ ] Human review decisions logged
- [ ] Override reasons documented
- [ ] Effectiveness of oversight evaluated
SignalBreak support:
- ⚙️ Human oversight: SignalBreak supports humans monitoring AI providers, but human review of individual AI decisions must be implemented in your application logic
- 📋 Audit trail: Documents when humans intervened (e.g., paused AI usage, switched providers)
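Since decision-level review lives in application logic, a human-in-the-loop gate is typically a predicate evaluated before an AI output is acted on. This is a hypothetical sketch: the `impact` and `confidence` fields and the 0.7 threshold are placeholders for criteria your oversight policy defines.

```python
def requires_human_review(decision: dict, confidence_threshold: float = 0.7) -> bool:
    """Route an AI decision to a human reviewer when it is
    rights-impacting or the model's confidence is below threshold.
    Field names and threshold are illustrative, set by oversight policy."""
    return (
        decision.get("impact") == "rights-impacting"
        or decision.get("confidence", 0.0) < confidence_threshold
    )
```

Logging every invocation of this gate, together with the reviewer's decision and any override reason, produces the oversight documentation the checklist calls for.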
8.7 AI System Supply Chain Management
Requirement: Manage AI-related third parties throughout the supply chain.
What this means:
- Identify and assess third-party AI providers, data providers, and service providers
- Establish requirements in contracts (performance, security, governance)
- Monitor vendor performance and compliance
- Manage vendor risks (concentration, failures, security breaches)
Checklist:
[ ] Vendors identified and assessed:
- [ ] AI providers cataloged (OpenAI, Anthropic, Google, etc.)
- [ ] Data providers identified (training data, external APIs)
- [ ] Service providers identified (cloud hosting, model serving, MLOps)
- [ ] Risk assessment conducted for each vendor
[ ] Vendor requirements defined:
- [ ] Performance standards (uptime, response time, accuracy)
- [ ] Security requirements (data protection, access control, breach notification)
- [ ] Governance requirements (model documentation, bias testing, transparency)
- [ ] Audit rights (ability to review vendor practices)
[ ] Contracts include requirements:
- [ ] Requirements incorporated into vendor agreements
- [ ] Liability and indemnification addressed
- [ ] Termination and transition provisions
[ ] Vendor performance monitored:
- [ ] Regular performance reviews
- [ ] Incident tracking and escalation
- [ ] Non-compliance addressed
- [ ] Vendor risks managed (concentration, financial health, geopolitical)
SignalBreak support:
- ✅ Vendor monitoring: Real-time tracking of AI provider performance and incidents
- ✅ Concentration risk: Reports show dependency on each vendor
- ✅ Fallback providers: Configure alternatives to mitigate vendor risk
- 📋 Vendor reports: Document vendor performance for contract reviews and audits
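Vendor concentration risk, as flagged above, is often measured as each provider's share of total AI workload checked against a policy limit. A minimal sketch, assuming usage counts per provider and an illustrative 50% limit (matching the example target in 6.2):

```python
def provider_concentration(usage: dict[str, int]) -> dict[str, float]:
    """Share of total AI workload handled by each provider."""
    total = sum(usage.values())
    return {name: count / total for name, count in usage.items()}

def concentration_breaches(usage: dict[str, int], limit: float = 0.5) -> list[str]:
    """Providers whose workload share exceeds the organization's
    concentration limit (limit is policy-defined; 0.5 is illustrative)."""
    return [
        name for name, share in provider_concentration(usage).items()
        if share > limit
    ]
```

A breach here does not mandate switching vendors; it triggers the mitigation planning in Clause 6 (for example, configuring a fallback provider for the over-concentrated workload).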
Clause 9: Performance Evaluation
9.1 Monitoring, Measurement, Analysis, and Evaluation
Requirement: Monitor and measure the performance of the AI management system.
What this means:
- Define metrics to assess AIMS effectiveness (e.g., incident response time, risk reduction)
- Collect data regularly
- Analyze trends and identify improvement opportunities
Checklist:
[ ] Metrics defined:
- [ ] AI governance coverage (% of AI systems monitored)
- [ ] Risk metrics (open risks, risk reduction rate)
- [ ] Incident metrics (incident count, response time, resolution time)
- [ ] Compliance metrics (% of impact assessments completed, audit findings)
- [ ] Vendor metrics (provider uptime, model update frequency)
[ ] Data collected:
- [ ] Automated data collection where possible (SignalBreak logs)
- [ ] Manual data collection for qualitative metrics
- [ ] Data stored securely and retained per policy
[ ] Analysis performed:
- [ ] Monthly or quarterly metric reviews
- [ ] Trends identified (improving, stable, degrading)
- [ ] Root causes analyzed for adverse trends
- [ ] Improvement actions identified
[ ] Results communicated:
- [ ] Metrics reported to leadership
- [ ] Metrics shared with AI governance team
- [ ] Results inform planning and improvement
SignalBreak support:
- 📋 Metrics dashboard: Shows key governance metrics (signal count, provider concentration, scenario coverage)
- 📋 Reports: Monthly governance reports provide trend analysis
- ✅ Data export: Export signal logs for custom analysis
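Exported signal logs can feed custom metric analysis such as the incident response time target in 6.2. The record fields below (`detected_at`, `acknowledged_at`) are illustrative assumptions about an export format, not a documented SignalBreak schema.

```python
from datetime import datetime

def mean_response_minutes(signals: list[dict]) -> float:
    """Average minutes between signal detection and acknowledgement,
    computed from exported signal-log records. Unacknowledged signals
    are excluded; field names are illustrative."""
    deltas = [
        (
            datetime.fromisoformat(s["acknowledged_at"])
            - datetime.fromisoformat(s["detected_at"])
        ).total_seconds() / 60
        for s in signals
        if s.get("acknowledged_at")
    ]
    return sum(deltas) / len(deltas) if deltas else 0.0
```

Tracking this number monthly gives the trend data Clause 9.1 asks for, and comparing it against the "<1 hour" style target in 6.2 closes the loop between objectives and measurement.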
9.2 Internal Audit
Requirement: Conduct internal audits of the AI management system.
What this means:
- Periodically audit compliance with ISO 42001 requirements
- Evaluate effectiveness of controls and processes
- Identify non-conformities and improvement opportunities
Checklist:
[ ] Audit program established:
- [ ] Audit schedule defined (e.g., annual full audit, quarterly focused audits)
- [ ] Audit scope covers all ISO 42001 clauses
- [ ] Audit criteria defined (ISO 42001 requirements, organizational policies)
- [ ] Auditors assigned (internal staff or external consultants)
[ ] Audits conducted:
- [ ] Audit plan created for each audit
- [ ] Evidence reviewed (documentation, records, interviews)
- [ ] Findings documented (conformities, non-conformities, observations)
- [ ] Audit report issued to management
[ ] Findings addressed:
- [ ] Non-conformities prioritized by severity
- [ ] Corrective actions planned and implemented
- [ ] Effectiveness of corrections verified
- [ ] Audit results inform management review
SignalBreak support:
- 📋 Evidence packs: Compile documentation for auditors in one click
- 📋 Audit trail: Complete logs of AI governance activities (signal detection, triage, resolution)
- ✅ Compliance reporting: Show coverage of AI systems, signal response, and policy adherence
9.3 Management Review
Requirement: Top management shall review the AI management system at planned intervals.
What this means:
- Leadership reviews AIMS performance and suitability
- Input includes audit results, incidents, metrics, and stakeholder feedback
- Output includes decisions on improvements, resource allocation, and policy changes
Checklist:
[ ] Review schedule established:
- [ ] Frequency defined (e.g., quarterly or semi-annually)
- [ ] Attendees identified (executive sponsor, AI governance committee, risk managers)
[ ] Review inputs prepared:
- [ ] Status of previous management review actions
- [ ] Changes in external and internal issues affecting AIMS
- [ ] Performance metrics and trends
- [ ] Audit findings and corrective actions
- [ ] Incidents and lessons learned
- [ ] Feedback from stakeholders
- [ ] Opportunities for improvement
[ ] Review conducted:
- [ ] Management review meeting held
- [ ] Inputs discussed and decisions made
- [ ] Minutes documented
[ ] Review outputs acted upon:
- [ ] Improvement opportunities prioritized
- [ ] Resources allocated for improvements
- [ ] Policy updates approved
- [ ] Actions assigned with owners and timelines
SignalBreak support:
- 📋 Management review pack: Generate summary reports for leadership review (metrics, incidents, risks)
- ✅ Trend analysis: Dashboards show performance over time (input for review)
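As an illustration of the kind of trend input a management review can use, here is a minimal Python sketch that aggregates detected signals into monthly counts so reviewers can see whether issues are trending up or down. The record format is a hypothetical example for illustration, not a real SignalBreak export schema:

```python
from collections import Counter
from datetime import date

def monthly_signal_counts(signals):
    """Aggregate signal records into per-month counts.

    `signals` is a list of dicts with a `detected` date field --
    a hypothetical export format, not a SignalBreak schema.
    """
    counts = Counter(s["detected"].strftime("%Y-%m") for s in signals)
    return dict(sorted(counts.items()))

signals = [
    {"detected": date(2024, 1, 10), "type": "model_update"},
    {"detected": date(2024, 1, 22), "type": "outage"},
    {"detected": date(2024, 2, 5), "type": "outage"},
]
print(monthly_signal_counts(signals))  # {'2024-01': 2, '2024-02': 1}
```

A rising month-over-month count for the same signal type is exactly the kind of evidence that turns a management review from a status meeting into an improvement decision.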
Clause 10: Improvement
10.1 Continual Improvement
Requirement: Continually improve the suitability, adequacy, and effectiveness of the AI management system.
What this means:
- Use insights from monitoring, audits, incidents, and reviews to improve AIMS
- Implement improvements systematically
- Measure effectiveness of improvements
Checklist:
[ ] Improvement culture established:
- [ ] Leadership encourages continuous improvement
- [ ] Team empowered to propose improvements
- [ ] Improvements recognized and celebrated
[ ] Improvements identified:
- [ ] From incident post-mortems (lessons learned)
- [ ] From audit findings
- [ ] From management reviews
- [ ] From employee suggestions
[ ] Improvements implemented:
- [ ] Improvement proposals evaluated and prioritized
- [ ] Actions planned with owners and timelines
- [ ] Changes deployed and communicated
- [ ] Effectiveness measured
SignalBreak support:
- ✅ Signal trends: Identify recurring issues requiring systemic improvement
- 📋 Incident reports: Support post-mortem analysis and lesson capture
- ✅ Platform updates: SignalBreak continuously improves with new features and signals
10.2 Nonconformity and Corrective Action
Requirement: When nonconformity occurs, take action to control and correct it, and deal with the consequences.
What this means:
- Identify when ISO 42001 requirements are not met (audit findings, incidents, policy violations)
- Correct the nonconformity
- Analyze root cause and implement corrective action to prevent recurrence
Checklist:
[ ] Nonconformity process defined:
- [ ] How nonconformities are identified and logged
- [ ] Who is responsible for investigating and correcting
- [ ] Timelines for correction
[ ] Nonconformities addressed:
- [ ] Immediate action taken to control consequences
- [ ] Root cause analyzed (5 Whys, fishbone diagram, etc.)
- [ ] Corrective action planned to prevent recurrence
- [ ] Corrective action implemented
- [ ] Effectiveness verified
[ ] Records maintained:
- [ ] Nonconformity log
- [ ] Root cause analysis documentation
- [ ] Corrective action plans and status
- [ ] Verification of effectiveness
SignalBreak support:
- ✅ Signal tracking: Log nonconformities (e.g., missed AI outages, delayed responses)
- 📋 Audit trail: Document corrective actions taken
- ✅ Alerts: Prevent recurrence by configuring alerts for early detection
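The nonconformity records listed above can live in any tracker. As a minimal sketch (generic Python for illustration, not a SignalBreak feature), a log entry might carry the fields the checklist asks for, making open items easy to report:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Nonconformity:
    """One entry in the nonconformity log (illustrative fields only)."""
    description: str
    identified: date
    owner: str
    root_cause: str = ""          # filled in after analysis (e.g. 5 Whys)
    corrective_action: str = ""   # action taken to prevent recurrence
    effectiveness_verified: bool = False

def open_items(log):
    """Nonconformities still awaiting verified corrective action."""
    return [nc for nc in log if not nc.effectiveness_verified]

log = [
    Nonconformity("Missed provider outage alert", date(2024, 3, 1), "risk-team"),
    Nonconformity("Impact assessment overdue", date(2024, 3, 9), "governance",
                  root_cause="No trigger criteria defined",
                  corrective_action="Defined reassessment triggers",
                  effectiveness_verified=True),
]
print(len(open_items(log)))  # 1
```

Keeping `effectiveness_verified` as an explicit field mirrors the checklist's final step: a corrective action is not closed until its effectiveness has been verified.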
ISO 42001 Implementation Roadmap
Phase 1: Foundation (Months 1-2)
Goal: Establish basic AI governance structure and documentation.
Actions:
- Assign executive sponsor and form AI governance committee
- Define scope of AI management system
- Conduct initial AI inventory (identify all AI systems in use)
- Draft AI policy
- Deploy SignalBreak and configure monitoring for top 10 AI systems
Deliverables:
- AI policy (draft)
- Scope statement
- AI inventory
- SignalBreak deployed with initial scenarios
Phase 2: Risk Assessment (Months 3-4)
Goal: Assess AI risks and conduct impact assessments for high-risk systems.
Actions:
- Develop impact assessment template
- Conduct impact assessments for high-risk AI
- Identify third-party AI provider risks
- Document risk mitigation plans
- Configure fallback providers in SignalBreak for critical AI
Deliverables:
- Impact assessments for top 10 AI systems
- AI risk register
- Vendor risk assessments
- Fallback provider configurations
Phase 3: Operational Controls (Months 5-6)
Goal: Implement operational processes and controls for AI management.
Actions:
- Document AI lifecycle processes (development, deployment, monitoring, retirement)
- Establish human oversight procedures for high-risk AI
- Configure comprehensive monitoring in SignalBreak (all in-scope AI)
- Implement incident response process
- Train team on AI governance processes
Deliverables:
- AI lifecycle process documentation
- Human oversight procedures
- Complete SignalBreak monitoring (all in-scope AI)
- Incident response playbook
- Training completion records
Phase 4: Monitoring & Audit (Months 7-9)
Goal: Establish performance monitoring and prepare for certification audit.
Actions:
- Define AIMS performance metrics
- Conduct internal audit using this checklist
- Address audit findings with corrective actions
- Conduct management review
- Generate evidence packs for certification audit
Deliverables:
- AIMS metrics dashboard
- Internal audit report
- Corrective action plans
- Management review minutes
- Certification audit evidence packs
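One of the Phase 4 metrics, signal response time, can be computed directly from detection and resolution timestamps. Here is a minimal sketch under the assumption that each signal record carries both timestamps; the record format is hypothetical, not a SignalBreak export:

```python
from datetime import datetime
from statistics import mean

def mean_response_hours(records):
    """Average hours from signal detection to resolution.

    Unresolved signals (no `resolved` timestamp) are excluded;
    returns None when no signals have been resolved yet.
    """
    deltas = [
        (r["resolved"] - r["detected"]).total_seconds() / 3600
        for r in records
        if r.get("resolved")
    ]
    return mean(deltas) if deltas else None

records = [
    {"detected": datetime(2024, 4, 1, 9, 0), "resolved": datetime(2024, 4, 1, 13, 0)},
    {"detected": datetime(2024, 4, 2, 10, 0), "resolved": datetime(2024, 4, 2, 12, 0)},
]
print(mean_response_hours(records))  # 3.0
```

Tracked over time, this single number gives the management review a concrete trend for Clause 9.1 performance evaluation.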
Phase 5: Certification (Months 10-12)
Goal: Achieve ISO 42001 certification.
Actions:
- Select accredited certification body
- Undergo Stage 1 audit (documentation review)
- Address Stage 1 findings
- Undergo Stage 2 audit (on-site assessment)
- Address any remaining findings
- Receive ISO 42001 certificate
Deliverables:
- Stage 1 audit report
- Stage 2 audit report
- ISO 42001 certificate
Certification Audit Tips
What Auditors Will Review
- Documentation: Policies, procedures, records, evidence of implementation
- Interviews: Staff understanding of AI governance roles and responsibilities
- Observations: Auditors observe processes in action (e.g., signal triage, incident response)
- Systems: Review of SignalBreak configuration and reports
Common Audit Findings (and How to Avoid Them)
| Finding | Root Cause | Prevention |
|---|---|---|
| Incomplete AI inventory | Decentralized AI adoption, no discovery process | Use SignalBreak to continuously discover AI across the organization |
| Missing impact assessments | No trigger criteria or process | Define when impact assessments are required and assign ownership |
| Inadequate vendor monitoring | No governance platform or manual tracking | Deploy SignalBreak for automated vendor monitoring |
| No evidence of monitoring | Monitoring done but not documented | Use SignalBreak evidence packs to compile audit trail |
| Unclear roles and responsibilities | RACI not defined or communicated | Document roles, assign in SignalBreak (Admins, Members, Viewers) |
| Incident response not tested | Plan exists but never exercised | Conduct a tabletop exercise simulating an AI outage or bias incident |
| Management review not conducted | Leadership not engaged | Schedule recurring management reviews, use SignalBreak reports as input |
Questions Auditors Often Ask
"How do you know about all the AI systems in your organization?"
- Answer: "We use SignalBreak to continuously monitor AI across scenarios, and we maintain an AI inventory updated quarterly."
"How do you manage third-party AI provider risks?"
- Answer: "SignalBreak monitors AI provider performance, detects incidents, and alerts us to risks. We configure fallback providers to reduce concentration risk."
"What happens when an AI provider updates their model?"
- Answer: "SignalBreak alerts us when model updates occur. We then trigger an impact reassessment and test the new model before deploying to production."
"Can you show me evidence of human oversight for this high-risk AI system?"
- Answer: "Yes, here's our human review workflow documentation, and here are logs from SignalBreak showing when humans intervened."
"How do you know your AI management system is effective?"
- Answer: "We track metrics like signal response time, risk reduction, and audit findings. Management reviews these metrics quarterly and drives improvements."
SignalBreak Features Supporting ISO 42001
| ISO 42001 Requirement | SignalBreak Feature | How It Helps |
|---|---|---|
| 4.3: Scope definition | Organization & scenarios | Define which AI systems are in scope for governance |
| 4.4: AI inventory | Scenario list & provider bindings | Automatically maintain list of AI systems and providers |
| 6.1: Risk identification | Signal detection | Automatically detect AI risks (outages, model updates, concentration) |
| 6.1: Risk mitigation | Fallback providers | Configure alternatives to mitigate vendor risk |
| 7.4: Communication | Email alerts & dashboards | Notify stakeholders when AI risks emerge |
| 7.5: Documentation | Evidence packs | Compile governance documentation for audit |
| 8.1: Operational control | Scenario monitoring | Continuous oversight of AI systems in production |
| 8.2: Impact assessment trigger | Model update alerts | Alert when AI changes, triggering reassessment |
| 8.5: AI monitoring | Continuous signal detection | Monitor AI provider performance and incidents |
| 8.7: Vendor management | Provider tracking | Monitor third-party AI vendor performance and compliance |
| 9.1: Performance metrics | Dashboard & reports | Track governance metrics (coverage, signal count, response time) |
| 9.2: Internal audit evidence | Evidence packs & audit trail | Provide complete documentation for auditors |
| 9.3: Management review inputs | Monthly governance reports | Summary reports for leadership review |
| 10.1: Continuous improvement | Signal trends | Identify recurring issues requiring systemic improvement |
Next Steps
- Download this checklist: Use it to assess your current ISO 42001 readiness
- Conduct gap analysis: Identify which requirements are not yet met
- Deploy SignalBreak: Start with AI inventory and monitoring to address foundational requirements
- Prioritize actions: Focus on high-risk AI systems and mandatory requirements first
- Track progress: Update checklist regularly and report to leadership
- Engage certification body: When ready, select accredited auditor and schedule certification audit
Related Documentation
- NIST AI RMF Checklist - U.S. AI risk management framework
- EU AI Act Checklist - European AI regulation compliance
- Government AI Governance - Public sector AI governance guide
- Financial Services AI Governance - Banking AI governance
- Healthcare AI Governance - Clinical AI governance
Support
- Documentation: Help Center
- Email: support@signal-break.com
- ISO 42001 consulting: certification@signal-break.com