EU AI Act Compliance Checklist
Overview
The European Union Artificial Intelligence Act (EU AI Act) is the world's first comprehensive legal framework for artificial intelligence. Adopted in June 2024 and in force since August 2024, it establishes harmonized rules for the development, placement on the market, and use of AI systems in the EU.
Timeline:
- August 1, 2024: Regulation entered into force
- February 2, 2025: Prohibited AI systems banned
- August 2, 2025: Governance rules, penalties, and obligations for general-purpose AI (GPAI) models apply
- August 2, 2026: General obligations for high-risk AI apply (main deadline)
- August 2, 2027: Obligations for high-risk AI in products covered by existing EU legislation apply
Who must comply:
- Providers: Organizations that develop or place AI systems on the EU market (includes non-EU providers)
- Deployers: Organizations that use AI systems under their authority (EU establishments and non-EU when output used in EU)
- Distributors, importers, and product manufacturers: Supply chain actors
- General-purpose AI (GPAI) providers: OpenAI, Anthropic, Google, etc.
What it regulates:
- Prohibited AI: Unacceptable risk (social scoring, subliminal manipulation, biometric categorization based on sensitive attributes)
- High-risk AI: Significant risk to health, safety, fundamental rights (employment, credit, law enforcement, critical infrastructure)
- Limited-risk AI: Transparency obligations (chatbots, emotion recognition, deepfakes)
- Minimal-risk AI: No specific obligations (most AI systems)
How SignalBreak helps:
SignalBreak supports EU AI Act compliance by:
- Monitoring third-party AI providers (including GPAI providers like OpenAI, Anthropic)
- Creating audit trails showing AI system monitoring and incident response
- Detecting AI model updates that may require conformity reassessment
- Providing documentation for compliance reporting and audits
How to Use This Checklist
- Determine your role: Are you a Provider (develop/sell AI), Deployer (use AI), or both?
- Classify your AI systems: Prohibited, High-risk, Limited-risk, or Minimal-risk
- Review applicable obligations: Focus on sections relevant to your role and AI classification
- Assess compliance gaps: Identify requirements not yet met
- Implement controls: Use SignalBreak and organizational policies to address gaps
- Prepare for audits: Maintain documentation for conformity assessment and enforcement
Checklist symbols:
- ✅ SignalBreak helps directly: Platform feature supports this obligation
- 📋 SignalBreak provides evidence: Platform generates documentation for compliance
- ⚙️ Organization must implement: Policy, process, or technical control (SignalBreak supports monitoring)
- ⚠️ Critical requirement: Non-compliance can trigger the largest fines (up to €35 million or 7% of global annual turnover for the most serious infringements)
Step 1: Determine Your Role Under the EU AI Act
Are You a Provider?
You are a Provider if you:
- Develop an AI system and place it on the EU market (under your name or trademark)
- Develop an AI system for your own use (in-house development for in-house use)
- Substantially modify an AI system already placed on the market (you take on the Provider's obligations for the modified system)
- Put your name/trademark on an AI system developed by others
Examples:
- Software company developing AI-powered CRM system for EU customers
- Bank developing fraud detection AI for internal use
- Consultancy substantially customizing OpenAI's API for client deployment
Key obligations for Providers:
- Conformity assessment and CE marking (for high-risk AI)
- Risk management system
- Data governance and technical documentation
- Transparency and human oversight
- Post-market monitoring and incident reporting
- Registration in EU database
Are You a Deployer?
You are a Deployer if you:
- Use an AI system under your authority (as a tool in your operations)
- Do NOT develop or substantially modify the AI
Examples:
- Retailer using AI chatbot from vendor for customer service
- HR department using AI recruitment tool from SaaS provider
- Hospital using AI diagnostic tool from medical device manufacturer
Key obligations for Deployers:
- Use AI in accordance with instructions
- Human oversight during AI use
- Monitor AI for risks and incidents
- Inform Provider and authorities of serious incidents
- Data protection impact assessment (if required)
- Transparency (inform affected persons about AI use)
Both Provider and Deployer?
Many organizations are both:
- Provider for AI systems you develop/sell to others
- Deployer for AI systems you purchase/use from vendors
Example: Bank develops fraud detection AI (Provider role) and also uses OpenAI's GPT-4 for customer chatbots (Deployer role).
Implication: You must comply with obligations for both roles, depending on the AI system.
Step 2: Classify Your AI Systems
Risk Classification Framework
The EU AI Act uses a risk-based approach. Each AI system must be classified into one of four categories:
1. Prohibited AI (Unacceptable Risk) ⚠️
What it is: AI systems that pose unacceptable risk to fundamental rights and are banned in the EU.
Examples:
- Social scoring by public authorities or on their behalf
- Subliminal manipulation causing harm (e.g., a toy with a voice assistant that encourages dangerous behavior)
- Exploitation of vulnerabilities (age, disability, socioeconomic situation)
- Biometric categorization based on sensitive attributes (race, political opinions, religion, sexual orientation)
- Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions)
- Emotion recognition in the workplace or educational institutions (except for medical or safety reasons)
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
Checklist:
- [ ] ⚠️ Review all AI systems for prohibited uses
- [ ] ⚠️ Immediately discontinue any prohibited AI (deadline: February 2, 2025)
- [ ] ⚠️ Document discontinuation (evidence of compliance)
SignalBreak support:
- ⚙️ Prohibited AI: Organization must ensure no prohibited AI is used
- 📋 Documentation: Evidence packs support compliance documentation
2. High-Risk AI
What it is: AI systems that pose significant risk to health, safety, or fundamental rights and are subject to strict requirements.
High-risk AI categories (Annex III):
- Biometrics: Remote biometric identification, biometric categorization based on sensitive attributes, emotion recognition (where not prohibited)
- Critical infrastructure: Safety component of infrastructure (water, gas, electricity, transport)
- Education and training: Determining access or assigning persons to educational institutions, assessing learning outcomes, detecting cheating, evaluating comprehension
- Employment: Recruiting, screening, promotion, task allocation, monitoring/evaluation, termination
- Access to essential services: Creditworthiness, risk assessment for insurance, emergency services dispatch, eligibility for benefits
- Law enforcement: Assessing risk of offending, polygraphs, evaluating evidence, assessing risk of victimization, profiling in investigations
- Migration and border control: Polygraphs, assessing risk to public security/health/migration, verifying travel documents, assessing applications
- Administration of justice and democratic processes: Assisting judicial authorities in researching and interpreting facts and law; AI intended to influence the outcome of elections or referenda
Examples:
- AI recruiting tool screening job candidates
- AI credit scoring for loan decisions
- AI detecting fraudulent insurance claims
- AI-powered medical diagnostic support
- AI optimizing power grid operations
- AI proctoring for online exams
Checklist:
- [ ] Identify high-risk AI systems in your organization
- [ ] Classify each as Provider or Deployer role
- [ ] Apply relevant obligations (see sections below)
SignalBreak support:
- ✅ AI inventory: Identify and track high-risk AI systems via scenarios
- 📋 Documentation: Evidence packs for conformity assessment
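To make the classification exercise concrete, here is a minimal sketch (in Python) of an AI inventory record with a simplified risk-classification helper. The field names, category labels, and decision logic are illustrative assumptions, not an official schema, and a real assessment also has to check the Article 5 prohibitions and the Article 6 conditions.

```python
# Illustrative sketch only -- not an official EU AI Act schema.
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"   # transparency obligations only
    MINIMAL = "minimal"

# Annex III headings paraphrased from the list above.
ANNEX_III_CATEGORIES = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border", "justice",
}

@dataclass
class AISystemRecord:
    name: str
    role: str                        # "provider", "deployer", or "both"
    intended_purpose: str
    annex_iii_category: str | None   # None if not an Annex III use case
    interacts_with_humans: bool = False
    generates_content: bool = False

    def risk_level(self) -> RiskLevel:
        # Simplified logic: prohibitions and Article 6 filters are out of scope here.
        if self.annex_iii_category in ANNEX_III_CATEGORIES:
            return RiskLevel.HIGH
        if self.interacts_with_humans or self.generates_content:
            return RiskLevel.LIMITED
        return RiskLevel.MINIMAL

# Example: an AI recruiting tool used under the Deployer role.
tool = AISystemRecord(
    name="CV screening assistant",
    role="deployer",
    intended_purpose="Rank inbound job applications",
    annex_iii_category="employment",
)
print(tool.risk_level())   # RiskLevel.HIGH
```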
3. Limited-Risk AI (Transparency Obligations)
What it is: AI systems that require transparency so users can make informed decisions.
Categories requiring transparency:
- AI interacting with humans: Chatbots, virtual assistants (users must be informed they're interacting with AI, unless obvious)
- Emotion recognition systems: Users must be informed when AI is used to detect emotions
- Biometric categorization systems: Users must be informed when AI categorizes them based on biometric data
- Deepfakes and AI-generated content: Content generated or manipulated by AI must be labeled (images, audio, video, text)
Examples:
- Customer service chatbot on website
- AI generating marketing images
- AI voice assistant for appointment booking
- AI creating product descriptions
Checklist:
- [ ] Identify AI systems with transparency obligations
- [ ] ⚙️ Implement disclosure to users (notifications, labels, terms)
- [ ] ⚙️ Label AI-generated content (watermarks, metadata, visible labels)
SignalBreak support:
- ⚙️ Transparency: Organization must implement disclosure; SignalBreak tracks which AI providers are used
- 📋 Documentation: Reports support transparency compliance
4. Minimal-Risk AI
What it is: AI systems that pose minimal or no risk and are not subject to specific obligations (besides general EU law).
Examples:
- AI spam filters
- AI video game opponents
- AI inventory optimization (not high-risk infrastructure)
- AI-powered search and recommendation engines (not manipulative)
Obligations:
- No specific EU AI Act requirements (but GDPR, consumer protection, sector laws still apply)
- Voluntary adherence to codes of conduct encouraged
SignalBreak support:
- ✅ Best practice: Monitor even minimal-risk AI for vendor risk management and operational resilience
Step 3: Obligations for Providers of High-Risk AI
If you are a Provider of high-risk AI (you develop or sell high-risk AI systems), you must comply with the following obligations:
3.1 Risk Management System ⚠️
Requirement (Article 9): Establish, implement, document, and maintain a risk management system throughout the AI system's lifecycle.
What this means:
- Continuous, iterative process to identify, analyze, estimate, and mitigate foreseeable risks
- Considers both intended use and reasonably foreseeable misuse
- Includes risk to health, safety, and fundamental rights
Checklist:
[ ] ⚠️ Risk management system established:
- [ ] Risk identification process (brainstorming, threat modeling, stakeholder input)
- [ ] Risk analysis and evaluation (likelihood, severity, affected populations)
- [ ] Risk mitigation measures (design, testing, documentation, human oversight)
- [ ] Risk acceptance criteria (what level of residual risk is acceptable)
[ ] Risk management throughout lifecycle:
- [ ] Risks assessed during design and development
- [ ] Risks reassessed before market placement
- [ ] Risks monitored post-market (incidents, complaints, performance data)
- [ ] Risks reassessed when AI system updated or context changes
[ ] Documentation maintained:
- [ ] Risk management plan
- [ ] Risk register (identified risks, mitigations, residual risks)
- [ ] Risk assessment reports
- [ ] Evidence of risk mitigation effectiveness
SignalBreak support:
- ✅ Risk identification: Signals automatically detect AI provider risks (outages, model updates, concentration)
- 📋 Risk documentation: Reports provide evidence for risk management system
- ✅ Post-market monitoring: Continuous monitoring supports ongoing risk management
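The risk register called for above can be kept as structured records, one per identified risk. The sketch below is a minimal illustration with assumed field names and a simple likelihood-times-severity score; it is not an Article 9 template.

```python
# Hypothetical risk-register entry -- field names are assumptions, not an Article 9 template.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    risk_id: str
    description: str              # e.g. "model drift degrades credit-score accuracy"
    affected_rights: list[str]    # health, safety, or fundamental rights touched
    likelihood: int               # 1 (rare) .. 5 (almost certain)
    severity: int                 # 1 (negligible) .. 5 (catastrophic)
    mitigations: list[str] = field(default_factory=list)
    residual_accepted: bool = False
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        # Likelihood x severity; acceptance criteria are an organizational decision
        # documented in the risk management plan.
        return self.likelihood * self.severity

register = [
    RiskEntry(
        risk_id="R-001",
        description="Third-party model update changes scoring behavior unannounced",
        affected_rights=["non-discrimination", "access to credit"],
        likelihood=3,
        severity=4,
        mitigations=["monitor provider release notes", "re-validate before rollout"],
    ),
]
needs_attention = [r for r in register if r.score >= 12 and not r.residual_accepted]
```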
3.2 Data Governance ⚠️
Requirement (Article 10): Training, validation, and testing datasets must be subject to appropriate data governance and management practices.
What this means:
- Data quality ensured (relevant, representative, accurate, complete)
- Bias examined and addressed
- Data provenance documented
- Privacy and security controls applied
Checklist:
[ ] ⚠️ Data governance practices established:
- [ ] Data quality standards defined (accuracy, completeness, representativeness)
- [ ] Data collection and labeling procedures documented
- [ ] Data provenance tracked (sources, acquisition methods, licenses)
- [ ] Data storage and security controls implemented
[ ] Bias examination and mitigation:
- [ ] Training data analyzed for bias (demographic representation, label bias, historical bias)
- [ ] Mitigation applied (rebalancing, debiasing, fairness constraints)
- [ ] Validation and test sets checked for bias
- [ ] Ongoing monitoring for data drift and bias
[ ] Privacy compliance:
- [ ] GDPR compliance verified (lawful basis, consent, data minimization, purpose limitation)
- [ ] Data protection impact assessment (DPIA) conducted if required
- [ ] Data subject rights enabled (access, rectification, erasure)
SignalBreak support:
- ⚙️ Data governance: Organization must implement; SignalBreak tracks AI providers processing data
- 📋 Documentation: Reports show which AI providers process which data (data flow mapping)
3.3 Technical Documentation ⚠️
Requirement (Article 11 & Annex IV): Draw up and maintain technical documentation demonstrating compliance with the EU AI Act.
What this means:
- Comprehensive documentation of AI system design, development, and operation
- Documentation available to authorities upon request
- Documentation kept up to date as AI system changes
Required documentation (Annex IV):
[ ] ⚠️ General description:
- [ ] AI system name, purpose, intended use
- [ ] Provider identity and contact
- [ ] Versions and update history
- [ ] Where AI is available (geographic, use cases)
[ ] Detailed description:
- [ ] AI system architecture and components
- [ ] Algorithms, techniques, and training methodology
- [ ] Data requirements and specifications
- [ ] Computational resources required
- [ ] Integration with other systems or products
[ ] Development process:
- [ ] Design choices and assumptions
- [ ] Development methodology and tools
- [ ] Testing and validation procedures
- [ ] Metrics used to measure performance, accuracy, bias
[ ] Risk management:
- [ ] Risk management plan and outputs
- [ ] Identified risks and mitigation measures
- [ ] Residual risks and justification for acceptance
[ ] Data governance:
- [ ] Training, validation, testing datasets described
- [ ] Data sources, collection methods, labeling procedures
- [ ] Bias examination results and mitigation
- [ ] Data quality metrics
[ ] Human oversight:
- [ ] Human oversight measures implemented
- [ ] Technical specifications for oversight interface
- [ ] Instructions for human overseers
[ ] Performance specifications:
- [ ] Intended purpose and performance metrics
- [ ] Accuracy, robustness, cybersecurity measures
- [ ] Expected lifetime and maintenance needs
SignalBreak support:
- 📋 AI system inventory: Scenarios provide structured documentation of AI systems
- 📋 Provider tracking: Documents which third-party AI providers are used
- 📋 Version control: Logs AI provider model updates (change history)
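One practical way to keep the Annex IV documentation current is to maintain it as structured data that is rendered into the formal document on demand. The skeleton below is a hypothetical sketch whose section names paraphrase the checklist above; it is not the regulation's official template, and all values are placeholders.

```python
# Hypothetical Annex IV-style documentation skeleton -- section names and values are illustrative.
technical_documentation = {
    "general_description": {
        "system_name": "CreditScore v2.3",
        "provider": "Example Bank AG",
        "intended_purpose": "Creditworthiness assessment for consumer loans",
        "versions": ["2.1", "2.2", "2.3"],
    },
    "detailed_description": {
        "architecture": "gradient-boosted trees plus rules engine",
        "training_methodology": "supervised learning on historical loan outcomes",
        "third_party_models": ["<GPAI provider / model name>"],
    },
    "development_process": {
        "validation_metrics": ["AUC", "false positive rate by demographic group"],
    },
    "risk_management": {"risk_register_ref": "R-001..R-017"},
    "data_governance": {"training_data_summary": "<summary>", "bias_analysis_ref": "<report id>"},
    "human_oversight": {"oversight_mode": "human-in-the-loop for declined applications"},
    "performance": {"accuracy_threshold": 0.92, "expected_lifetime_months": 24},
}
```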
3.4 Record-Keeping and Logging ⚠️
Requirement (Article 12): High-risk AI systems must have automatic logging capabilities.
What this means:
- AI system automatically logs events during operation
- Logs enable tracing of AI system functioning and investigating incidents
- Logs retained for appropriate period (consider data protection obligations)
What to log:
[ ] ⚠️ Operational logs:
- [ ] Period of use (start/end times)
- [ ] Reference database used (data accessed or processed)
- [ ] Input data (or reference to it, if permissible under data protection law)
- [ ] Persons involved in operation (human oversight, users)
[ ] Incident logs:
- [ ] Errors, malfunctions, and anomalies
- [ ] Unusual outputs or behaviors
- [ ] Overrides by human operators
[ ] Log retention and protection:
- [ ] Retention period defined (balance compliance, investigation, data protection)
- [ ] Logs protected from tampering (integrity, access control)
- [ ] Logs accessible to authorities upon request
SignalBreak support:
- ✅ Logging capability: SignalBreak logs AI provider usage, incidents, and responses (audit trail)
- 📋 Compliance evidence: Logs demonstrate record-keeping compliance
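As an illustration of the automatic logging described above, the sketch below writes one structured record per inference. The field names are assumptions rather than wording from Article 12, and raw input data should only be stored or referenced where data protection law permits.

```python
# Illustrative operational log record for Article 12-style automatic logging.
# Field names are assumptions, not the regulation's wording.
import hashlib
import json
from datetime import datetime, timezone

def log_inference(system_id: str, model_version: str, operator: str,
                  input_payload: dict, output_summary: str, overridden: bool) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "operator": operator,   # person exercising human oversight
        # Store a hash rather than raw input where data protection law requires it.
        "input_ref": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "output_summary": output_summary,
        "human_override": overridden,
    }
    line = json.dumps(record)
    # Append-only storage plus access controls help protect log integrity.
    with open("ai_operations.log", "a") as fh:
        fh.write(line + "\n")
    return line
```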
3.5 Transparency and Information for Deployers ⚠️
Requirement (Article 13): High-risk AI systems must be designed to provide transparency and information to Deployers.
What this means:
- Deployers must understand how AI works, what it can/cannot do, and how to use it properly
- Instructions for use provided (user manual)
- System design enables human oversight
Information to provide:
[ ] ⚠️ Instructions for use (user manual):
- [ ] AI system identity (name, version, provider)
- [ ] Intended purpose and use cases
- [ ] Performance characteristics (accuracy, error rates, limitations)
- [ ] Hardware and software requirements
- [ ] Instructions for installation, operation, maintenance
- [ ] Human oversight measures and instructions for overseers
- [ ] Expected lifetime and maintenance/update schedule
[ ] Transparency features:
- [ ] AI system characteristics relevant to Deployer (e.g., explainability of outputs)
- [ ] Changes made by Provider (version updates, model retraining)
- [ ] Incidents or malfunctions experienced post-market
[ ] Contact and support:
- [ ] Provider contact information for Deployer questions and incidents
- [ ] Support channels (technical support, compliance inquiries)
SignalBreak support:
- ⚙️ Instructions: Provider must create; SignalBreak helps Deployers monitor compliance with instructions
- 📋 Change notifications: Alerts when AI providers update models (Deployers informed of changes)
3.6 Human Oversight ⚠️
Requirement (Article 14): High-risk AI systems must be designed to enable effective human oversight.
What this means:
- Humans can intervene, monitor, and override AI decisions
- Human oversight prevents or minimizes risks to health, safety, fundamental rights
- Oversight can be built into AI system or external (organizational measures)
Human oversight measures:
[ ] ⚠️ Oversight capabilities implemented:
- [ ] Human-in-the-loop: Human reviews and approves each AI output before it takes effect
- [ ] Human-on-the-loop: Human monitors AI and can intervene or override if needed
- [ ] Human-in-command: Human can disable or stop AI system at any time
[ ] Technical enablement:
- [ ] AI system provides transparency (explanations, confidence scores, alerts for uncertain outputs)
- [ ] AI system enables intervention (stop button, override mechanism, manual mode)
- [ ] AI system alerts human when oversight needed (low confidence, anomaly detected)
[ ] Organizational measures:
- [ ] Human overseers identified and assigned
- [ ] Overseers trained and competent
- [ ] Overseers have authority and tools to intervene
- [ ] Oversight procedures documented
SignalBreak support:
- ⚙️ Human oversight: Provider/Deployer must implement; SignalBreak supports humans overseeing AI providers
- 📋 Audit trail: Logs show when humans intervened (e.g., switched providers, paused AI)
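As a sketch of how the oversight modes above can be wired into an application, the example below routes low-confidence AI outputs to a human reviewer before they take effect, while higher-confidence outputs remain subject to monitoring and can still be stopped. The threshold and function names are assumptions.

```python
# Sketch of an oversight gate: low-confidence AI outputs only take effect after human approval.
# Threshold and callback names are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AIDecision:
    subject_id: str
    recommendation: str   # e.g. "reject_application"
    confidence: float     # 0.0 .. 1.0

def apply_with_oversight(decision: AIDecision,
                         reviewer: Callable[[AIDecision], bool],
                         auto_threshold: float = 0.95) -> str:
    # Human-in-the-loop: consequential decisions below the confidence threshold
    # are routed to a trained overseer before they take effect.
    if decision.confidence < auto_threshold:
        approved = reviewer(decision)   # human reviews and approves or overrides
        return "applied" if approved else "overridden_by_human"
    # Auto-applied decisions stay subject to human-on-the-loop monitoring and
    # can be stopped at any time (human-in-command).
    return "applied"

result = apply_with_oversight(
    AIDecision(subject_id="A-123", recommendation="reject_application", confidence=0.71),
    reviewer=lambda d: False,   # overseer disagrees and overrides in this example
)
print(result)   # overridden_by_human
```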
3.7 Accuracy, Robustness, and Cybersecurity ⚠️
Requirement (Article 15): High-risk AI systems must achieve appropriate accuracy, robustness, and cybersecurity.
What this means:
- AI performs as intended with acceptable error rates
- AI resilient to errors, faults, and attempts to manipulate (adversarial attacks)
- AI protected from cybersecurity threats
Checklist:
[ ] ⚠️ Accuracy requirements defined and met:
- [ ] Accuracy metrics chosen (precision, recall, F1, AUC, or domain-specific)
- [ ] Acceptable thresholds defined (based on risk, state of the art, user expectations)
- [ ] Accuracy tested during development and deployment
- [ ] Accuracy monitored continuously post-market
[ ] Robustness ensured:
- [ ] AI tested for resilience to input variations, edge cases, and anomalies
- [ ] AI tested against adversarial attacks (if applicable)
- [ ] Fallback mechanisms implemented (graceful degradation, fail-safe)
- [ ] Error handling and logging
[ ] Cybersecurity measures implemented:
- [ ] AI system and training data protected from unauthorized access
- [ ] Secure software development practices applied
- [ ] Vulnerability management (patching, scanning, threat intelligence)
- [ ] Incident response for cybersecurity breaches
SignalBreak support:
- ✅ Robustness monitoring: Detect AI provider outages and degradation (robustness issues)
- ✅ Security monitoring: Track AI provider security incidents
- 📋 Documentation: Evidence of monitoring and incident response
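A small illustration of continuous accuracy monitoring against a defined acceptance threshold: the sketch computes precision, recall, and F1 from labeled outcomes and flags a breach for escalation. The threshold value is an assumption taken from a hypothetical risk management plan.

```python
# Sketch: post-market accuracy check against a pre-defined threshold (value assumed).
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

F1_THRESHOLD = 0.90   # acceptance criterion from the risk management plan (assumed)

def check_accuracy(tp: int, fp: int, fn: int) -> bool:
    _, _, f1 = precision_recall_f1(tp, fp, fn)
    if f1 < F1_THRESHOLD:
        # Breach: escalate via incident procedures and document corrective action.
        print(f"Accuracy below threshold: F1={f1:.3f} < {F1_THRESHOLD}")
        return False
    return True

check_accuracy(tp=870, fp=60, fn=90)   # F1 ≈ 0.921 -> within threshold
```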
3.8 Conformity Assessment ⚠️
Requirement (Articles 43-44 & Annex VI-VII): High-risk AI systems must undergo conformity assessment before market placement.
What this means:
- Provider demonstrates that AI system meets EU AI Act requirements
- Conformity assessment can be self-assessment (internal control) or third-party assessment (notified body), depending on AI category
- Successful assessment leads to CE marking and EU declaration of conformity
Assessment procedures:
[ ] ⚠️ Determine applicable procedure:
- [ ] Self-assessment (Annex VI): For most high-risk AI (quality management system + technical documentation)
- [ ] Third-party assessment (Annex VII): For high-risk AI in products covered by existing EU legislation requiring third-party conformity assessment (e.g., medical devices, machinery, toys)
[ ] Conduct assessment:
- [ ] Quality management system established (see 3.10 below)
- [ ] Technical documentation prepared (see 3.3 above)
- [ ] EU declaration of conformity drawn up (statement that AI complies)
- [ ] CE marking affixed to AI system or packaging
[ ] Assessment by notified body (if applicable):
- [ ] Engage accredited notified body
- [ ] Submit technical documentation and application
- [ ] Notified body examines compliance
- [ ] Certificate issued if compliant
SignalBreak support:
- 📋 Technical documentation: Evidence packs provide documentation for conformity assessment
- 📋 Compliance evidence: Logs and reports demonstrate risk management, monitoring, incident response
3.9 Registration ⚠️
Requirement (Article 49): Providers of high-risk AI systems must register them in the EU database.
What this means:
- Before placing high-risk AI on the market, Provider registers it in publicly accessible EU database
- Registration includes information about AI system, Provider, and conformity
- Database maintained by European Commission and accessible to public
Registration information:
[ ] ⚠️ Register in EU database before market placement:
- [ ] Provider name and contact
- [ ] AI system name, trade name, and description
- [ ] Intended purpose
- [ ] High-risk category (Annex III classification)
- [ ] Status (on market, withdrawn, recalled)
- [ ] EU declaration of conformity
- [ ] Instructions for use (link or upload)
- [ ] Contact for information requests
SignalBreak support:
- 📋 AI inventory: Provides structured data for EU database registration
- ⚙️ Registration: Provider must complete EU database entry (SignalBreak provides supporting data)
3.10 Quality Management System ⚠️
Requirement (Article 17): Providers of high-risk AI systems must implement a quality management system.
What this means:
- Systematic approach to ensure compliance throughout AI system lifecycle
- Documented policies, procedures, and processes
- Continuous improvement based on monitoring and feedback
Quality management system components:
[ ] ⚠️ QMS established and documented:
- [ ] Compliance strategy (how compliance will be achieved and maintained)
- [ ] Design and development procedures (ensure requirements met)
- [ ] Testing, validation, and verification procedures
- [ ] Post-market monitoring procedures
- [ ] Incident reporting and corrective action procedures
- [ ] Documentation and record-keeping procedures
[ ] QMS implemented and maintained:
- [ ] Policies and procedures communicated to team
- [ ] Staff trained on QMS requirements
- [ ] Compliance monitored and verified (internal audits)
- [ ] Non-conformities addressed with corrective actions
- [ ] Management reviews conducted periodically
SignalBreak support:
- 📋 QMS evidence: Logs and reports demonstrate post-market monitoring and incident management
- ✅ Continuous monitoring: Supports post-market monitoring requirements
3.11 Post-Market Monitoring ⚠️
Requirement (Article 72): Providers must establish and document a post-market monitoring system.
What this means:
- Continuously monitor AI system performance in real-world use
- Collect data on incidents, malfunctions, and user feedback
- Analyze data to identify safety issues, emerging risks, and improvement opportunities
Post-market monitoring system:
[ ] ⚠️ Monitoring plan established:
- [ ] Data sources identified (user reports, logs, incidents, complaints, public sources)
- [ ] Monitoring methods defined (automated monitoring, surveys, audits)
- [ ] Frequency of monitoring (continuous, periodic)
- [ ] Responsibilities assigned
[ ] Monitoring implemented:
- [ ] Data collected and analyzed
- [ ] Trends and patterns identified
- [ ] Risks and incidents escalated
- [ ] Corrective actions taken when needed
[ ] Monitoring documented:
- [ ] Post-market monitoring reports
- [ ] Incident reports and investigations
- [ ] Corrective action plans and results
SignalBreak support:
- ✅ Post-market monitoring: Continuous signal detection supports monitoring obligations
- 📋 Incident tracking: Signal logs provide evidence of post-market monitoring
- ✅ Alerting: Immediate notification of AI incidents
3.12 Serious Incident Reporting ⚠️
Requirement (Article 73): Providers must report serious incidents to market surveillance authorities.
What this means:
- If a high-risk AI system causes or contributes to a serious incident (death, serious harm to health, serious and irreversible disruption of critical infrastructure, serious harm to property or the environment, or an infringement of fundamental rights obligations), the Provider must report it
- Report immediately and no later than 15 days after becoming aware (shorter deadlines apply to the most severe incidents)
- Failure to report can result in significant fines
Incident reporting:
[ ] ⚠️ Serious incident definition understood:
- [ ] Death of a person, or serious harm to a person's health
- [ ] Serious and irreversible disruption of critical infrastructure
- [ ] Serious harm to property or the environment
- [ ] Infringement of obligations under EU law intended to protect fundamental rights
[ ] Incident reporting process established:
- [ ] Incident detection and escalation procedures
- [ ] Responsibility for determining if incident is "serious"
- [ ] Reporting channel to market surveillance authority (national competent authority)
- [ ] Timeline for reporting (immediately upon awareness, formal report within 15 days)
[ ] Incident reports include:
- [ ] AI system identification
- [ ] Description of incident (what happened, when, where, impact)
- [ ] Root cause analysis (if known)
- [ ] Corrective actions taken or planned
- [ ] Contact information for follow-up
SignalBreak support:
- ✅ Incident detection: Signals alert to potential serious incidents involving AI providers
- 📋 Incident documentation: Logs support incident reporting with timeline and details
- ✅ Escalation: Critical signals ensure urgent incidents are escalated immediately
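A trivial sketch of tracking the reporting window from the moment of awareness; it models only the general 15-day deadline noted above, not the shorter deadlines that apply to the most severe incidents.

```python
# Sketch: compute the latest date for the formal serious-incident report.
# Only the general 15-day window is modelled here.
from datetime import date, timedelta

def reporting_deadline(aware_on: date, window_days: int = 15) -> date:
    return aware_on + timedelta(days=window_days)

aware = date(2026, 9, 14)
print(reporting_deadline(aware))   # 2026-09-29 -> latest date to file the formal report
```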
Step 4: Obligations for Deployers of High-Risk AI
If you are a Deployer of high-risk AI (you use high-risk AI systems developed by others), you must comply with the following obligations:
4.1 Use AI in Accordance with Instructions ⚠️
Requirement (Article 26): Deployers must use high-risk AI systems in accordance with instructions provided by the Provider.
What this means:
- Read and understand the instructions for use (user manual)
- Use AI only for intended purpose and within specified conditions
- Comply with Provider's requirements for human oversight, data inputs, maintenance
Checklist:
[ ] ⚠️ Instructions reviewed and understood:
- [ ] Instructions for use obtained from Provider
- [ ] Intended purpose and limitations understood
- [ ] Technical requirements verified (hardware, software, data)
- [ ] Human oversight requirements understood
[ ] AI used as instructed:
- [ ] AI deployed only for intended purpose
- [ ] Operating conditions maintained (environment, data quality, user training)
- [ ] Human oversight implemented as required
- [ ] Maintenance and updates applied as instructed
[ ] Non-compliance documented:
- [ ] If AI used outside instructions, justification documented and risks assessed
- [ ] Deployer may become Provider if substantial modifications made
SignalBreak support:
- ⚙️ Compliance: Deployer must follow instructions; SignalBreak monitors AI provider performance
- 📋 Documentation: Logs show AI usage patterns (evidence of compliant use)
4.2 Human Oversight ⚠️
Requirement (Article 26): Deployers must assign human oversight to natural persons who have the necessary competence, training, and authority.
What this means:
- Deployer designates specific people to oversee high-risk AI
- Overseers trained and competent
- Overseers can intervene, override, or stop AI when needed
Checklist:
[ ] ⚠️ Human overseers assigned:
- [ ] Specific persons identified for each high-risk AI system
- [ ] Roles and responsibilities documented
- [ ] Authority to intervene granted
[ ] Overseers trained and competent:
- [ ] Training on AI system purpose, capabilities, limitations
- [ ] Training on human oversight procedures (when to intervene, how to override)
- [ ] Competence verified and documented
[ ] Oversight procedures implemented:
- [ ] Monitoring procedures (how often, what to check)
- [ ] Intervention procedures (when to override, escalation)
- [ ] Documentation of oversight activities
SignalBreak support:
- ⚙️ Oversight: Deployer must implement; SignalBreak supports overseers monitoring AI providers
- 📋 Audit trail: Logs show when overseers intervened (e.g., escalated incident, switched providers)
4.3 Input Data Monitoring
Requirement (Article 26): Deployers must monitor input data to ensure it is relevant and sufficiently representative.
What this means:
- Input data quality affects AI performance and fairness
- Deployer checks that data fed into AI is appropriate
- Deployer addresses data quality issues (missing data, outliers, bias)
Checklist:
[ ] Input data requirements understood:
- [ ] Data requirements documented by Provider (instructions for use)
- [ ] Data quality standards (accuracy, completeness, freshness, representativeness)
[ ] Data monitoring implemented:
- [ ] Data quality checks (automated or manual)
- [ ] Data drift detection (changes in data distribution over time)
- [ ] Bias monitoring (ensure data represents relevant populations)
[ ] Data issues addressed:
- [ ] Data quality issues escalated and remediated
- [ ] AI usage paused if data quality inadequate
- [ ] Provider informed of persistent data quality issues
SignalBreak support:
- ⚙️ Data monitoring: Deployer must implement; SignalBreak monitors AI provider (not input data quality)
- 📋 Incident tracking: Log AI performance issues potentially caused by data quality
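As a minimal illustration of the data-drift monitoring mentioned above, the sketch below compares the mean and spread of a key input feature against reference statistics documented by the Provider. The reference figures and tolerance are assumptions.

```python
# Sketch: simple input-data drift check against reference statistics (values assumed).
import statistics

REFERENCE = {"mean": 52_000.0, "stdev": 18_000.0}   # e.g. applicant income in the training data
TOLERANCE = 0.25                                     # flag drift beyond 25% deviation (assumed)

def drifted(values: list[float]) -> bool:
    mean, stdev = statistics.fmean(values), statistics.pstdev(values)
    mean_shift = abs(mean - REFERENCE["mean"]) / REFERENCE["mean"]
    spread_shift = abs(stdev - REFERENCE["stdev"]) / REFERENCE["stdev"]
    return mean_shift > TOLERANCE or spread_shift > TOLERANCE

recent_inputs = [31_000, 29_500, 35_200, 33_800, 30_400]
if drifted(recent_inputs):
    # Escalate: pause or re-validate the AI system and inform the Provider.
    print("Input data drift detected -- review before continuing to use the system")
```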
4.4 Logging and Record-Keeping
Requirement (Article 26): Deployers must keep logs generated by high-risk AI systems (to the extent the logs are under the Deployer's control).
What this means:
- High-risk AI automatically generates logs (see 3.4 above)
- Deployer retains logs for appropriate period
- Logs available for incident investigation, audits, and authorities
Checklist:
[ ] Logs retained:
- [ ] Log retention period defined (consider EU AI Act, data protection, sector requirements)
- [ ] Logs stored securely (integrity, confidentiality)
- [ ] Logs accessible for investigation and audit
[ ] Logs reviewed:
- [ ] Periodic log review (identify anomalies, trends, incidents)
- [ ] Logs analyzed after incidents
- [ ] Findings inform risk management and improvements
SignalBreak support:
- ✅ Logging: SignalBreak logs AI provider usage, incidents, and responses (audit trail)
- 📋 Compliance evidence: Logs support record-keeping obligations
4.5 Incident Reporting to Provider and Authorities ⚠️
Requirement (Article 26): Deployers must inform Provider and market surveillance authority of serious incidents.
What this means:
- If Deployer becomes aware of serious incident involving high-risk AI, report it
- Report to Provider immediately (so Provider can investigate and report to authorities)
- Report directly to the national market surveillance authority if the Provider cannot be reached
Checklist:
[ ] ⚠️ Incident reporting process established:
- [ ] Procedure to detect and escalate serious incidents
- [ ] Responsibility for determining if incident is "serious"
- [ ] Contact information for Provider incident reporting
- [ ] Contact information for national market surveillance authority
[ ] Incidents reported:
- [ ] Provider notified immediately of serious incidents
- [ ] Authority notified directly if the Provider cannot be reached
- [ ] Incident details documented (see 3.12 for required information)
SignalBreak support:
- ✅ Incident detection: Alerts notify Deployer of potential serious incidents
- 📋 Incident documentation: Logs support incident reporting with timeline and details
- ✅ Escalation: Critical signals ensure urgent incidents escalated to Provider/authorities
4.6 Data Protection Impact Assessment (DPIA)
Requirement (Article 26(9)): Before deploying high-risk AI, Deployers must conduct a DPIA if required under the GDPR.
What this means:
- High-risk AI often involves processing personal data at scale or for sensitive purposes
- GDPR requires DPIA for high-risk data processing
- EU AI Act explicitly reminds Deployers of this GDPR obligation
Checklist:
[ ] DPIA requirement assessed:
- [ ] Determine if AI processes personal data
- [ ] Determine if processing is "high-risk" under GDPR (automated decision-making, large-scale sensitive data, systematic monitoring, vulnerable populations)
- [ ] Consult GDPR Article 35 and national data protection authority guidance
[ ] DPIA conducted if required:
- [ ] Description of processing and purposes
- [ ] Assessment of necessity and proportionality
- [ ] Risks to data subjects (privacy, rights)
- [ ] Mitigation measures
- [ ] Data protection officer consulted (if applicable)
- [ ] Supervisory authority consulted if residual high risk
SignalBreak support:
- ⚙️ DPIA: Deployer must conduct; SignalBreak documents AI providers used (input to DPIA)
- 📋 Data flow mapping: Reports show which AI providers process personal data
4.7 Transparency to Affected Persons
Requirement (Article 26): Deployers must inform natural persons that they are subject to the use of a high-risk AI system.
What this means:
- People affected by high-risk AI decisions have a right to know
- Deployers must disclose AI use in clear, accessible manner
- Disclosure timing: before or at the time AI is used
Checklist:
[ ] Disclosure implemented:
- [ ] Identify contexts where disclosure required (job applications, loan applications, etc.)
- [ ] Disclosure method chosen (application forms, notices, terms and conditions, privacy notices)
- [ ] Disclosure content: AI use, purpose, how to exercise rights (human review, appeal)
[ ] Disclosure communicated:
- [ ] Disclosure provided before or during AI use
- [ ] Disclosure clear and accessible (plain language, multiple languages if needed)
- [ ] Disclosure documented (evidence of compliance)
[ ] Rights enabled:
- [ ] Affected persons can request human review of AI decisions
- [ ] Appeal or reconsideration mechanisms available
- [ ] Explanation of AI decision provided if requested
SignalBreak support:
- ⚙️ Transparency: Deployer must implement disclosure; SignalBreak documents AI providers used (input to disclosure)
- 📋 Audit trail: Logs support demonstration of transparency compliance
Step 5: Obligations for General-Purpose AI (GPAI) Providers
If you develop or provide General-Purpose AI models (e.g., OpenAI GPT-4, Anthropic Claude, Google Gemini), you have specific obligations under the EU AI Act.
What is General-Purpose AI?
Definition (Article 3): An AI model trained on large amounts of data that displays significant generality, is capable of performing a wide range of distinct tasks, and can be integrated into a variety of downstream AI systems or applications.
Examples:
- Large language models (OpenAI GPT-4, Anthropic Claude, Google Gemini, Meta Llama)
- Multimodal models (GPT-4 Vision, Gemini Pro Vision)
- Foundation models for image generation (Stable Diffusion, DALL-E, Midjourney)
Two tiers of GPAI obligations:
- Standard GPAI: All general-purpose AI models
- GPAI with systemic risk: Models with high impact capabilities (e.g., >10^25 FLOPs training compute)
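To see how the 10^25 FLOP presumption can be sanity-checked, a common back-of-the-envelope estimate puts transformer training compute at roughly 6 × parameters × training tokens. The sketch below uses that heuristic with illustrative figures, not any provider's actual numbers.

```python
# Back-of-the-envelope check against the 10^25 FLOP systemic-risk presumption.
# The 6 * params * tokens heuristic and the example figures are assumptions.
SYSTEMIC_RISK_THRESHOLD = 1e25

def training_flops(parameters: float, training_tokens: float) -> float:
    return 6 * parameters * training_tokens

flops = training_flops(parameters=7e10, training_tokens=1.5e13)   # 70B params, 15T tokens
print(f"{flops:.2e}")                      # 6.30e+24 -> below the presumption threshold
print(flops > SYSTEMIC_RISK_THRESHOLD)     # False
```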
5.1 Obligations for All GPAI Providers
Requirements (Article 53):
[ ] Technical documentation:
- [ ] Model architecture, parameters, training data summary
- [ ] Capabilities, limitations, and performance metrics
- [ ] Energy consumption during training (if measurable)
[ ] Information to downstream providers:
- [ ] How to integrate model into AI systems
- [ ] Instructions for use
- [ ] Known risks and mitigation recommendations
[ ] Copyright compliance:
- [ ] Policy for compliance with EU copyright law
- [ ] Summary of training data sources (sufficiently detailed for copyright assessment)
SignalBreak relevance:
- 📋 GPAI monitoring: Organizations using GPAI (OpenAI, Anthropic, etc.) can use SignalBreak to monitor GPAI provider performance and compliance
5.2 Additional Obligations for GPAI with Systemic Risk
When applicable: GPAI models with high impact capabilities (presumed if >10^25 FLOPs, or explicitly designated by AI Office based on capability assessments).
Requirements (Article 55):
[ ] ⚠️ Model evaluation:
- [ ] Adversarial testing (red-teaming) for risks
- [ ] Evaluation of systemic risks (security, fundamental rights impacts)
[ ] Risk mitigation:
- [ ] Implement measures to mitigate systemic risks
- [ ] Cybersecurity protections (model security, training data integrity)
[ ] Serious incident reporting:
- [ ] Report serious incidents to AI Office
- [ ] Timeline: Report without undue delay upon becoming aware, including possible corrective measures
[ ] Transparency:
- [ ] Publicly accessible summary of model capabilities, limitations, risks
- [ ] Information for downstream providers about systemic risk mitigation
SignalBreak relevance:
- ✅ GPAI monitoring: Organizations using high-risk GPAI can monitor for incidents and model updates via SignalBreak
Step 6: Prepare for Enforcement
Enforcement Authorities
- European AI Office: Central EU authority for GPAI and cross-border enforcement
- National competent authorities: Each EU member state designates authorities for market surveillance
- Notified bodies: Accredited organizations conducting third-party conformity assessments (for certain high-risk AI)
Penalties ⚠️
Fines for non-compliance (Article 99):
| Infringement | Maximum Fine |
|---|---|
| Prohibited AI | €35 million or 7% of global annual turnover (whichever is higher) |
| Non-compliance with high-risk AI obligations | €15 million or 3% of global annual turnover (whichever is higher) |
| Inaccurate, incomplete, or misleading information to authorities | €7.5 million or 1% of global annual turnover (whichever is higher) |
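The "whichever is higher" rule means the applicable ceiling is the larger of the fixed amount and the turnover percentage, as in this small sketch (the turnover figure is illustrative):

```python
# Sketch: maximum fine ceiling -- the higher of the fixed amount and the
# percentage of global annual turnover. The turnover figure is illustrative.
def max_fine(fixed_eur: float, pct_of_turnover: float, turnover_eur: float) -> float:
    return max(fixed_eur, pct_of_turnover * turnover_eur)

turnover = 2_000_000_000                       # €2bn global annual turnover (example)
print(max_fine(35_000_000, 0.07, turnover))    # prohibited AI: €140,000,000
print(max_fine(15_000_000, 0.03, turnover))    # high-risk obligations: €60,000,000
```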
Factors affecting fine amount:
- Nature, gravity, and duration of infringement
- Intentional or negligent character
- Actions taken to mitigate damage
- Degree of cooperation with authorities
- Previous infringements
- SME status (fines may be lower for SMEs)
Compliance Checklist for Enforcement Readiness
[ ] Documentation prepared:
- [ ] Technical documentation for high-risk AI (Annex IV)
- [ ] EU declaration of conformity
- [ ] Instructions for use
- [ ] Post-market monitoring reports
- [ ] Incident reports and corrective actions
- [ ] Quality management system documentation
[ ] Internal audit conducted:
- [ ] Self-assessment against EU AI Act requirements
- [ ] Gaps identified and remediated
- [ ] Audit report available for authorities
[ ] Stakeholder communication:
- [ ] Deployers informed of AI Act obligations
- [ ] Affected persons informed about AI use
- [ ] Authorities notified of serious incidents
[ ] Legal counsel engaged:
- [ ] Legal review of compliance status
- [ ] Strategy for responding to authority inquiries
- [ ] Incident response plan includes legal escalation
SignalBreak support:
- 📋 Enforcement evidence: Evidence packs compile documentation for authority requests
- ✅ Continuous compliance: Monitoring and logging demonstrate ongoing compliance efforts
Implementation Roadmap
Phase 1: Assessment (Months 1-3)
Goal: Understand obligations and current compliance status.
Actions:
- Determine your role (Provider, Deployer, both)
- Create AI inventory and classify by risk level
- Identify high-risk AI systems requiring compliance
- Conduct gap analysis against EU AI Act requirements
- Deploy SignalBreak for AI monitoring
Deliverables:
- AI inventory with risk classifications
- Gap analysis report
- Compliance roadmap and budget
- SignalBreak monitoring configured
Phase 2: Compliance Implementation (Months 4-12)
Goal: Implement required controls and documentation for high-risk AI.
Actions (Providers):
- Establish risk management system
- Implement data governance practices
- Prepare technical documentation (Annex IV)
- Implement logging and record-keeping
- Establish quality management system
- Conduct conformity assessment (self or third-party)
- Register in EU database
- Establish post-market monitoring and incident reporting
Actions (Deployers):
- Review instructions for use from Providers
- Assign and train human overseers
- Implement data monitoring
- Establish logging and record-keeping
- Conduct DPIA if required
- Implement transparency disclosures
- Establish incident reporting to Providers/authorities
Deliverables:
- Risk management documentation
- Technical documentation
- CE marking and EU declaration of conformity (Providers)
- EU database registration (Providers)
- Human oversight procedures (Deployers)
- Transparency disclosures (Deployers)
Phase 3: Monitoring and Maintenance (Months 13+)
Goal: Maintain compliance through continuous monitoring and improvement.
Actions:
- Monitor AI systems continuously via SignalBreak
- Conduct periodic audits (internal and external)
- Report serious incidents to authorities
- Update documentation when AI systems change
- Respond to authority inquiries and inspections
- Continuous improvement based on lessons learned
Deliverables:
- Post-market monitoring reports
- Incident reports and corrective actions
- Updated technical documentation
- Audit reports
- Continuous compliance evidence via SignalBreak
SignalBreak Features Supporting EU AI Act Compliance
| EU AI Act Requirement | SignalBreak Feature | How It Helps |
|---|---|---|
| AI inventory (all roles) | Scenarios | Automatic, continuously updated inventory of AI systems |
| Risk classification | Scenario tags | Classify and filter AI by risk level (prohibited, high, limited, minimal) |
| Technical documentation (Provider) | Evidence packs & audit trails | Compile documentation for Annex IV compliance |
| Logging and record-keeping (Provider, Deployer) | Signal logs | Complete audit trail of AI usage, incidents, and responses |
| Post-market monitoring (Provider) | Continuous monitoring & alerts | 24/7 monitoring of AI systems and third-party providers |
| Serious incident reporting (Provider, Deployer) | Critical signal alerts | Immediate notification of potential serious incidents |
| Human oversight (Provider, Deployer) | Alert routing & dashboards | Support human overseers monitoring AI |
| Transparency (Deployer) | AI inventory & provider tracking | Document which AI providers are used (input to transparency disclosures) |
| GPAI monitoring (GPAI users) | Provider monitoring | Track GPAI provider (OpenAI, Anthropic, etc.) performance and incidents |
| Enforcement readiness (all roles) | Evidence packs | Compile documentation for authority requests and audits |
Next Steps
- Determine your role: Are you a Provider, Deployer, or both under the EU AI Act?
- Classify your AI systems: Use this checklist to classify AI by risk level
- Assess compliance gaps: Identify which obligations are not yet met
- Deploy SignalBreak: Start AI monitoring and documentation immediately
- Implement controls: Use this checklist to guide compliance implementation
- Engage legal counsel: Ensure compliance strategy is sound and defensible
- Track timeline: Key deadlines approaching (prohibited AI: Feb 2025, high-risk AI: Aug 2026)
Related Documentation
- ISO 42001 Checklist - AI management system certification
- NIST AI RMF Checklist - U.S. AI risk management framework
- Government AI Governance - Public sector AI governance
- Financial Services AI Governance - Banking AI compliance
- Healthcare AI Governance - Clinical AI regulation
Support
- Documentation: Help Center
- Email: support@signal-break.com
- EU AI Act consulting: euaiact@signal-break.com