EU Artificial Intelligence Act Guide
What is the EU AI Act?
The EU Artificial Intelligence Act (AI Act) is the world's first comprehensive legal framework for artificial intelligence, establishing harmonized rules for the development, marketing, and use of AI systems in the European Union.
Official Name: Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)
Published in the Official Journal: July 12, 2024 (entered into force August 1, 2024)
Enacted By: European Parliament and Council of the European Union
Framework Type: Mandatory legal regulation (not voluntary)
Geographic Scope:
- All AI systems placed on the EU market or put into service in the EU
- AI system outputs used in the EU (extraterritorial effect)
- Applies regardless of where the provider/deployer is established
Why the EU AI Act Matters for AI Governance
1. Mandatory Compliance (Not Voluntary)
Unlike NIST AI RMF or ISO 42001, the EU AI Act is legally binding:
| Framework | Type | Penalties for Non-Compliance |
|---|---|---|
| EU AI Act | Legal regulation | Up to €35 million or 7% of global annual turnover (whichever is higher) |
| ISO 42001 | Voluntary standard | No legal penalties (reputational risk only) |
| NIST AI RMF | Voluntary framework | No legal penalties (federal procurement risk only) |
This matters because:
- Fines at the highest tier for prohibited AI uses (Article 5); some member states may add criminal penalties under national law
- Administrative fines up to €15M or 3% of turnover for high-risk system violations
- Directors' personal liability in some member states
- Market surveillance authorities can ban non-compliant AI systems
Penalties by Violation Type:
| Violation | Maximum Fine | Example |
|---|---|---|
| Prohibited AI (Article 5) | €35M or 7% turnover | Social scoring, subliminal manipulation |
| High-Risk non-compliance (Articles 9-15) | €15M or 3% turnover | Inadequate risk management, missing documentation |
| Supplying incorrect, incomplete, or misleading information to authorities | €7.5M or 1% turnover | Misleading responses to a notified body or market surveillance authority |
2. Risk-Based Regulatory Approach
The EU AI Act categorizes AI systems into 4 risk levels with escalating obligations:
Prohibited AI → High-Risk → Limited-Risk → Minimal-Risk
(Banned)        (Strict)        (Transparency)        (No obligations)
Risk-Based Philosophy:
- Higher risk = stricter requirements
- Most AI systems fall into Limited/Minimal (light-touch)
- Only ~5-10% of AI systems are High-Risk (heavy regulation)
Comparison to Other Frameworks:
| Framework | Risk Approach | Obligations |
|---|---|---|
| EU AI Act | 4-tier (Prohibited/High/Limited/Minimal) | Tier-specific legal requirements |
| NIST AI RMF | Continuous risk scale | Organization defines risk tolerance |
| ISO 42001 | Risk-based management system | All systems managed proportionally |
3. Extraterritorial Application
The EU AI Act applies beyond EU borders:
You're covered if:
- ✅ You provide AI systems placed on the EU market
- ✅ You deploy AI systems in the EU (regardless of where you're based)
- ✅ Your AI outputs are used in the EU (even if system hosted elsewhere)
Example Scenarios:
| Scenario | Covered by EU AI Act? | Why? |
|---|---|---|
| US company provides AI chatbot to EU customers | ✅ Yes | System placed on EU market |
| UK company uses AI recruitment tool for UK staff only | ❌ No (unless the tool is also used for EU-based roles or its outputs are used in the EU) | No EU nexus |
| Indian company's AI content moderation used by EU social media platform | ✅ Yes | AI output used in EU |
| Japanese company develops AI for robots sold globally including EU | ✅ Yes | Placed on EU market |
Practical Impact:
- Non-EU providers must comply if they serve EU customers
- Providers' responsibilities extend to distributors, importers, deployers
- Third-party liability: Using non-compliant AI can expose deployers to fines
4. Phased Implementation Timeline
The EU AI Act has staggered effective dates (not all at once):
| Effective Date | What Takes Effect | Affected Parties |
|---|---|---|
| February 2, 2025 | Prohibited AI practices (Article 5) | All providers/deployers |
| August 2, 2025 | General Purpose AI (GPAI) obligations (Article 51-56) | Foundation model providers (OpenAI, Anthropic, etc.) |
| August 2, 2026 | High-risk AI system requirements (Titles III-IV) | High-risk system providers/deployers |
| August 2, 2027 | Full regulation in force | All provisions applicable |
Grace Periods (counted from entry into force on August 1, 2024):
- 6 months: Prohibited AI ban applies from February 2, 2025
- 24 months: High-risk systems compliance required by August 2, 2026
- 36 months: Full applicability by August 2, 2027
Current Status (January 2026):
- 🔴 Prohibited AI ban: ACTIVE (applicable since February 2, 2025)
- 🟠 GPAI obligations: ACTIVE (applicable since August 2, 2025)
- 🟡 High-risk requirements: 7 months until deadline (Aug 2, 2026)
How SignalBreak Maps to EU AI Act
SignalBreak provides automated evidence generation for key EU AI Act articles. Your workflows, scenarios, and provider monitoring support compliance with:
Article Coverage Summary
| EU AI Act Provision | SignalBreak Evidence | Status |
|---|---|---|
| Article 5: Prohibited AI | Workflow audit (no prohibited use cases detected) | 🟢 Compliant |
| Article 6: High-Risk Classification | Workflow categorization (requires manual risk classification) | 🔴 Gap |
| Article 9: Risk Management | Scenario analysis, risk scoring | 🟡 Partial |
| Article 10: Data Governance | Provider data policies (2/4 providers complete) | 🟡 Partial |
| Article 11: Technical Documentation | Workflow descriptions, provider bindings | 🟢 Compliant |
| Article 12: Record-Keeping | Audit logs (7+ entries/month) | 🟢 Compliant |
| Article 13: Transparency | Chatbot workflow identification | 🟢 Compliant |
| Article 14: Human Oversight | Human-in-loop flags (3/6 workflows) | 🟡 Partial |
| Article 52: User Disclosure | Chatbot transparency (requires implementation) | 🟡 Partial |
| Article 72: Post-Market Monitoring | Provider health tracking (5,000+ signals) | 🟢 Compliant |
Overall Readiness: Typically 50-70% for organizations with SignalBreak (varies by risk classification)
EU AI Act Risk Classification
The 4-Tier System
Tier 1: Prohibited AI (Article 5) — BANNED
Definition: AI systems that pose unacceptable risks to fundamental rights and safety.
Prohibited Practices:
| Prohibited Use | Example | Penalty |
|---|---|---|
| Subliminal manipulation | AI that manipulates behavior to cause harm | €35M or 7% turnover |
| Social scoring by authorities | Government assigns social credit scores | €35M or 7% turnover |
| Real-time biometric identification (public spaces) | Live facial recognition for mass surveillance | €35M or 7% turnover |
| Exploiting vulnerabilities | AI targeting children's psychological weaknesses | €35M or 7% turnover |
SignalBreak Evidence: ✅ Compliant — Workflow audit shows no prohibited AI use cases in typical implementations.
Self-Certification: Review all workflows against Article 5 prohibited practices. If none apply, document compliance.
Tier 2: High-Risk AI (Articles 6-29) — STRICT REGULATION
Definition: AI systems listed in Annex III or used as safety components of regulated products.
Annex III High-Risk Categories:
| Category | Example Use Cases | Typical Workflows |
|---|---|---|
| Biometric identification | Facial recognition for access control | Security access workflows |
| Critical infrastructure | AI managing energy grids, water systems | Industrial control workflows |
| Education/training | AI grading, student assessment | Educational AI systems |
| Employment | AI recruitment, performance evaluation | HR AI workflows |
| Essential services | AI credit scoring, benefit eligibility | Financial/government services |
| Law enforcement | Predictive policing, crime risk assessment | Police/judicial AI |
| Migration/asylum | AI visa decisions, border control | Immigration systems |
| Justice | AI legal research affecting case outcomes | Legal tech AI |
High-Risk Obligations (if applicable):
| Obligation | Article | Description |
|---|---|---|
| Risk management system | Art 9 | Continuous risk assessment + mitigation |
| Data governance | Art 10 | Training data quality, provenance, bias checks |
| Technical documentation | Art 11 | System specifications, datasets, test results |
| Record-keeping | Art 12 | Automated logs for audit (min 6 months) |
| Transparency | Art 13 | User disclosure of AI use |
| Human oversight | Art 14 | Human-in-loop for high-risk decisions |
| Accuracy/robustness | Art 15 | Performance metrics, cybersecurity |
SignalBreak Evidence for High-Risk: 🔴 Critical gap — Most organizations using SignalBreak haven't formally classified workflows as High-Risk, a classification step required by Article 6.
Action Required:
- Review all workflows against Annex III categories
- Document High-Risk classification (or justify why not High-Risk)
- Implement additional obligations if High-Risk identified
Tier 3: Limited-Risk AI (Article 52) — TRANSPARENCY ONLY
Definition: AI systems that interact with humans or generate/manipulate content.
Limited-Risk Categories:
| Type | Example | Transparency Requirement |
|---|---|---|
| Chatbots | Customer service AI, virtual assistants | Disclose AI use unless obvious |
| Emotion recognition | AI detecting user emotions | Inform users before use |
| Biometric categorization | AI inferring demographics from photos | User notification required |
| Deepfakes | AI-generated images/video/audio | Watermark + disclosure |
Limited-Risk Obligations:
- User disclosure: "You are interacting with an AI system"
- Watermarking: For AI-generated content (images, video, audio)
- Detect/label: AI-generated content from your systems
SignalBreak Evidence: 🟡 Partial — Chatbot workflows identified (2/6 typical implementations), but disclosure mechanisms not fully implemented.
Action Required: Implement user disclosure for chatbot workflows:
- Front-end banner: "This conversation is powered by AI"
- Terms of service: Mention AI use
- Opt-out mechanism (where feasible)
Tier 4: Minimal-Risk AI — NO OBLIGATIONS
Definition: All other AI systems not in Prohibited/High/Limited categories.
Examples:
- AI spam filters
- AI inventory management
- AI recommendation engines (non-manipulative)
- AI translation tools
- AI data analysis (internal use)
Obligations: ✅ None — Minimal-risk AI can be developed and deployed without EU AI Act compliance (but GDPR, sector laws still apply).
SignalBreak Evidence: ✅ Compliant — Most workflows (4-5 of 6) typically fall into Minimal-Risk category.
Best Practice: Document why each workflow is Minimal-Risk (proves you've conducted risk assessment per Article 6).
EU AI Act Article-by-Article Guide
Title II: Prohibited AI Practices
Article 5: Prohibited Artificial Intelligence Practices
Requirement: Certain AI practices are completely banned due to unacceptable risks to fundamental rights.
Prohibited Practices (Detailed):
5.1(a): Subliminal Manipulation
- AI that deploys subliminal techniques to materially distort behavior
- Causing physical/psychological harm
- Example: Hidden audio cues in AI-generated ads to manipulate purchasing
5.1(b): Exploiting Vulnerabilities
- AI targeting vulnerable groups (children, disabled, elderly)
- Example: AI chatbot designed to extract money from elderly users
5.1(c): Social Scoring by Authorities
- Public authority assigns social scores based on AI analysis
- Score affects access to services/benefits
- Example: Government AI scoring citizens' trustworthiness
5.1(d): Real-Time Biometric Identification in Public Spaces
- Live remote biometric identification (e.g., facial recognition) in publicly accessible spaces for law enforcement purposes
- Exceptions: Targeted search for missing persons or victims, prevention of imminent terrorist threats (judicial authorization required)
- Example: Police scanning crowds with live facial recognition without judicial authorization
SignalBreak Evidence: ✅ Compliant — No workflows match prohibited practices.
Self-Certification Checklist:
| Question | Your Answer | If "Yes" → Violation |
|---|---|---|
| Does any AI manipulate users below conscious awareness? | ❌ No | |
| Does any AI target vulnerable groups' weaknesses? | ❌ No | |
| Does any AI assign social scores affecting rights? | ❌ No | |
| Does any AI perform real-time biometric identification in public? | ❌ No |
If all "No" → Compliant with Article 5
Title III: High-Risk AI Systems
Article 6: Classification as High-Risk
Requirement: Determine whether each AI system is High-Risk based on Annex III categories or product safety laws.
Classification Methodology:
Step 1: Check if AI system is listed in Annex III
- See "Annex III High-Risk Categories" table above
- If listed → High-Risk (unless exception applies)
Step 2: Check if AI is a safety component of regulated product
- Medical devices (MDR, IVDR)
- Machinery (Machinery Regulation)
- Toys, aviation, automotive, etc.
- If safety-critical → High-Risk
Step 3: Apply exceptions (Article 6.3)
- If AI performs narrow procedural task (data formatting, not decision-making) → Not High-Risk
- If AI improves human decisions (not replacing) and low-risk → Not High-Risk
SignalBreak Evidence: 🔴 Non-Compliant — 0 of 6 workflows have formal High-Risk classification in typical implementations.
Gap: 100% of systems missing mandatory risk classification.
Action Required (within 30 days):
- Create Risk Classification Register:
| Workflow | Annex III Category? | Safety Component? | Exception Applies? | Classification | Justification |
|---|---|---|---|---|---|
| Customer Support Chatbot | ❌ No | ❌ No | N/A | Minimal-Risk | Internal customer service, no decision-making affecting rights |
| Email Classifier | ❌ No | ❌ No | N/A | Minimal-Risk | Data routing, no fundamental rights impact |
| Recruitment Screening | ✅ Yes (Employment) | ❌ No | ❌ No (replaces human screening) | High-Risk | Affects employment decisions per Annex III |
| Credit Scoring | ✅ Yes (Essential Services) | ❌ No | ❌ No (determines credit eligibility) | High-Risk | Affects access to financial services |
| Code Review Agent | ❌ No | ❌ No | N/A | Minimal-Risk | Assists developers, no critical decisions |
| Internal Chatbot | ❌ No | ❌ No | N/A | Minimal-Risk | Internal use, no external impact |
- Document justification for each classification
- Implement High-Risk obligations for any identified High-Risk systems (Articles 9-15)
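For teams that keep the register in version control, a minimal sketch of a first-pass classifier following the three steps above (the category labels, function name, and simplified exception logic are illustrative; legal review of each classification is still required):

```python
# Simplified Annex III category labels (illustrative, not the official wording)
ANNEX_III_CATEGORIES = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def classify_workflow(name, annex_iii_category=None, safety_component=False,
                      narrow_procedural_task=False):
    """First-pass Article 6 classification; document the justification either way."""
    if annex_iii_category in ANNEX_III_CATEGORIES or safety_component:
        if narrow_procedural_task:
            return name, "Not High-Risk (Article 6(3) exception claimed; document why)"
        return name, "High-Risk (Articles 9-15 obligations apply)"
    return name, "Minimal- or Limited-Risk (record justification in the register)"

print(classify_workflow("Recruitment Screening", annex_iii_category="employment"))
print(classify_workflow("Email Classifier"))
```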
Penalty for non-classification: €15M or 3% of global turnover (failure to comply with High-Risk obligations)
Article 9: Risk Management System
Requirement: High-Risk AI systems must have a continuous risk management process throughout their lifecycle.
Risk Management Obligations:
9.2(a): Risk Identification
- Identify known/foreseeable risks
- Includes risks to health, safety, fundamental rights
- Example: Bias in recruitment AI discriminating against protected groups
9.2(b): Risk Estimation and Evaluation
- Assess likelihood and severity
- Use documented methodology
- Example: Probability of false positive × Impact on candidate
9.2(c): Risk Mitigation
- Eliminate or reduce risks to acceptable level
- Implement safeguards (human oversight, fallbacks)
- Example: Add human review for borderline candidates
9.2(d): Residual Risk Assessment
- Evaluate remaining risks after mitigation
- Provide information to deployers
- Example: Disclose known failure modes to HR team
SignalBreak Evidence: 🟡 Partial — 6 of 6 workflows have risk assessment via criticality levels (Critical, High, Medium, Low), but High-Risk AI-specific risk management incomplete.
Gap:
- Risk mitigation controls incomplete (0 of 6 workflows have fallback mechanisms in typical state)
- Residual risk documentation missing
- No formal risk management process document
Action Required: For each High-Risk workflow:
- Create Risk Register:
| Risk ID | Risk Description | Likelihood | Severity | Impact on Rights | Mitigation | Residual Risk | Owner |
|---|---|---|---|---|---|---|---|
| RR-001 | Recruitment AI bias against women | Medium | High | Discrimination (Charter Article 21) | Bias audit quarterly, human review all rejections | Low (monitored) | HR Director |
| RC-001 | Credit scoring false negatives | Low | Critical | Access to financial services | Second AI model review, human appeal process | Medium (acceptable) | CRO |
- Document risk management process (procedure for identifying, assessing, mitigating risks)
- Assign risk owner (accountable for monitoring and mitigation)
- Review quarterly (continuous risk management)
Audit Readiness: Risk register demonstrates compliance with Article 9 obligations.
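If you maintain the risk register as structured data rather than a spreadsheet, a minimal sketch of one entry mirroring the columns above (field names and the simple escalation rule are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One row of the Article 9 risk register (field names are illustrative)."""
    risk_id: str
    description: str
    likelihood: str      # Low / Medium / High
    severity: str        # Low / Medium / High / Critical
    rights_impact: str   # affected fundamental right, if any
    mitigation: str
    residual_risk: str   # remaining risk after mitigation
    owner: str

register = [
    Risk("RR-001", "Recruitment AI bias against women", "Medium", "High",
         "Non-discrimination (Charter Article 21)",
         "Quarterly bias audit; human review of all rejections",
         "Low (monitored)", "HR Director"),
]

# Flag entries whose residual risk is still above an acceptable level
for risk in register:
    if not risk.residual_risk.startswith(("Low", "Medium")):
        print(f"{risk.risk_id}: residual risk '{risk.residual_risk}' needs escalation to {risk.owner}")
```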
Article 10: Data and Data Governance
Requirement: Training, validation, and testing datasets must meet data quality criteria and be managed with appropriate governance.
Data Governance Obligations:
10.2: Design Choices
- Datasets appropriate for intended purpose
- Representative of use cases
- Relevant, error-free (to best knowledge)
10.3: Data Properties
- Examine for biases
- Identify gaps/shortcomings
- Determine suitability despite imperfections
10.4: Data Processing
- Appropriate measures for data quality:
- Relevance
- Representativeness
- Accuracy
- Completeness
- Consistency
10.5: Personal Data
- GDPR compliance for personal data processing
- Lawful basis, purpose limitation, data minimization
- Special categories of data (race, health, etc.) → explicit consent/legal basis
SignalBreak Evidence: 🟡 Partial — 2 of 4 providers have complete data governance documentation (OpenAI, Anthropic have public data policies; smaller providers may lack transparency).
Gap: 50% of providers missing:
- Data retention policies
- Data classification (PII handling)
- GDPR compliance attestations
- Training data provenance
Action Required: For each provider:
- Request Data Governance Documentation:
| Provider | Data Sheet | Training Data | GDPR Compliance | Retention Policy | SignalBreak Has? |
|---|---|---|---|---|---|
| OpenAI | ✅ Available | ✅ Public (filtered internet) | ✅ SOC 2, DPA | ✅ 30-day API logs | Yes |
| Anthropic | ✅ Available | ✅ Public + licensed | ✅ SOC 2, DPA | ✅ Configurable | Yes |
| Ollama (self-hosted) | ❌ N/A (self-hosted) | ⚠️ User-provided | ⚠️ User responsibility | ⚠️ User-controlled | Partial |
| Google Vertex AI | ✅ Available | ✅ Google datasets | ✅ ISO 27001, 27701 | ✅ Configurable | Yes |
For High-Risk AI using third-party models:
- Request model card (dataset description, known biases, performance metrics)
- Conduct bias audit (test for disparate impact on protected groups)
- Document data quality assessment
For self-hosted/fine-tuned models:
- Maintain dataset documentation (source, size, demographics)
- Conduct bias analysis (check for underrepresentation)
- Implement data quality monitoring (detect drift, label errors)
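A minimal sketch of the disparate-impact part of a bias audit, assuming you can label test outcomes by protected group; the 80% (four-fifths) threshold is a common statistical convention, not a figure from the AI Act:

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, selected: bool). Returns selection rate per group."""
    selected, total = Counter(), Counter()
    for group, was_selected in outcomes:
        total[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / total[g] for g in total}

def disparate_impact_ratios(outcomes, reference_group):
    """Ratio of each group's selection rate to the reference group's rate."""
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Illustrative data: (gender, shortlisted?)
outcomes = [("male", True)] * 45 + [("male", False)] * 55 + \
           [("female", True)] * 30 + [("female", False)] * 70

for group, ratio in disparate_impact_ratios(outcomes, "male").items():
    flag = "REVIEW" if ratio < 0.8 else "ok"   # four-fifths rule
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```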
Estimated effort:
- Provider documentation: 2-4 hours per provider (review existing docs)
- Bias audit: 1-2 weeks per High-Risk workflow (requires statistical analysis)
Article 11: Technical Documentation
Requirement: High-Risk AI systems must have comprehensive technical documentation demonstrating compliance.
Documentation Requirements:
| Section | Required Content | SignalBreak Evidence |
|---|---|---|
| General description | System purpose, intended users, deployment context | ✅ Workflow descriptions (6/6) |
| Design specifications | Architecture, algorithms, data flow | 🟡 Provider documentation (varies) |
| Datasets | Training/validation/test data details | 🟡 Provider data sheets (2/4 complete) |
| Risk management | Risk register, mitigation measures | ❌ Requires Article 9 implementation |
| Performance metrics | Accuracy, precision, recall, fairness | ❌ Workflow-level testing needed |
| Human oversight | Oversight measures, capabilities, limitations | 🟡 Human-in-loop flags (3/6 workflows) |
| Cybersecurity | Security measures, vulnerability assessments | 🟡 Provider SOC 2 attestations |
| Conformity assessment | Test reports, certificates (if third-party assessed) | ❌ Post-compliance only |
SignalBreak Evidence: 🟡 Partial — Basic documentation (workflow descriptions, business context) exists, but High-Risk AI-specific technical documentation missing.
Gap:
- No formal technical documentation package
- Performance metrics not tracked
- Conformity assessment not conducted
Action Required: For each High-Risk workflow:
Create Technical Documentation Package (single PDF/document per workflow):
- Section 1: General description (use SignalBreak workflow description)
- Section 2: Design specifications (provider model card + your integration architecture)
- Section 3: Datasets (provider data sheet OR your training data documentation)
- Section 4: Risk management (link to Article 9 risk register)
- Section 5: Performance metrics (accuracy test results, bias audit reports)
- Section 6: Human oversight (describe human-in-loop procedures)
- Section 7: Cybersecurity (provider SOC 2 + your endpoint security)
- Section 8: Conformity assessment (if third-party assessed)
Store securely (must be available to national authorities for 10 years after the system is placed on the market or put into service)
Update annually or when system changes materially
Template: Download EU AI Act Technical Documentation Template from EC website: https://ec.europa.eu/digital-strategy/our-policies/european-approach-artificial-intelligence_en
Estimated effort:
- Initial creation: 2-4 weeks per High-Risk workflow (substantial documentation)
- Annual update: 1-2 days per workflow
Article 12: Record-Keeping (Logging)
Requirement: High-Risk AI systems must have automatic logging capabilities to enable traceability.
Logging Requirements:
12.1: Logging Capabilities
- Logs automatically generated and maintained
- Ensure traceability of AI system functioning
- Logging level appropriate to intended purpose
12.2: Minimum Retention
- Logs retained for minimum 6 months (unless longer required by sector law)
- Example: Financial services may require at least 5 years (extendable to 7) under MiFID II
12.3: Log Contents:
| Event Type | What to Log | Example |
|---|---|---|
| Input data | User queries, uploaded files | Customer question submitted to chatbot |
| AI outputs | Decisions, recommendations, scores | Recruitment AI recommends "Reject" |
| Human oversight | Human interventions, overrides | HR manager overrides "Reject" to "Interview" |
| System changes | Model updates, config changes | Upgraded from GPT-4 to GPT-4 Turbo |
SignalBreak Evidence: ✅ Compliant — 7+ audit log entries recorded in last 30 days (workflow changes, provider binding updates).
What SignalBreak Logs:
- Workflow creation/modification/deletion
- Provider binding changes (model selection, fallback configuration)
- User actions (who made changes, when)
What SignalBreak Doesn't Log:
- Individual AI requests (your chatbot conversations)
- AI outputs (what the AI said to users)
- End-user interactions (customer queries)
Action Required: For High-Risk workflows:
Implement application-level logging (beyond SignalBreak governance logs):
- Log every AI request (input, output, timestamp, user ID)
- Retain for 6+ months (EU AI Act) or sector-specific requirement
- Encrypt logs (personal data protection)
Example logging architecture:
User Request → Your Application → OpenAI API
↓
Log to Database:
- Timestamp: 2026-01-26T10:15:30Z
- User ID: user_12345
- Input: "Analyze this resume"
- Output: "Candidate score: 75/100, Recommend: Interview"
- Model: gpt-4-turbo
- Human Override: None
- Retention: 6 months from creation
Implement log retention policy:
- Automatic deletion after retention period
- Backup for audit (authorities can request logs)
- GDPR-compliant (purpose limitation, data minimization)
Tools:
- Cloud logging: AWS CloudWatch, Google Cloud Logging, Azure Monitor
- Self-hosted: ELK Stack (Elasticsearch, Logstash, Kibana), Graylog
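A minimal sketch of the application-level logging described above, assuming your application wraps each provider call; it writes JSON lines to a local file, whereas production systems would typically write to one of the logging services listed under Tools and encrypt the records:

```python
import json
import uuid
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # roughly the 6-month Article 12 minimum

def log_ai_request(user_id, prompt, output, model, human_override=None, path="ai_audit.log"):
    """Append one AI request/response record as a JSON line (restrict access and encrypt in production)."""
    now = datetime.now(timezone.utc)
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": now.isoformat(),
        "user_id": user_id,
        "input": prompt,
        "output": output,
        "model": model,
        "human_override": human_override,
        "delete_after": (now + RETENTION).isoformat(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Usage: call once per model response
log_ai_request("user_12345", "Analyse this resume",
               "Candidate score: 75/100, Recommend: Interview", "gpt-4-turbo")
```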
Estimated effort:
- Implementation: 1-2 weeks (application logging integration)
- Ongoing: £500-2k/month (log storage costs for high-volume AI)
Article 13: Transparency and Information to Users
Requirement: Users must be informed they are interacting with a High-Risk AI system and understand its capabilities/limitations.
Transparency Obligations:
13.1: User Disclosure
- Inform users AI system is being used
- Explain system's purpose and capabilities
- Disclose limitations and conditions where system may underperform
13.2: Information Content:
| What to Disclose | Example |
|---|---|
| Purpose | "This AI screens resumes to identify suitable candidates" |
| How it works | "AI analyses keywords, experience, education" |
| Capabilities | "Can process 1,000 resumes in 10 minutes" |
| Limitations | "May not understand unconventional career paths" |
| When it may fail | "Less accurate for non-English resumes" |
| Human oversight | "All AI recommendations reviewed by HR manager" |
13.3: Target Audience
- Tailor disclosure to deployer (if B2B) or end user (if B2C)
- Language: Clear, concise, appropriate for audience
SignalBreak Evidence: ✅ Compliant — 2 chatbot/agent workflows identified, 6 of 6 workflows have transparency documentation (business context).
Gap: User disclosure mechanisms not implemented (no front-end banners, terms of service mentions).
Action Required: For High-Risk workflows:
- Implement user disclosure:
Example: Recruitment AI
╔═══════════════════════════════════════════════════════╗
║ 🤖 AI-Assisted Recruitment ║
║ ║
║ Your application will be screened by an AI system ║
║ to identify suitable candidates. A human HR team ║
║ member will review all AI recommendations before ║
║ making final decisions. ║
║ ║
║ Learn more: [Link to AI Policy] ║
╚═══════════════════════════════════════════════════════╝
Example: Credit Scoring AI
Your credit application will be assessed using automated decision-making.
The AI considers [list factors: income, credit history, etc.].
You have the right to:
- Request human review of the decision (GDPR Article 22)
- Access the logic behind the decision
- Contest the decision
Contact us: ai-decisions@company.com
Update Terms of Service:
- Add "AI Use Disclosure" section
- List High-Risk AI systems
- Explain rights (GDPR Article 22 right to human review)
Create AI Transparency Page:
- Public-facing page explaining AI use
- Link from user-facing applications
- Update annually or when AI changes
Estimated effort:
- Implementation: 1-2 days (front-end banners, legal review)
- Legal review: £1k-3k (external counsel for T&C updates)
Article 14: Human Oversight
Requirement: High-Risk AI systems must be designed for effective human oversight to prevent/minimize risks.
Human Oversight Requirements:
14.1: Design for Oversight
- System enables human oversight measures
- Humans can:
- Fully understand AI capabilities/limitations
- Monitor AI operation
- Interpret AI outputs
- Intervene or interrupt AI (stop button)
- Disregard, override, or reverse AI outputs
14.2: Oversight Measures:
| Measure | Description | Example |
|---|---|---|
| Identify risks | Humans detect anomalies, errors | HR manager notices AI rejecting all candidates over 50 |
| Stop system | Ability to halt AI operation | Emergency stop for autonomous vehicle AI |
| Override outputs | Change AI decision | Credit officer overrides AI "Reject" decision |
| Competent humans | Trained, knowledgeable users | HR staff trained on AI bias, limitations |
14.3: Limits on Automation
- Humans not overly reliant on AI
- No "automation bias" (blind trust in AI)
- Meaningful human oversight (not rubber-stamping)
SignalBreak Evidence: 🟡 Partial — 2 of 2 Critical workflows have human-in-the-loop controls; 3 of 6 workflows overall have human oversight.
Gap:
- Only 50% of workflows have human oversight enabled
- No documentation of oversight procedures
- No training records for human overseers
Action Required: For each High-Risk workflow:
Enable Human-in-Loop:
- Set `human_in_loop: true` in the SignalBreak workflow
- Document the oversight procedure
Create Oversight Procedure:
Template: Human Oversight Procedure
Workflow: Recruitment Resume Screening AI
Human Overseer: HR Manager (Jane Smith)
Oversight Measures:
1. Review: HR Manager reviews ALL AI recommendations (Accept/Reject)
2. Override Authority: HR Manager can accept AI-rejected candidates
3. Stop Condition: If AI rejects >90% of candidates, halt screening and investigate
4. Training: HR Manager completed "AI Bias Awareness" training (Annual)
5. Escalation: Unusual patterns escalated to HR Director
Override Criteria:
- Candidate has unique skills not recognized by AI
- AI may have discriminated based on protected characteristic
- Human judgment indicates AI error
Monitoring:
- Weekly: Review override rate (target <10% of candidates)
- Monthly: Analyze AI performance vs. human decisions
- Quarterly: Retrain AI if drift detected
Train Human Overseers:
- AI limitations and failure modes
- Bias awareness (protected characteristics)
- When to override AI decisions
- Record training attendance (audit evidence)
Monitor Override Rates:
- Track how often humans override AI
- Low override rate (<5%) → automation bias risk
- High override rate (>30%) → AI underperforming, retrain
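A minimal sketch of the override-rate check described above, assuming each logged decision records both the AI recommendation and the final human decision; the 5% and 30% thresholds are the illustrative ones from this section:

```python
def override_rate(decisions):
    """decisions: list of dicts with 'ai_decision' and 'final_decision'. Returns share overridden."""
    if not decisions:
        return 0.0
    overridden = sum(1 for d in decisions if d["final_decision"] != d["ai_decision"])
    return overridden / len(decisions)

def oversight_alert(decisions, low=0.05, high=0.30):
    rate = override_rate(decisions)
    if rate < low:
        return f"Override rate {rate:.0%}: possible automation bias (rubber-stamping)"
    if rate > high:
        return f"Override rate {rate:.0%}: AI may be underperforming; consider retraining"
    return f"Override rate {rate:.0%}: within expected range"

# Illustrative weekly batch: 48 confirmed rejections, 2 human overrides
decisions = [{"ai_decision": "Reject", "final_decision": "Reject"}] * 48 + \
            [{"ai_decision": "Reject", "final_decision": "Interview"}] * 2
print(oversight_alert(decisions))
```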
Estimated effort:
- Procedure creation: 1-2 days per High-Risk workflow
- Training: 2-4 hours per overseer (initial), 1 hour annual refresher
- Monitoring: 2 hours/month per workflow
Article 15: Accuracy, Robustness, and Cybersecurity
Requirement: High-Risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity.
Accuracy Requirements:
15.1: Appropriate Accuracy
- Performance levels appropriate to intended purpose
- Eliminate/reduce risks to health, safety, fundamental rights
- Trade-offs: Accuracy vs. fairness (bias mitigation may reduce accuracy)
15.2: Metrics and Thresholds:
| Metric | Definition | Example Threshold |
|---|---|---|
| Accuracy | % of correct predictions | Recruitment AI: >85% accuracy on test set |
| Precision | % of positive predictions that are correct | Fraud detection: >90% precision (low false positives) |
| Recall | % of actual positives correctly identified | Medical diagnosis: >95% recall (catch all diseases) |
| Fairness | Disparate impact across groups | <10% difference in acceptance rate by gender |
Robustness Requirements:
15.3: Resilience
- Resistant to errors, faults, inconsistencies
- Handles edge cases gracefully
- Example: AI chatbot doesn't crash on unusual inputs
15.4: Adversarial Robustness
- Resistant to manipulation attempts
- Example: Recruitment AI detects resume keyword stuffing
Cybersecurity Requirements:
15.5: Security Measures
- Protection against unauthorized access
- Data poisoning defenses (malicious training data)
- Model stealing prevention
SignalBreak Evidence: ❌ Non-Compliant — No workflow-level accuracy/robustness testing documented.
Gap:
- No performance metrics tracked
- No adversarial testing conducted
- Provider security (SOC 2) documented, but workflow-level security not assessed
Action Required: For each High-Risk workflow:
- Define Performance Thresholds:
| Workflow | Metric | Threshold | Rationale |
|---|---|---|---|
| Recruitment AI | Accuracy | >85% | Balance accuracy vs. fairness |
| Recruitment AI | Fairness (gender) | <10% disparity | Legal requirement (Equality Act 2010) |
| Credit Scoring | Precision | >90% | Minimize false rejections (customer satisfaction) |
| Credit Scoring | Recall | >80% | Minimize false approvals (credit risk) |
Conduct Performance Testing:
- Create test dataset (representative of real users)
- Run AI predictions on test set
- Calculate metrics (accuracy, precision, recall, fairness)
- Document results in Technical Documentation (Article 11)
Implement Monitoring:
- Track performance metrics in production
- Alert if metrics drop below thresholds
- Retrain model if performance degrades
Adversarial Testing:
- Red team: Attempt to manipulate AI outputs
- Example: Recruitment AI — submit keyword-stuffed resume
- Document vulnerabilities and mitigations
Cybersecurity Assessment:
- Penetration testing of AI endpoints
- Access control review (who can modify AI?)
- Data encryption (training data, model weights, logs)
Tools:
- Performance monitoring: ML ops platforms (MLflow, Weights & Biases, Kubeflow)
- Fairness testing: Fairlearn (Microsoft), AI Fairness 360 (IBM), Aequitas
- Adversarial testing: CleverHans, Adversarial Robustness Toolbox (ART)
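A minimal sketch of the performance-testing step, assuming a labelled, representative test set, binary decisions, and scikit-learn installed; the thresholds passed in are the illustrative ones from the table above, and the fairness measure here is a simple gap in positive-prediction rates (the listed fairness toolkits offer more rigorous metrics):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

def evaluate(y_true, y_pred, groups, thresholds):
    """Compute Article 15-style metrics on a held-out test set against your own thresholds."""
    results = {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
    }
    # Fairness: gap in positive-prediction rate between groups
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    results["fairness_gap"] = max(rates.values()) - min(rates.values())

    for metric, value in results.items():
        target = thresholds.get(metric)
        passed = target is None or (value <= target if metric == "fairness_gap" else value >= target)
        print(f"{metric}: {value:.2f} ({'PASS' if passed else 'FAIL'})")
    return results

# Illustrative test data: 1 = shortlist, 0 = reject
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["f", "m", "f", "f", "m", "m", "f", "m"]
evaluate(y_true, y_pred, groups,
         {"accuracy": 0.85, "precision": 0.90, "recall": 0.80, "fairness_gap": 0.10})
```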
Estimated effort:
- Initial testing: 2-4 weeks per High-Risk workflow
- Ongoing monitoring: £500-2k/month (tooling + staff time)
- Annual security assessment: £5k-15k (external pen test)
Title IV: Transparency Obligations for Limited-Risk AI
Article 52: Transparency Obligations for Certain AI Systems
Requirement: Limited-risk AI systems (chatbots, emotion recognition, biometric categorization, deepfakes) must disclose AI use to users.
Limited-Risk Categories:
52.1: Chatbots and Conversational AI
- Users must be informed they are interacting with AI
- Exception: Obvious from context (e.g., branded AI assistant)
- Penalty: up to €15M or 3% of global turnover
52.2: Emotion Recognition Systems
- Inform users when AI detects emotions
- Example: Call center AI analyzing customer frustration
52.3: Biometric Categorization
- Inform users when AI infers demographics (age, gender, race)
- Example: Retail AI analyzing shopper demographics
52.4: AI-Generated Content (Deepfakes)
- Watermark AI-generated images, video, audio
- Disclose AI use prominently
- Exception: Creative works (art, satire) with disclosure
SignalBreak Evidence: 🟡 Partial — 2 chatbot/agent workflows identified, but user disclosure mechanisms not fully implemented.
Gap:
- No front-end disclosure banners
- No watermarking for AI-generated content
- Terms of Service don't mention AI use
Action Required: For chatbots:
- Implement disclosure banner:
<!-- Example: Front-end chatbot widget -->
<div class="ai-disclosure-banner">
<span>🤖</span> You're chatting with an AI assistant.
<a href="/ai-policy">Learn more</a>
</div>
- Exception assessment:
- Is AI use "obvious from context"?
- If branded "AI Assistant" → may not need disclosure
- If looks like human (no branding) → disclosure required
For emotion recognition:
- Add disclosure:
- Before analysis: "This call may be analyzed for quality assurance and customer satisfaction" (mention AI if used)
- During call: "Your tone indicates frustration. Would you like to speak to a supervisor?"
For AI-generated content:
- Implement watermarking:
- Images: Embed metadata (EXIF tag: "AI-generated")
- Video: C2PA standard (Coalition for Content Provenance and Authenticity)
- Audio: Disclose in description/caption
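A minimal sketch of metadata-based marking for generated PNG images using Pillow; this is a lightweight stand-in for a full C2PA implementation, and the metadata keys are illustrative:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_ai_generated(src_path, dst_path, model_name):
    """Embed simple provenance metadata in a PNG (not a substitute for C2PA signing)."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("generator", model_name)
    image.save(dst_path, pnginfo=metadata)

def is_marked(path):
    """Check whether a PNG carries the illustrative provenance tag."""
    return Image.open(path).text.get("ai_generated") == "true"

# Usage (paths and model name are illustrative):
# mark_ai_generated("output.png", "output_marked.png", "image-model-v1")
# print(is_marked("output_marked.png"))
```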
Estimated effort:
- Chatbot disclosure: 1-2 hours (front-end banner)
- Emotion recognition: 1 day (legal review, user notification)
- Watermarking: 1-2 weeks (implement C2PA standard)
Title VIII: Post-Market Monitoring
Article 72: Post-Market Monitoring System
Requirement: High-Risk AI providers must establish post-market monitoring to collect and analyze data on AI performance after deployment.
Post-Market Monitoring Obligations:
72.1: Monitoring Plan
- Document systematic procedure for monitoring
- Collect data on AI performance in real-world use
- Identify trends indicating safety/rights risks
72.2: Data Collection:
| Data Type | What to Collect | Example |
|---|---|---|
| Performance metrics | Accuracy, precision, recall in production | Recruitment AI: 82% accuracy (below 85% threshold) |
| Incidents | Failures, errors, unexpected behavior | Credit scoring AI: 15 false rejections last month |
| User feedback | Complaints, concerns, suggestions | Customers report chatbot gives incorrect tax advice |
| Context changes | Drift in user population, use cases | Recruitment AI now used for senior roles (originally designed for entry-level) |
72.3: Serious Incidents (Article 73)
- Report serious incidents to authorities within 15 days
- Serious incident: Death, serious health damage, fundamental rights violation
- Example: Biometric AI wrongly identifies person as terrorist → arrest
SignalBreak Evidence: 🟡 Partial — Audit logging in place (Article 12), provider health monitoring active (5,000+ signals), but systematic post-market monitoring procedures not established.
Gap:
- No formal post-market monitoring plan
- No incident reporting procedure
- Provider monitoring (external) exists, but workflow performance monitoring (internal) missing
Action Required: For each High-Risk workflow:
- Create Post-Market Monitoring Plan:
Template:
Workflow: Recruitment Resume Screening AI
Monitoring Responsible: HR Director
Data Collection:
- Daily: Accuracy metrics (from application logs)
- Weekly: User feedback (HR team surveys)
- Monthly: Bias analysis (gender/age/race disparities)
- Quarterly: Performance review vs. initial test results
Incident Definition:
- Serious: AI discriminates against protected group (e.g., rejects all candidates over 50)
- Moderate: Accuracy drops below 85% threshold
- Minor: Isolated errors (1-2 incorrect scores per week)
Escalation:
- Serious → Report to authorities within 15 days + halt system
- Moderate → Investigate root cause, retrain model if needed
- Minor → Log and monitor for patterns
Review Frequency:
- Monthly: Review monitoring data
- Quarterly: Update monitoring plan if context changes
Implement Incident Reporting:
- Create incident response playbook
- Train staff on serious incident criteria
- Designate authority contact (EU Member State AI authority)
Establish Feedback Loop:
- Collect user feedback (HR team, end users)
- Analyze feedback for patterns
- Update AI system based on learnings (continuous improvement)
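A minimal sketch of the escalation logic in the monitoring plan above, assuming you compute production accuracy and group-level rejection rates from your application logs; the thresholds mirror the template's illustrative values:

```python
def classify_finding(production_accuracy, rejection_rate_over_50=None,
                     accuracy_threshold=0.85):
    """Map monitoring observations to the incident tiers defined in the plan above."""
    if rejection_rate_over_50 is not None and rejection_rate_over_50 > 0.90:
        # Pattern suggesting discrimination against a protected group
        return "SERIOUS: halt system and assess Article 73 reporting (15-day deadline)"
    if production_accuracy < accuracy_threshold:
        return "MODERATE: investigate root cause; retrain if degradation persists"
    return "OK: log and continue monitoring"

print(classify_finding(production_accuracy=0.82))
print(classify_finding(production_accuracy=0.90, rejection_rate_over_50=0.95))
```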
Estimated effort:
- Plan creation: 1-2 days per High-Risk workflow
- Ongoing monitoring: 4-8 hours/month per workflow (data analysis, reporting)
EU AI Act Compliance Roadmap
Immediate (Next 30 Days) — Critical
Priority 1: Risk Classification (Article 6)
✅ Action: Classify all AI workflows as Prohibited/High-Risk/Limited-Risk/Minimal-Risk
Steps:
- Review all workflows against Annex III categories
- Create Risk Classification Register (see Article 6 template above)
- Document justification for each classification
- Identify High-Risk workflows requiring full compliance
Deliverable: Risk Classification Register (PDF/spreadsheet)
Owner: Chief AI Officer, Legal Counsel
Effort: 1-2 weeks (includes legal review)
Priority 2: Prohibited AI Audit (Article 5)
✅ Action: Verify no workflows violate prohibited AI practices
Steps:
- Review Article 5 prohibited practices (see table above)
- Self-certify compliance (all "No" answers)
- Document audit results
Deliverable: Article 5 Compliance Certification (1-page doc)
Owner: Compliance Officer
Effort: 2-4 hours
Priority 3: Data Governance Documentation (Article 10)
✅ Action: Complete data governance documentation for all providers
Steps:
- Request data sheets from providers (OpenAI, Anthropic, etc.)
- For High-Risk AI: Conduct bias audit on training data
- Document GDPR compliance for personal data processing
Deliverable: Provider Data Governance Package (per provider)
Owner: Data Protection Officer (DPO)
Effort: 1-2 weeks (depends on provider responsiveness)
Short-Term (Next 90 Days) — Important
Priority 4: Implement Fallback Mechanisms (Article 9)
✅ Action: Configure fallback providers for all Critical and High-Risk workflows
Steps:
- Add fallback provider bindings in SignalBreak (see Article 9 risk mitigation)
- Test failover procedures (simulate primary provider outage)
- Document fallback strategy in Risk Register
Deliverable: Fallback configuration + test results
Owner: Engineering Manager
Effort: 1-2 weeks (implementation + testing)
Priority 5: User Disclosure Procedures (Article 13, Article 52)
✅ Action: Implement user disclosure for chatbot/agent systems and High-Risk AI
Steps:
- Add front-end disclosure banners (see Article 13 examples)
- Update Terms of Service (AI Use Disclosure section)
- Create AI Transparency Page (public-facing)
Deliverable: Disclosure implementation + legal T&C update
Owner: Product Manager, Legal Counsel
Effort: 1-2 weeks (front-end dev + legal review)
Priority 6: Post-Market Monitoring Framework (Article 72)
✅ Action: Establish post-market monitoring for High-Risk AI
Steps:
- Create Post-Market Monitoring Plan (see Article 72 template)
- Implement monitoring dashboards (accuracy, feedback, incidents)
- Train staff on incident reporting (serious incidents → 15-day deadline)
Deliverable: Post-Market Monitoring Plan + dashboards
Owner: AI Operations Lead
Effort: 2-4 weeks (tooling + process documentation)
Medium-Term (Next 180 Days) — Comprehensive
Priority 7: Technical Documentation Packages (Article 11)
✅ Action: Create comprehensive technical documentation for High-Risk AI
Steps:
- Use EU AI Act Technical Documentation Template (EC website)
- Complete all 8 sections (see Article 11 requirements)
- Store securely (10-year retention requirement)
Deliverable: Technical Documentation Package (per High-Risk workflow)
Owner: AI Product Manager, Technical Writer
Effort: 2-4 weeks per High-Risk workflow
Priority 8: Quality Management System (Article 17)
✅ Action: Implement quality management system (QMS) for High-Risk AI
Steps:
- Adopt existing QMS framework (ISO 9001, ISO 13485) or create EU AI Act-specific QMS
- Document quality policies, procedures, controls
- Conduct internal audits (quarterly)
Deliverable: QMS documentation + audit reports
Owner: Quality Manager, Compliance Officer
Effort: 2-3 months (substantial organizational change)
Priority 9: Conformity Assessment (Article 43)
✅ Action: Conduct conformity assessment for High-Risk AI (self-assessment or third-party)
Steps:
- Self-assessment: Internal audit against all EU AI Act requirements (Articles 9-15)
- Third-party assessment: Engage Notified Body (for Annex III listed systems) or accredited assessor
- Issue Declaration of Conformity (DoC)
Deliverable: Conformity Assessment Report + Declaration of Conformity
Owner: Compliance Officer, External Assessor
Effort: 1-2 months (self-assessment), 3-6 months (third-party)
Cost: £0 (self), £20k-60k (third-party)
Regulatory Risk Assessment
Current Risk Exposure (Typical Organization Without Compliance)
Critical Compliance Gaps:
| Gap | Article | Penalty Exposure | Likelihood (if audited) |
|---|---|---|---|
| No High-Risk classification | Article 6 | €15M or 3% turnover | 90% (easy to detect) |
| Missing risk management | Article 9 | €15M or 3% turnover | 80% (high-risk AI without risk register) |
| Incomplete data governance | Article 10 | €15M or 3% turnover | 70% (provider docs exist, but bias audit missing) |
| No technical documentation | Article 11 | €15M or 3% turnover | 60% (some docs exist, but not EU AI Act format) |
Illustrative Worst-Case Exposure: up to €60M if all four gaps were fined separately at the maximum (a theoretical sum; actual penalties depend on the circumstances of each case)
Realistic First-Audit Penalty: €1M-5M (authorities typically issue warnings first, fines for repeated violations)
Probability of Audit
Enforcement Timeline:
| Phase | Timing | Audit Probability |
|---|---|---|
| Grace period | Feb 2025 - Aug 2026 | Low (5-10%) — authorities focused on education |
| Initial enforcement | Aug 2026 - Aug 2027 | Medium (20-40%) — spot checks, complaint-driven |
| Steady state | Aug 2027+ | High (50-70%) for high-risk AI in regulated sectors |
Audit Triggers:
| Trigger | Probability | Example |
|---|---|---|
| User complaint | High | Candidate reports discriminatory recruitment AI |
| Whistleblower | Medium | Employee reports non-compliant AI to authorities |
| Sector sweep | Medium | Financial regulator audits all banks' credit scoring AI |
| Random selection | Low | Statistical sampling by Member State authority |
| Serious incident | Very High | AI causes harm → mandatory investigation |
Highest-Risk Sectors for Audit:
- Finance (credit scoring, fraud detection)
- Employment (recruitment, HR analytics)
- Healthcare (diagnostic AI, treatment recommendations)
- Law enforcement (predictive policing, case analysis)
- Education (student assessment, admissions)
Estimated Compliance Investment
Cost Breakdown:
| Activity | Internal Effort | External Costs | Timeline |
|---|---|---|---|
| Legal counsel (EU AI Act specialist) | 20-40 hours | £10k-25k | Ongoing (initial + annual) |
| Risk classification | 40-80 hours | £0 | 30 days |
| Data governance | 40-80 hours | £5k-15k (bias audits) | 90 days |
| Technical documentation | 80-160 hours | £0-10k (templates, tools) | 180 days |
| Human oversight implementation | 80-160 hours | £10k-30k (training, tooling) | 90 days |
| Post-market monitoring | 40-80 hours | £5k-15k (dashboards, analytics) | 90 days |
| Conformity assessment | 80-160 hours | £20k-60k (third-party assessor) | 180 days |
Total Estimated Investment:
| Organization Size | Internal Hours | External Costs | Total (loaded cost) |
|---|---|---|---|
| SME (<100 employees, 5-10 AI workflows) | 400-800 hours | £50k-155k | £100k-£250k |
| Mid-market (100-1,000 employees, 10-30 workflows) | 800-1,600 hours | £100k-300k | £200k-£500k |
| Enterprise (1,000+ employees, 30+ workflows) | 1,600-3,200 hours | £200k-600k | £400k-£1M |
Assumptions:
- Loaded internal cost: £100/hour (salary + overhead)
- External specialist rates: £200-400/hour (legal, consulting)
Next Steps
Legal Review: Engage EU AI Act specialist counsel within 7 days
- Validate risk classification approach
- Review compliance gaps specific to your industry
- Assess liability exposure
Risk Classification Workshop: Complete systematic review within 21 days
- Convene cross-functional team (Legal, Compliance, Engineering, Product)
- Map all AI workflows to Annex III categories
- Document High-Risk systems requiring full compliance
Documentation Remediation: Address critical gaps within 30 days
- Complete provider data governance documentation
- Self-certify Article 5 prohibited AI compliance
- Create Risk Classification Register
Technical Controls: Implement required safeguards within 60-90 days
- Fallback mechanisms for High-Risk workflows
- User disclosure for chatbots and High-Risk AI
- Logging and record-keeping systems
Ongoing Monitoring: Establish quarterly compliance assessment schedule
- Post-market monitoring reviews
- Incident reporting drills
- Annual technical documentation updates
Common Questions
Does the EU AI Act apply to my organization if I'm based outside the EU?
Yes, if:
- ✅ You provide AI systems placed on the EU market (sold to EU customers)
- ✅ Your AI outputs are used in the EU (even if system hosted elsewhere)
- ✅ You're a deployer using third-party AI for EU operations
No, if:
- ❌ You only operate in non-EU markets with no EU customers
- ❌ Your AI is used exclusively for military, defense, national security (Article 2.3 exemption)
- ❌ Your AI is for research/development only (not placed on market)
Example:
- US SaaS company with EU customers → Covered
- UK fintech using AI for UK-only lending → Not covered (no system placed on the EU market and no outputs used in the EU)
- Indian outsourcing firm providing AI services to an EU client → Covered (the EU client has deployer obligations, but provider obligations fall on the firm whose system or outputs reach the EU)
Can SignalBreak alone get me EU AI Act compliant?
No. SignalBreak provides ~50-60% of evidence, but EU AI Act requires:
What SignalBreak provides:
- ✅ AI system inventory (Article 11)
- ✅ Provider monitoring (Article 72 post-market monitoring)
- ✅ Audit logs (Article 12 record-keeping)
- ✅ Workflow categorization (helps with Article 6 classification)
What you still need:
- ❌ Formal High-Risk classification (Article 6) — Legal review required
- ❌ Risk management system (Article 9) — Risk register, mitigation plans
- ❌ Data governance (Article 10) — Bias audits, GDPR compliance
- ❌ Technical documentation (Article 11) — Comprehensive docs per EU format
- ❌ User disclosure (Article 13, 52) — Front-end banners, T&C updates
- ❌ Conformity assessment (Article 43) — Self-assessment or third-party audit
Analogy: SignalBreak is like GitHub for EU AI Act — it tracks your AI systems and changes, but you still need compliance processes and legal review.
What happens if I classify my AI as "Minimal-Risk" and authorities disagree?
Risk: If authorities determine your AI is actually High-Risk (Annex III listed), you could face:
- €15M or 3% of global turnover (non-compliance with High-Risk obligations)
- Retroactive compliance requirements (implement Articles 9-15 immediately)
- Market ban (until compliance demonstrated)
Mitigation:
- Conservative classification: When in doubt, classify as High-Risk
- Legal review: Engage EU AI Act specialist to validate classification
- Document justification: Explain why Minimal-Risk (burden of proof on you)
- Periodic review: Re-classify as business context changes
Gray Area Example:
- Recruitment AI screening resumes (NOT final hiring decision)
- High-Risk interpretation: Annex III #4 "Employment" includes resume screening (affects access to employment)
- Minimal-Risk interpretation: Only affects candidate pool, humans make final decision (not safety component)
- Recommendation: Classify as High-Risk (conservative approach)
Best Practice: Engage Member State authority for advance guidance (some countries offer "sandbox" programs for compliance support).
How does GDPR relate to the EU AI Act?
Relationship:
| Law | Scope | Focus |
|---|---|---|
| GDPR | Personal data processing | Privacy, data protection rights |
| EU AI Act | AI systems (regardless of personal data) | Safety, fundamental rights, trustworthiness |
Overlap:
| AI Act Provision | GDPR Connection |
|---|---|
| Article 10 (Data Governance) | Must comply with GDPR for personal data in training datasets |
| Article 13 (Transparency) | Builds on GDPR Articles 13-14 (right to information) |
| Article 14 (Human Oversight) | High-Risk AI making automated decisions about individuals must also satisfy GDPR Article 22 |
Complementary Obligations:
- GDPR: Right to human review of automated decision (Article 22.3)
- EU AI Act: Human oversight for High-Risk AI (Article 14)
- Practical result: High-Risk AI involving personal data requires BOTH GDPR + EU AI Act compliance
Example: Recruitment AI
- EU AI Act: Classify as High-Risk (Annex III #4), implement Articles 9-15
- GDPR: Provide candidates with right to contest automated decision, explain logic (Article 22)
- Combined: Human oversight (EU AI Act) + right to human review (GDPR)
What's the timeline for authorities to start enforcing the EU AI Act?
Enforcement Phases:
| Phase | Timeline | Authority Focus | Penalty Likelihood |
|---|---|---|---|
| Grace period | Feb 2025 - Aug 2026 | Education, guidance, warnings | Low (warnings, no fines) |
| Initial enforcement | Aug 2026 - Aug 2027 | Complaint-driven audits, sector sweeps | Medium (first fines for egregious violations) |
| Full enforcement | Aug 2027+ | Proactive audits, annual inspections | High (systematic enforcement) |
Member State Readiness:
- Germany, France, Netherlands: Likely early enforcers (strong regulatory capacity)
- Southern/Eastern Europe: May lag (resource constraints)
- Commission role: Can initiate infringement proceedings against Member States for under-enforcement
Practical Advice:
- By Aug 2026: High-Risk AI must be compliant (24-month grace period from Feb 2025)
- By Aug 2027: All provisions enforceable (authorities can audit any time)
- Best practice: Achieve compliance by Q2 2026 to avoid rushed implementation
Related Documentation
- Governance Overview — Comparison of ISO 42001, NIST AI RMF, EU AI Act
- ISO 42001 Guide — Certifiable AI management system
- NIST AI RMF Guide — US federal risk framework
- Evidence Packs Guide — How to generate and use evidence packs
- Risk Scoring Methodology — Understanding your score
External Resources
- EU AI Act (Official Text): https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
- European Commission AI Act Page: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
- Annex III (High-Risk AI List): Full list in official text (searchable)
- Technical Documentation Template: EC website
- C2PA Standard (Content Provenance): https://c2pa.org/ (for Article 52 watermarking)
- GDPR Portal: https://gdpr.eu/ (for Article 10 personal data compliance)
Last updated: 2026-01-26
Based on: Regulation (EU) 2024/1689 (entered into force August 1, 2024) and its phased implementation timeline
⚠️ Legal Disclaimer: This guide provides general information about the EU Artificial Intelligence Act. It does not constitute legal advice or regulatory compliance certification. The EU AI Act imposes significant penalties for non-compliance (up to €35 million or 7% of global annual turnover, whichever is higher). Consult qualified legal counsel and EU regulatory experts for comprehensive compliance guidance specific to your organization and jurisdiction.