
EU Artificial Intelligence Act Guide

What is the EU AI Act?

The EU Artificial Intelligence Act (AI Act) is the world's first comprehensive legal framework for artificial intelligence, establishing harmonized rules for the development, marketing, and use of AI systems in the European Union.

Official Name: Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)

Entered into Force: August 1, 2024 (published in the Official Journal on July 12, 2024)

Enacted By: European Parliament and Council of the European Union

Framework Type: Mandatory legal regulation (not voluntary)

Geographic Scope:

  • All AI systems placed on the EU market or put into service in the EU
  • AI system outputs used in the EU (extraterritorial effect)
  • Applies regardless of where the provider/deployer is established

Why the EU AI Act Matters for AI Governance

1. Mandatory Compliance (Not Voluntary)

Unlike NIST AI RMF or ISO 42001, the EU AI Act is legally binding:

| Framework | Type | Penalties for Non-Compliance |
|---|---|---|
| EU AI Act | Legal regulation | Up to €35 million or 7% of global annual turnover (whichever is higher) |
| ISO 42001 | Voluntary standard | No legal penalties (reputational risk only) |
| NIST AI RMF | Voluntary framework | No legal penalties (federal procurement risk only) |

This matters because:

  • Highest fine tier applies to prohibited AI uses (Article 5); Member States may add further penalties, including criminal sanctions, under national law
  • Administrative fines up to €35M for high-risk system violations
  • Directors' personal liability in some member states
  • Market surveillance authorities can ban non-compliant AI systems

Penalties by Violation Type:

| Violation | Maximum Fine | Example |
|---|---|---|
| Prohibited AI (Article 5) | €35M or 7% turnover | Social scoring, subliminal manipulation |
| High-Risk non-compliance (Articles 9-15) | €15M or 3% turnover | Inadequate risk management, missing documentation |
| Supplying incorrect, incomplete, or misleading information to authorities | €7.5M or 1.5% turnover | Inaccurate responses to a notified body or national authority |

2. Risk-Based Regulatory Approach

The EU AI Act categorizes AI systems into 4 risk levels with escalating obligations:

Prohibited AI → High-Risk → Limited-Risk → Minimal-Risk
 (Banned)      (Strict)      (Transparency) (No obligations)

Risk-Based Philosophy:

  • Higher risk = stricter requirements
  • Most AI systems fall into Limited/Minimal (light-touch)
  • Only ~5-10% of AI systems are High-Risk (heavy regulation)

Comparison to Other Frameworks:

| Framework | Risk Approach | Obligations |
|---|---|---|
| EU AI Act | 4-tier (Prohibited/High/Limited/Minimal) | Tier-specific legal requirements |
| NIST AI RMF | Continuous risk scale | Organization defines risk tolerance |
| ISO 42001 | Risk-based management system | All systems managed proportionally |

3. Extraterritorial Application

The EU AI Act applies beyond EU borders:

You're covered if:

  • ✅ You provide AI systems placed on the EU market
  • ✅ You deploy AI systems in the EU (regardless of where you're based)
  • ✅ Your AI outputs are used in the EU (even if system hosted elsewhere)

Example Scenarios:

| Scenario | Covered by EU AI Act? | Why? |
|---|---|---|
| US company provides AI chatbot to EU customers | ✅ Yes | System placed on EU market |
| UK company uses AI recruitment tool for UK-only staff | ❌ No | No EU nexus (coverage would apply if the tool or its outputs were used in the EU) |
| Indian company's AI content moderation used by EU social media platform | ✅ Yes | AI output used in EU |
| Japanese company develops AI for robots sold globally, including the EU | ✅ Yes | Placed on EU market |

Practical Impact:

  • Non-EU providers must comply if they serve EU customers
  • Providers' responsibilities extend to distributors, importers, deployers
  • Third-party liability: Using non-compliant AI can expose deployers to fines

4. Phased Implementation Timeline

The EU AI Act has staggered effective dates (not all at once):

| Effective Date | What Takes Effect | Affected Parties |
|---|---|---|
| February 2, 2025 | Prohibited AI practices (Article 5) | All providers/deployers |
| August 2, 2025 | General Purpose AI (GPAI) obligations (Articles 51-56) | Foundation model providers (OpenAI, Anthropic, etc.) |
| August 2, 2026 | High-risk AI system requirements (Titles III-IV) | High-risk system providers/deployers |
| August 2, 2027 | Full regulation in force | All provisions applicable |

Grace Periods (measured from entry into force on August 1, 2024):

  • 6 months: Prohibited AI ban (February 2, 2025)
  • 24 months: High-risk systems compliance (August 2, 2026)
  • 36 months: Full enforcement (August 2, 2027)

Current Status (January 2026):

  • 🔴 Prohibited AI ban: ACTIVE (applicable since February 2, 2025)
  • 🟠 GPAI obligations: ACTIVE (applicable since August 2, 2025)
  • 🟡 High-risk requirements: 7 months until deadline (Aug 2, 2026)

How SignalBreak Maps to EU AI Act

SignalBreak provides automated evidence generation for key EU AI Act articles. Your workflows, scenarios, and provider monitoring support compliance with:

Article Coverage Summary

| EU AI Act Provision | SignalBreak Evidence | Status |
|---|---|---|
| Article 5: Prohibited AI | Workflow audit (no prohibited use cases detected) | 🟢 Compliant |
| Article 6: High-Risk Classification | Workflow categorization (requires manual risk classification) | 🔴 Gap |
| Article 9: Risk Management | Scenario analysis, risk scoring | 🟡 Partial |
| Article 10: Data Governance | Provider data policies (2/4 providers complete) | 🟡 Partial |
| Article 11: Technical Documentation | Workflow descriptions, provider bindings | 🟢 Compliant |
| Article 12: Record-Keeping | Audit logs (7+ entries/month) | 🟢 Compliant |
| Article 13: Transparency | Chatbot workflow identification | 🟢 Compliant |
| Article 14: Human Oversight | Human-in-loop flags (3/6 workflows) | 🟡 Partial |
| Article 52: User Disclosure | Chatbot transparency (requires implementation) | 🟡 Partial |
| Article 72: Post-Market Monitoring | Provider health tracking (5,000+ signals) | 🟢 Compliant |

Overall Readiness: Typically 50-70% for organizations with SignalBreak (varies by risk classification)


EU AI Act Risk Classification

The 4-Tier System

Tier 1: Prohibited AI (Article 5) — BANNED

Definition: AI systems that pose unacceptable risks to fundamental rights and safety.

Prohibited Practices:

| Prohibited Use | Example | Penalty |
|---|---|---|
| Subliminal manipulation | AI that manipulates behavior to cause harm | €35M or 7% turnover |
| Social scoring by authorities | Government assigns social credit scores | €35M or 7% turnover |
| Real-time biometric identification (public spaces) | Live facial recognition for mass surveillance | €35M or 7% turnover |
| Exploiting vulnerabilities | AI targeting children's psychological weaknesses | €35M or 7% turnover |

SignalBreak Evidence: 🟢 Compliant — Workflow audit shows no prohibited AI use cases in typical implementations.

Self-Certification: Review all workflows against Article 5 prohibited practices. If none apply, document compliance.


Tier 2: High-Risk AI (Articles 6-29) — STRICT REGULATION

Definition: AI systems listed in Annex III or used as safety components of regulated products.

Annex III High-Risk Categories:

| Category | Example Use Cases | Typical Workflows |
|---|---|---|
| Biometric identification | Facial recognition for access control | Security access workflows |
| Critical infrastructure | AI managing energy grids, water systems | Industrial control workflows |
| Education/training | AI grading, student assessment | Educational AI systems |
| Employment | AI recruitment, performance evaluation | HR AI workflows |
| Essential services | AI credit scoring, benefit eligibility | Financial/government services |
| Law enforcement | Predictive policing, crime risk assessment | Police/judicial AI |
| Migration/asylum | AI visa decisions, border control | Immigration systems |
| Justice | AI legal research affecting case outcomes | Legal tech AI |

High-Risk Obligations (if applicable):

| Obligation | Article | Description |
|---|---|---|
| Risk management system | Art 9 | Continuous risk assessment + mitigation |
| Data governance | Art 10 | Training data quality, provenance, bias checks |
| Technical documentation | Art 11 | System specifications, datasets, test results |
| Record-keeping | Art 12 | Automated logs for audit (min 6 months) |
| Transparency | Art 13 | User disclosure of AI use |
| Human oversight | Art 14 | Human-in-loop for high-risk decisions |
| Accuracy/robustness | Art 15 | Performance metrics, cybersecurity |

SignalBreak Evidence for High-Risk: 🔴 Critical gap — Most organizations using SignalBreak haven't formally classified workflows as High-Risk (Article 6 violation).

Action Required:

  1. Review all workflows against Annex III categories
  2. Document High-Risk classification (or justify why not High-Risk)
  3. Implement additional obligations if High-Risk identified

Tier 3: Limited-Risk AI (Article 52) — TRANSPARENCY ONLY

Definition: AI systems that interact with humans or generate/manipulate content.

Limited-Risk Categories:

| Type | Example | Transparency Requirement |
|---|---|---|
| Chatbots | Customer service AI, virtual assistants | Disclose AI use unless obvious |
| Emotion recognition | AI detecting user emotions | Inform users before use |
| Biometric categorization | AI inferring demographics from photos | User notification required |
| Deepfakes | AI-generated images/video/audio | Watermark + disclosure |

Limited-Risk Obligations:

  1. User disclosure: "You are interacting with an AI system"
  2. Watermarking: For AI-generated content (images, video, audio)
  3. Detect/label: AI-generated content from your systems

SignalBreak Evidence: 🟡 Partial — Chatbot workflows identified (2/6 typical implementations), but disclosure mechanisms not fully implemented.

Action Required: Implement user disclosure for chatbot workflows:

  • Front-end banner: "This conversation is powered by AI"
  • Terms of service: Mention AI use
  • Opt-out mechanism (where feasible)

Tier 4: Minimal-Risk AI — NO OBLIGATIONS

Definition: All other AI systems not in Prohibited/High/Limited categories.

Examples:

  • AI spam filters
  • AI inventory management
  • AI recommendation engines (non-manipulative)
  • AI translation tools
  • AI data analysis (internal use)

Obligations: None — Minimal-risk AI can be developed and deployed without EU AI Act obligations (GDPR and sector-specific laws still apply).

SignalBreak Evidence: 🟢 Compliant — Most workflows (4-5 of 6) typically fall into the Minimal-Risk category.

Best Practice: Document why each workflow is Minimal-Risk (proves you've conducted risk assessment per Article 6).


EU AI Act Article-by-Article Guide

Title II: Prohibited AI Practices

Article 5: Prohibited Artificial Intelligence Practices

Requirement: Certain AI practices are completely banned due to unacceptable risks to fundamental rights.

Prohibited Practices (Detailed):

5.1(a): Subliminal Manipulation

  • AI that deploys subliminal techniques to materially distort behavior
  • Causing physical/psychological harm
  • Example: Hidden audio cues in AI-generated ads to manipulate purchasing

5.1(b): Exploiting Vulnerabilities

  • AI targeting vulnerable groups (children, disabled, elderly)
  • Example: AI chatbot designed to extract money from elderly users

5.1(c): Social Scoring by Authorities

  • Public authority assigns social scores based on AI analysis
  • Score affects access to services/benefits
  • Example: Government AI scoring citizens' trustworthiness

5.1(d): Real-Time Biometric Identification in Public Spaces

  • Live facial recognition in publicly accessible spaces
  • Exceptions: Missing persons, prevent terrorist attacks (judicial authorization required)
  • Example: Retail store using live facial recognition without consent

SignalBreak Evidence: 🟢 Compliant — No workflows match prohibited practices.

Self-Certification Checklist:

| Question | Your Answer | If "Yes" → Violation |
|---|---|---|
| Does any AI manipulate users below conscious awareness? | ❌ No | Article 5.1(a) |
| Does any AI target vulnerable groups' weaknesses? | ❌ No | Article 5.1(b) |
| Does any AI assign social scores affecting rights? | ❌ No | Article 5.1(c) |
| Does any AI perform real-time biometric identification in public? | ❌ No | Article 5.1(d) |

If all "No" → Compliant with Article 5


Title III: High-Risk AI Systems

Article 6: Classification as High-Risk

Requirement: Determine whether each AI system is High-Risk based on Annex III categories or product safety laws.

Classification Methodology:

Step 1: Check if AI system is listed in Annex III

  • See "Annex III High-Risk Categories" table above
  • If listed → High-Risk (unless exception applies)

Step 2: Check if AI is a safety component of regulated product

  • Medical devices (MDR, IVDR)
  • Machinery (Machinery Regulation)
  • Toys, aviation, automotive, etc.
  • If safety-critical → High-Risk

Step 3: Apply exceptions (Article 6.3)

  • If AI performs narrow procedural task (data formatting, not decision-making) → Not High-Risk
  • If AI improves human decisions (not replacing) and low-risk → Not High-Risk

SignalBreak Evidence: 🔴 Non-Compliant — 0 of 6 workflows have formal High-Risk classification in typical implementations.

Gap: 100% of systems missing mandatory risk classification.

Action Required: Within 30 days:

  1. Create Risk Classification Register (see the sketch after this list):

| Workflow | Annex III Category? | Safety Component? | Exception Applies? | Classification | Justification |
|---|---|---|---|---|---|
| Customer Support Chatbot | ❌ No | ❌ No | N/A | Minimal-Risk | Internal customer service, no decision-making affecting rights |
| Email Classifier | ❌ No | ❌ No | N/A | Minimal-Risk | Data routing, no fundamental rights impact |
| Recruitment Screening | ✅ Yes (Employment) | ❌ No | ❌ No (replaces human screening) | High-Risk | Affects employment decisions per Annex III |
| Credit Scoring | ✅ Yes (Essential Services) | ❌ No | ❌ No (determines credit eligibility) | High-Risk | Affects access to financial services |
| Code Review Agent | ❌ No | ❌ No | N/A | Minimal-Risk | Assists developers, no critical decisions |
| Internal Chatbot | ❌ No | ❌ No | N/A | Minimal-Risk | Internal use, no external impact |

  2. Document justification for each classification
  3. Implement High-Risk obligations for any identified High-Risk systems (Articles 9-15)
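The register can also be kept in code so it stays versioned alongside your workflows. Below is a minimal Python sketch of the screening logic, assuming a simplified two-outcome decision (High-Risk vs. Minimal-Risk) and illustrative workflow names; it is not a substitute for legal review of each classification.

```python
from dataclasses import dataclass

# Annex III categories used for screening (illustrative shorthand, not the legal text)
ANNEX_III_CATEGORIES = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_asylum", "justice",
}

@dataclass
class WorkflowClassification:
    workflow: str
    annex_iii_category: str | None   # None if no Annex III match
    safety_component: bool           # safety component of a regulated product?
    exception_applies: bool          # narrow procedural task, etc. (Article 6.3)
    justification: str

    @property
    def classification(self) -> str:
        """Derive the risk tier from the screening answers (simplified logic)."""
        in_scope = self.annex_iii_category in ANNEX_III_CATEGORIES or self.safety_component
        if in_scope and not self.exception_applies:
            return "High-Risk"
        return "Minimal-Risk"  # Limited-Risk (transparency) is handled separately

register = [
    WorkflowClassification(
        workflow="Recruitment Screening",
        annex_iii_category="employment",
        safety_component=False,
        exception_applies=False,
        justification="Affects access to employment per Annex III",
    ),
    WorkflowClassification(
        workflow="Customer Support Chatbot",
        annex_iii_category=None,
        safety_component=False,
        exception_applies=False,
        justification="Internal customer service, no decisions affecting rights",
    ),
]

for entry in register:
    print(f"{entry.workflow}: {entry.classification} ({entry.justification})")
```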

Penalty for non-classification: €15M or 3% of global turnover (failure to comply with High-Risk obligations)


Article 9: Risk Management System

Requirement: High-Risk AI systems must have a continuous risk management process throughout their lifecycle.

Risk Management Obligations:

9.2(a): Risk Identification

  • Identify known/foreseeable risks
  • Includes risks to health, safety, fundamental rights
  • Example: Bias in recruitment AI discriminating against protected groups

9.2(b): Risk Estimation and Evaluation

  • Assess likelihood and severity
  • Use documented methodology
  • Example: Probability of false positive × Impact on candidate

9.2(c): Risk Mitigation

  • Eliminate or reduce risks to acceptable level
  • Implement safeguards (human oversight, fallbacks)
  • Example: Add human review for borderline candidates

9.2(d): Residual Risk Assessment

  • Evaluate remaining risks after mitigation
  • Provide information to deployers
  • Example: Disclose known failure modes to HR team

SignalBreak Evidence: 🟡 Partial — 6 of 6 workflows have risk assessment via criticality levels (Critical, High, Medium, Low), but High-Risk AI-specific risk management incomplete.

Gap:

  • Risk mitigation controls incomplete (0 of 6 workflows have fallback mechanisms in typical state)
  • Residual risk documentation missing
  • No formal risk management process document

Action Required: For each High-Risk workflow:

  1. Create Risk Register (see the sketch after this list):

| Risk ID | Risk Description | Likelihood | Severity | Impact on Rights | Mitigation | Residual Risk | Owner |
|---|---|---|---|---|---|---|---|
| RR-001 | Recruitment AI bias against women | Medium | High | Discrimination (Charter Article 21) | Quarterly bias audit, human review of all rejections | Low (monitored) | HR Director |
| RC-001 | Credit scoring false negatives | Low | Critical | Access to financial services | Second AI model review, human appeal process | Medium (acceptable) | CRO |

  2. Document risk management process (procedure for identifying, assessing, mitigating risks)
  3. Assign risk owner (accountable for monitoring and mitigation)
  4. Review quarterly (continuous risk management)
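A minimal sketch of how the register entries above might be scored and ranked for review, assuming a simple likelihood × severity scale; the scales, thresholds, and field names are illustrative and not prescribed by the Act.

```python
# Minimal risk-register sketch: scores likelihood x severity and ranks entries
# for quarterly review. Scales and field names are illustrative assumptions.
LIKELIHOOD = {"Low": 1, "Medium": 2, "High": 3}
SEVERITY = {"Low": 1, "Medium": 2, "High": 3, "Critical": 4}

risk_register = [
    {
        "id": "RR-001",
        "description": "Recruitment AI bias against women",
        "likelihood": "Medium",
        "severity": "High",
        "mitigation": "Quarterly bias audit, human review of all rejections",
        "residual_risk": "Low",
        "owner": "HR Director",
    },
    {
        "id": "RC-001",
        "description": "Credit scoring false negatives",
        "likelihood": "Low",
        "severity": "Critical",
        "mitigation": "Second-model review, human appeal process",
        "residual_risk": "Medium",
        "owner": "CRO",
    },
]

def inherent_score(risk: dict) -> int:
    """Simple likelihood x severity score used to rank risks for review."""
    return LIKELIHOOD[risk["likelihood"]] * SEVERITY[risk["severity"]]

for risk in sorted(risk_register, key=inherent_score, reverse=True):
    print(f'{risk["id"]}: score={inherent_score(risk)} '
          f'residual={risk["residual_risk"]} owner={risk["owner"]}')
```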

Audit Readiness: Risk register demonstrates compliance with Article 9 obligations.


Article 10: Data and Data Governance

Requirement: Training, validation, and testing datasets must meet data quality criteria and be managed with appropriate governance.

Data Governance Obligations:

10.2: Design Choices

  • Datasets appropriate for intended purpose
  • Representative of use cases
  • Relevant, error-free (to best knowledge)

10.3: Data Properties

  • Examine for biases
  • Identify gaps/shortcomings
  • Determine suitability despite imperfections

10.4: Data Processing

  • Appropriate measures for data quality:
    • Relevance
    • Representativeness
    • Accuracy
    • Completeness
    • Consistency

10.5: Personal Data

  • GDPR compliance for personal data processing
  • Lawful basis, purpose limitation, data minimization
  • Special categories of data (race, health, etc.) → explicit consent/legal basis

SignalBreak Evidence: 🟡 Partial — 2 of 4 providers have complete data governance documentation (OpenAI, Anthropic have public data policies; smaller providers may lack transparency).

Gap: 50% of providers missing:

  • Data retention policies
  • Data classification (PII handling)
  • GDPR compliance attestations
  • Training data provenance

Action Required: For each provider:

  1. Request Data Governance Documentation:

| Provider | Data Sheet | Training Data | GDPR Compliance | Retention Policy | SignalBreak Has? |
|---|---|---|---|---|---|
| OpenAI | ✅ Available | ✅ Public (filtered internet) | ✅ SOC 2, DPA | ✅ 30-day API logs | Yes |
| Anthropic | ✅ Available | ✅ Public + licensed | ✅ SOC 2, DPA | ✅ Configurable | Yes |
| Ollama (self-hosted) | ❌ N/A (self-hosted) | ⚠️ User-provided | ⚠️ User responsibility | ⚠️ User-controlled | Partial |
| Google Vertex AI | ✅ Available | ✅ Google datasets | ✅ ISO 27001, 27701 | ✅ Configurable | Yes |

  2. For High-Risk AI using third-party models:

    • Request model card (dataset description, known biases, performance metrics)
    • Conduct bias audit (test for disparate impact on protected groups)
    • Document data quality assessment
  3. For self-hosted/fine-tuned models:

    • Maintain dataset documentation (source, size, demographics)
    • Conduct bias analysis (check for underrepresentation)
    • Implement data quality monitoring (detect drift, label errors)
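For the bias analysis steps above, here is a minimal sketch of a selection-rate comparison (the "four-fifths" rule of thumb), assuming a small in-memory sample; in practice you would load historical decisions or labelled data and likely use a dedicated toolkit such as Fairlearn or AI Fairness 360.

```python
# Minimal bias-check sketch for Article 10-style data governance: compares
# selection rates across a protected attribute using the four-fifths rule of
# thumb. Column names and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

records = [
    {"gender": "female", "selected": True},
    {"gender": "female", "selected": False},
    {"gender": "male", "selected": True},
    {"gender": "male", "selected": True},
    # ... in practice, load historical decisions or labelled training data
]

totals, selected = defaultdict(int), defaultdict(int)
for row in records:
    totals[row["gender"]] += 1
    selected[row["gender"]] += int(row["selected"])

rates = {group: selected[group] / totals[group] for group in totals}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = rate / reference if reference else 0.0
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio vs best {ratio:.2f} [{flag}]")
```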

Estimated effort:

  • Provider documentation: 2-4 hours per provider (review existing docs)
  • Bias audit: 1-2 weeks per High-Risk workflow (requires statistical analysis)

Article 11: Technical Documentation

Requirement: High-Risk AI systems must have comprehensive technical documentation demonstrating compliance.

Documentation Requirements:

| Section | Required Content | SignalBreak Evidence |
|---|---|---|
| General description | System purpose, intended users, deployment context | ✅ Workflow descriptions (6/6) |
| Design specifications | Architecture, algorithms, data flow | 🟡 Provider documentation (varies) |
| Datasets | Training/validation/test data details | 🟡 Provider data sheets (2/4 complete) |
| Risk management | Risk register, mitigation measures | ❌ Requires Article 9 implementation |
| Performance metrics | Accuracy, precision, recall, fairness | ❌ Workflow-level testing needed |
| Human oversight | Oversight measures, capabilities, limitations | 🟡 Human-in-loop flags (3/6 workflows) |
| Cybersecurity | Security measures, vulnerability assessments | 🟡 Provider SOC 2 attestations |
| Conformity assessment | Test reports, certificates (if third-party assessed) | ❌ Post-compliance only |

SignalBreak Evidence: 🟡 Partial — Basic documentation (workflow descriptions, business context) exists, but High-Risk AI-specific technical documentation missing.

Gap:

  • No formal technical documentation package
  • Performance metrics not tracked
  • Conformity assessment not conducted

Action Required: For each High-Risk workflow:

  1. Create Technical Documentation Package (single PDF/document per workflow):

    • Section 1: General description (use SignalBreak workflow description)
    • Section 2: Design specifications (provider model card + your integration architecture)
    • Section 3: Datasets (provider data sheet OR your training data documentation)
    • Section 4: Risk management (link to Article 9 risk register)
    • Section 5: Performance metrics (accuracy test results, bias audit reports)
    • Section 6: Human oversight (describe human-in-loop procedures)
    • Section 7: Cybersecurity (provider SOC 2 + your endpoint security)
    • Section 8: Conformity assessment (if third-party assessed)
  2. Store securely (must be available to authorities for 10 years after last use)

  3. Update annually or when system changes materially
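One lightweight way to start the package is to scaffold the eight sections per workflow and fill them in over time. The sketch below writes a skeleton Markdown file; the file layout and helper name are assumptions, not an official format.

```python
# Sketch that scaffolds the 8-section technical documentation package as a
# Markdown file per workflow. Section titles mirror the list above.
from pathlib import Path

SECTIONS = [
    "General description",
    "Design specifications",
    "Datasets",
    "Risk management",
    "Performance metrics",
    "Human oversight",
    "Cybersecurity",
    "Conformity assessment",
]

def scaffold_tech_doc(workflow_name: str, out_dir: str = "tech-docs") -> Path:
    """Create a skeleton Article 11 documentation file for one workflow."""
    path = Path(out_dir) / f"{workflow_name.lower().replace(' ', '-')}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    lines = [f"# Technical Documentation: {workflow_name}", ""]
    for i, title in enumerate(SECTIONS, start=1):
        lines += [f"## Section {i}: {title}", "",
                  "_TODO: complete before conformity assessment._", ""]
    path.write_text("\n".join(lines), encoding="utf-8")
    return path

print(scaffold_tech_doc("Recruitment Resume Screening AI"))
```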

Template: Download EU AI Act Technical Documentation Template from EC website: https://ec.europa.eu/digital-strategy/our-policies/european-approach-artificial-intelligence_en

Estimated effort:

  • Initial creation: 2-4 weeks per High-Risk workflow (substantial documentation)
  • Annual update: 1-2 days per workflow

Article 12: Record-Keeping (Logging)

Requirement: High-Risk AI systems must have automatic logging capabilities to enable traceability.

Logging Requirements:

12.1: Logging Capabilities

  • Logs automatically generated and maintained
  • Ensure traceability of AI system functioning
  • Logging level appropriate to intended purpose

12.2: Minimum Retention

  • Logs retained for minimum 6 months (unless longer required by sector law)
  • Example: Financial services may require 7 years under MiFID II

12.3: Log Contents:

| Event Type | What to Log | Example |
|---|---|---|
| Input data | User queries, uploaded files | Customer question submitted to chatbot |
| AI outputs | Decisions, recommendations, scores | Recruitment AI recommends "Reject" |
| Human oversight | Human interventions, overrides | HR manager overrides "Reject" to "Interview" |
| System changes | Model updates, config changes | Upgraded from GPT-4 to GPT-4 Turbo |

SignalBreak Evidence: 🟢 Compliant — 7+ audit log entries recorded in the last 30 days (workflow changes, provider binding updates).

What SignalBreak Logs:

  • Workflow creation/modification/deletion
  • Provider binding changes (model selection, fallback configuration)
  • User actions (who made changes, when)

What SignalBreak Doesn't Log:

  • Individual AI requests (your chatbot conversations)
  • AI outputs (what the AI said to users)
  • End-user interactions (customer queries)

Action Required: For High-Risk workflows:

  1. Implement application-level logging (beyond SignalBreak governance logs):

    • Log every AI request (input, output, timestamp, user ID)
    • Retain for 6+ months (EU AI Act) or sector-specific requirement
    • Encrypt logs (personal data protection)
  2. Example logging architecture:

User Request → Your Application → OpenAI API

                 Log to Database:
                 - Timestamp: 2026-01-26T10:15:30Z
                 - User ID: user_12345
                 - Input: "Analyze this resume"
                 - Output: "Candidate score: 75/100, Recommend: Interview"
                 - Model: gpt-4-turbo
                 - Human Override: None
                 - Retention: 6 months from creation
  3. Implement log retention policy:
    • Automatic deletion after retention period
    • Backup for audit (authorities can request logs)
    • GDPR-compliant (purpose limitation, data minimization)
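A minimal sketch of the application-level logging described in step 1, assuming a JSON-lines file as the sink and a roughly 6-month retention stamp; a database or one of the cloud logging services listed below would follow the same shape.

```python
# Minimal application-level logging sketch for Article 12-style traceability:
# every AI request/response is appended to a local JSON-lines file with a
# retention stamp. Field names and the ~6-month retention are assumptions to
# adapt to your stack (a database or cloud log sink works the same way).
import json
import uuid
from datetime import datetime, timedelta, timezone
from pathlib import Path

LOG_FILE = Path("ai_request_log.jsonl")
RETENTION = timedelta(days=183)  # ~6 months; sector law may require longer

def log_ai_request(user_id: str, model: str, prompt: str, output: str,
                   human_override: str | None = None) -> dict:
    """Append one traceability record covering input, output and oversight."""
    now = datetime.now(timezone.utc)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": now.isoformat(),
        "user_id": user_id,
        "model": model,
        "input": prompt,
        "output": output,
        "human_override": human_override,
        "delete_after": (now + RETENTION).isoformat(),
    }
    with LOG_FILE.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return record

log_ai_request(
    user_id="user_12345",
    model="gpt-4-turbo",
    prompt="Analyze this resume",
    output="Candidate score: 75/100, Recommend: Interview",
)
```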

Tools:

  • Cloud logging: AWS CloudWatch, Google Cloud Logging, Azure Monitor
  • Self-hosted: ELK Stack (Elasticsearch, Logstash, Kibana), Graylog

Estimated effort:

  • Implementation: 1-2 weeks (application logging integration)
  • Ongoing: £500-2k/month (log storage costs for high-volume AI)

Article 13: Transparency and Information to Users

Requirement: Users must be informed they are interacting with a High-Risk AI system and understand its capabilities/limitations.

Transparency Obligations:

13.1: User Disclosure

  • Inform users AI system is being used
  • Explain system's purpose and capabilities
  • Disclose limitations and conditions where system may underperform

13.2: Information Content:

| What to Disclose | Example |
|---|---|
| Purpose | "This AI screens resumes to identify suitable candidates" |
| How it works | "AI analyses keywords, experience, education" |
| Capabilities | "Can process 1,000 resumes in 10 minutes" |
| Limitations | "May not understand unconventional career paths" |
| When it may fail | "Less accurate for non-English resumes" |
| Human oversight | "All AI recommendations reviewed by HR manager" |

13.3: Target Audience

  • Tailor disclosure to deployer (if B2B) or end user (if B2C)
  • Language: Clear, concise, appropriate for audience

SignalBreak Evidence: 🟢 Compliant — 2 chatbot/agent workflows identified; 6 of 6 workflows have transparency documentation (business context).

Gap: User disclosure mechanisms not implemented (no front-end banners, terms of service mentions).

Action Required: For High-Risk workflows:

  1. Implement user disclosure:

Example: Recruitment AI

╔═══════════════════════════════════════════════════════╗
║  🤖 AI-Assisted Recruitment                          ║
║                                                       ║
║  Your application will be screened by an AI system   ║
║  to identify suitable candidates. A human HR team    ║
║  member will review all AI recommendations before    ║
║  making final decisions.                             ║
║                                                       ║
║  Learn more: [Link to AI Policy]                     ║
╚═══════════════════════════════════════════════════════╝

Example: Credit Scoring AI

Your credit application will be assessed using automated decision-making.
The AI considers [list factors: income, credit history, etc.].
You have the right to:
- Request human review of the decision (GDPR Article 22)
- Access the logic behind the decision
- Contest the decision

Contact us: ai-decisions@company.com
  2. Update Terms of Service:

    • Add "AI Use Disclosure" section
    • List High-Risk AI systems
    • Explain rights (GDPR Article 22 right to human review)
  3. Create AI Transparency Page:

    • Public-facing page explaining AI use
    • Link from user-facing applications
    • Update annually or when AI changes

Estimated effort:

  • Implementation: 1-2 days (front-end banners, legal review)
  • Legal review: £1k-3k (external counsel for T&C updates)

Article 14: Human Oversight

Requirement: High-Risk AI systems must be designed for effective human oversight to prevent/minimize risks.

Human Oversight Requirements:

14.1: Design for Oversight

  • System enables human oversight measures
  • Humans can:
    • Fully understand AI capabilities/limitations
    • Monitor AI operation
    • Interpret AI outputs
    • Intervene or interrupt AI (stop button)
    • Disregard, override, or reverse AI outputs

14.2: Oversight Measures:

| Measure | Description | Example |
|---|---|---|
| Identify risks | Humans detect anomalies, errors | HR manager notices AI rejecting all candidates over 50 |
| Stop system | Ability to halt AI operation | Emergency stop for autonomous vehicle AI |
| Override outputs | Change AI decision | Credit officer overrides AI "Reject" decision |
| Competent humans | Trained, knowledgeable users | HR staff trained on AI bias, limitations |

14.3: Limits on Automation

  • Humans not overly reliant on AI
  • No "automation bias" (blind trust in AI)
  • Meaningful human oversight (not rubber-stamping)

SignalBreak Evidence: 🟡 Partial — 2 of 2 Critical workflows have human-in-the-loop controls; 3 of 6 workflows overall have human oversight.

Gap:

  • Only 50% of workflows have human oversight enabled
  • No documentation of oversight procedures
  • No training records for human overseers

Action Required: For each High-Risk workflow:

  1. Enable Human-in-Loop:

    • Set human_in_loop: true in SignalBreak workflow
    • Document oversight procedure
  2. Create Oversight Procedure:

Template: Human Oversight Procedure

Workflow: Recruitment Resume Screening AI
Human Overseer: HR Manager (Jane Smith)

Oversight Measures:
1. Review: HR Manager reviews ALL AI recommendations (Accept/Reject)
2. Override Authority: HR Manager can accept AI-rejected candidates
3. Stop Condition: If AI rejects >90% of candidates, halt screening and investigate
4. Training: HR Manager completed "AI Bias Awareness" training (Annual)
5. Escalation: Unusual patterns escalated to HR Director

Override Criteria:
- Candidate has unique skills not recognized by AI
- AI may have discriminated based on protected characteristic
- Human judgment indicates AI error

Monitoring:
- Weekly: Review override rate (target <10% of candidates)
- Monthly: Analyze AI performance vs. human decisions
- Quarterly: Retrain AI if drift detected
  3. Train Human Overseers:

    • AI limitations and failure modes
    • Bias awareness (protected characteristics)
    • When to override AI decisions
    • Record training attendance (audit evidence)
  4. Monitor Override Rates (see the sketch after this list):

    • Track how often humans override AI
    • Low override rate (<5%) → automation bias risk
    • High override rate (>30%) → AI underperforming, retrain
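A minimal sketch of the override-rate check in step 4, assuming decisions can be pulled from the Article 12 request log; the 5% and 30% thresholds mirror the guidance above and should be tuned to your context.

```python
# Sketch of the override-rate check: flags possible automation bias (too few
# overrides) or an underperforming model (too many). The decision log format
# is an assumption; thresholds follow the guidance above.
decisions = [
    {"ai_decision": "Reject", "final_decision": "Reject"},
    {"ai_decision": "Reject", "final_decision": "Interview"},   # human override
    {"ai_decision": "Interview", "final_decision": "Interview"},
    # ... in practice, pull from the Article 12 request log
]

overrides = sum(1 for d in decisions if d["ai_decision"] != d["final_decision"])
override_rate = overrides / len(decisions)

if override_rate < 0.05:
    status = "Possible automation bias: verify reviewers engage with AI outputs"
elif override_rate > 0.30:
    status = "AI may be underperforming: investigate and consider retraining"
else:
    status = "Override rate within expected range"

print(f"Override rate: {override_rate:.0%} ({overrides}/{len(decisions)}) - {status}")
```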

Estimated effort:

  • Procedure creation: 1-2 days per High-Risk workflow
  • Training: 2-4 hours per overseer (initial), 1 hour annual refresher
  • Monitoring: 2 hours/month per workflow

Article 15: Accuracy, Robustness, and Cybersecurity

Requirement: High-Risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity.

Accuracy Requirements:

15.1: Appropriate Accuracy

  • Performance levels appropriate to intended purpose
  • Eliminate/reduce risks to health, safety, fundamental rights
  • Trade-offs: Accuracy vs. fairness (bias mitigation may reduce accuracy)

15.2: Metrics and Thresholds:

| Metric | Definition | Example Threshold |
|---|---|---|
| Accuracy | % of correct predictions | Recruitment AI: >85% accuracy on test set |
| Precision | % of positive predictions that are correct | Fraud detection: >90% precision (low false positives) |
| Recall | % of actual positives correctly identified | Medical diagnosis: >95% recall (catch all diseases) |
| Fairness | Disparate impact across groups | <10% difference in acceptance rate by gender |

Robustness Requirements:

15.3: Resilience

  • Resistant to errors, faults, inconsistencies
  • Handles edge cases gracefully
  • Example: AI chatbot doesn't crash on unusual inputs

15.4: Adversarial Robustness

  • Resistant to manipulation attempts
  • Example: Recruitment AI detects resume keyword stuffing

Cybersecurity Requirements:

15.5: Security Measures

  • Protection against unauthorized access
  • Data poisoning defenses (malicious training data)
  • Model stealing prevention

SignalBreak Evidence: 🔴 Non-Compliant — No workflow-level accuracy/robustness testing documented.

Gap:

  • No performance metrics tracked
  • No adversarial testing conducted
  • Provider security (SOC 2) documented, but workflow-level security not assessed

Action Required: For each High-Risk workflow:

  1. Define Performance Thresholds:

| Workflow | Metric | Threshold | Rationale |
|---|---|---|---|
| Recruitment AI | Accuracy | >85% | Balance accuracy vs. fairness |
| Recruitment AI | Fairness (gender) | <10% disparity | Legal requirement (Equality Act 2010) |
| Credit Scoring | Precision | >90% | Minimize false rejections (customer satisfaction) |
| Credit Scoring | Recall | >80% | Minimize false approvals (credit risk) |

  2. Conduct Performance Testing (a scoring sketch follows this list):

    • Create test dataset (representative of real users)
    • Run AI predictions on test set
    • Calculate metrics (accuracy, precision, recall, fairness)
    • Document results in Technical Documentation (Article 11)
  3. Implement Monitoring:

    • Track performance metrics in production
    • Alert if metrics drop below thresholds
    • Retrain model if performance degrades
  4. Adversarial Testing:

    • Red team: Attempt to manipulate AI outputs
    • Example: Recruitment AI — submit keyword-stuffed resume
    • Document vulnerabilities and mitigations
  5. Cybersecurity Assessment:

    • Penetration testing of AI endpoints
    • Access control review (who can modify AI?)
    • Data encryption (training data, model weights, logs)
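A minimal sketch of the threshold check in step 2, assuming a small labelled test set held in memory; the metrics and thresholds mirror the table above, and a real evaluation would use a representative held-out dataset and the fairness/monitoring tools listed below.

```python
# Minimal Article 15-style threshold check: computes accuracy and a gender
# acceptance-rate disparity from labelled test predictions and compares them
# to the thresholds defined above. Data layout and thresholds are illustrative.
test_results = [
    {"predicted": "accept", "actual": "accept", "gender": "female"},
    {"predicted": "reject", "actual": "accept", "gender": "female"},
    {"predicted": "accept", "actual": "accept", "gender": "male"},
    {"predicted": "reject", "actual": "reject", "gender": "male"},
    # ... replace with a representative held-out test set
]

accuracy = sum(r["predicted"] == r["actual"] for r in test_results) / len(test_results)

def acceptance_rate(group: str) -> float:
    """Share of candidates in a group that the model accepts."""
    rows = [r for r in test_results if r["gender"] == group]
    return sum(r["predicted"] == "accept" for r in rows) / len(rows)

disparity = abs(acceptance_rate("female") - acceptance_rate("male"))

print(f"Accuracy: {accuracy:.0%} (threshold > 85%: {'PASS' if accuracy > 0.85 else 'FAIL'})")
print(f"Gender disparity: {disparity:.0%} (threshold < 10%: {'PASS' if disparity < 0.10 else 'FAIL'})")
```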

Tools:

  • Performance monitoring: ML ops platforms (MLflow, Weights & Biases, Kubeflow)
  • Fairness testing: Fairlearn (Microsoft), AI Fairness 360 (IBM), Aequitas
  • Adversarial testing: CleverHans, Adversarial Robustness Toolbox (ART)

Estimated effort:

  • Initial testing: 2-4 weeks per High-Risk workflow
  • Ongoing monitoring: £500-2k/month (tooling + staff time)
  • Annual security assessment: £5k-15k (external pen test)

Title IV: Transparency Obligations for Limited-Risk AI

Article 52: Transparency Obligations for Certain AI Systems

Requirement: Limited-risk AI systems (chatbots, emotion recognition, biometric categorization, deepfakes) must disclose AI use to users.

Limited-Risk Categories:

52.1: Chatbots and Conversational AI

  • Users must be informed they are interacting with AI
  • Exception: Obvious from context (e.g., branded AI assistant)
  • Penalty: €15M or 1.5-3% of global turnover (transparency violations fall under the general non-compliance tier)

52.2: Emotion Recognition Systems

  • Inform users when AI detects emotions
  • Example: Call center AI analyzing customer frustration

52.3: Biometric Categorization

  • Inform users when AI infers demographics (age, gender, race)
  • Example: Retail AI analyzing shopper demographics

52.4: AI-Generated Content (Deepfakes)

  • Watermark AI-generated images, video, audio
  • Disclose AI use prominently
  • Exception: Creative works (art, satire) with disclosure

SignalBreak Evidence: 🟡 Partial — 2 chatbot/agent workflows identified, but user disclosure mechanisms not fully implemented.

Gap:

  • No front-end disclosure banners
  • No watermarking for AI-generated content
  • Terms of Service don't mention AI use

Action Required: For chatbots:

  1. Implement disclosure banner:

<!-- Example: Front-end chatbot widget -->
<div class="ai-disclosure-banner">
  <span>🤖</span> You're chatting with an AI assistant.
  <a href="/ai-policy">Learn more</a>
</div>

  2. Exception assessment:
    • Is AI use "obvious from context"?
    • If branded "AI Assistant" → may not need disclosure
    • If looks like human (no branding) → disclosure required

For emotion recognition:

  1. Add disclosure:
    • Before analysis: "This call may be analyzed for quality assurance and customer satisfaction" (mention AI if used)
    • During call: "Your tone indicates frustration. Would you like to speak to a supervisor?"

For AI-generated content:

  1. Implement watermarking:
    • Images: Embed metadata (EXIF tag: "AI-generated")
    • Video: C2PA standard (Coalition for Content Provenance and Authenticity)
    • Audio: Disclose in description/caption
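For the EXIF option above, a minimal sketch using Pillow to write an "AI-generated" label into image metadata, assuming Pillow is installed and the file names are placeholders; metadata labels complement rather than replace C2PA provenance.

```python
# Sketch: embed an "AI-generated" label in image metadata using Pillow's EXIF
# support. Lightweight complement to C2PA provenance, not a substitute;
# file names are placeholders.
from PIL import Image

def label_ai_image(src: str, dst: str) -> None:
    """Copy an image and write 'AI-generated' into the EXIF ImageDescription tag."""
    img = Image.open(src).convert("RGB")
    exif = Image.Exif()
    exif[0x010E] = "AI-generated"  # 0x010E = ImageDescription
    img.save(dst, format="JPEG", exif=exif)

label_ai_image("generated.png", "generated_labeled.jpg")
```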

Estimated effort:

  • Chatbot disclosure: 1-2 hours (front-end banner)
  • Emotion recognition: 1 day (legal review, user notification)
  • Watermarking: 1-2 weeks (implement C2PA standard)

Title VIII: Post-Market Monitoring

Article 72: Post-Market Monitoring System

Requirement: High-Risk AI providers must establish post-market monitoring to collect and analyze data on AI performance after deployment.

Post-Market Monitoring Obligations:

72.1: Monitoring Plan

  • Document systematic procedure for monitoring
  • Collect data on AI performance in real-world use
  • Identify trends indicating safety/rights risks

72.2: Data Collection:

| Data Type | What to Collect | Example |
|---|---|---|
| Performance metrics | Accuracy, precision, recall in production | Recruitment AI: 82% accuracy (below 85% threshold) |
| Incidents | Failures, errors, unexpected behavior | Credit scoring AI: 15 false rejections last month |
| User feedback | Complaints, concerns, suggestions | Customers report chatbot gives incorrect tax advice |
| Context changes | Drift in user population, use cases | Recruitment AI now used for senior roles (originally designed for entry-level) |

72.3: Serious Incidents (Article 73)

  • Report serious incidents to authorities within 15 days
  • Serious incident: Death, serious health damage, fundamental rights violation
  • Example: Biometric AI wrongly identifies person as terrorist → arrest

SignalBreak Evidence: 🟡 Partial — Audit logging in place (Article 12), provider health monitoring active (5,000+ signals), but systematic post-market monitoring procedures not established.

Gap:

  • No formal post-market monitoring plan
  • No incident reporting procedure
  • Provider monitoring (external) exists, but workflow performance monitoring (internal) missing

Action Required: For each High-Risk workflow:

  1. Create Post-Market Monitoring Plan:

Template:

Workflow: Recruitment Resume Screening AI
Monitoring Responsible: HR Director

Data Collection:
- Daily: Accuracy metrics (from application logs)
- Weekly: User feedback (HR team surveys)
- Monthly: Bias analysis (gender/age/race disparities)
- Quarterly: Performance review vs. initial test results

Incident Definition:
- Serious: AI discriminates against protected group (e.g., rejects all candidates over 50)
- Moderate: Accuracy drops below 85% threshold
- Minor: Isolated errors (1-2 incorrect scores per week)

Escalation:
- Serious → Report to authorities within 15 days + halt system
- Moderate → Investigate root cause, retrain model if needed
- Minor → Log and monitor for patterns

Review Frequency:
- Monthly: Review monitoring data
- Quarterly: Update monitoring plan if context changes
  2. Implement Incident Reporting (a triage sketch follows this list):

    • Create incident response playbook
    • Train staff on serious incident criteria
    • Designate authority contact (EU Member State AI authority)
  3. Establish Feedback Loop:

    • Collect user feedback (HR team, end users)
    • Analyze feedback for patterns
    • Update AI system based on learnings (continuous improvement)
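A minimal sketch of the incident triage in step 2, assuming monitoring events arrive as simple dictionaries; the severity categories and the 15-day reporting deadline follow the template above.

```python
# Sketch of the incident triage logic from the monitoring plan above: maps an
# observed event to Serious/Moderate/Minor and the corresponding action. The
# event format and the 0.85 accuracy threshold are illustrative assumptions.
from datetime import date, timedelta

def triage_incident(event: dict) -> dict:
    """Classify a monitoring event and return the required response."""
    if event.get("fundamental_rights_impact") or event.get("harm_to_persons"):
        return {
            "severity": "Serious",
            "action": "Halt system; report to the market surveillance authority",
            "report_by": (date.today() + timedelta(days=15)).isoformat(),
        }
    if event.get("accuracy", 1.0) < 0.85:
        return {"severity": "Moderate",
                "action": "Investigate root cause; retrain if needed"}
    return {"severity": "Minor", "action": "Log and monitor for patterns"}

print(triage_incident({"accuracy": 0.82}))
print(triage_incident({"fundamental_rights_impact": True}))
```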

Estimated effort:

  • Plan creation: 1-2 days per High-Risk workflow
  • Ongoing monitoring: 4-8 hours/month per workflow (data analysis, reporting)

EU AI Act Compliance Roadmap

Immediate (Next 30 Days) — Critical

Priority 1: Risk Classification (Article 6)

Action: Classify all AI workflows as Prohibited/High-Risk/Limited-Risk/Minimal-Risk

Steps:

  1. Review all workflows against Annex III categories
  2. Create Risk Classification Register (see Article 6 template above)
  3. Document justification for each classification
  4. Identify High-Risk workflows requiring full compliance

Deliverable: Risk Classification Register (PDF/spreadsheet)

Owner: Chief AI Officer, Legal Counsel

Effort: 1-2 weeks (includes legal review)


Priority 2: Prohibited AI Audit (Article 5)

Action: Verify no workflows violate prohibited AI practices

Steps:

  1. Review Article 5 prohibited practices (see table above)
  2. Self-certify compliance (all "No" answers)
  3. Document audit results

Deliverable: Article 5 Compliance Certification (1-page doc)

Owner: Compliance Officer

Effort: 2-4 hours


Priority 3: Data Governance Documentation (Article 10)

Action: Complete data governance documentation for all providers

Steps:

  1. Request data sheets from providers (OpenAI, Anthropic, etc.)
  2. For High-Risk AI: Conduct bias audit on training data
  3. Document GDPR compliance for personal data processing

Deliverable: Provider Data Governance Package (per provider)

Owner: Data Protection Officer (DPO)

Effort: 1-2 weeks (depends on provider responsiveness)


Short-Term (Next 90 Days) — Important

Priority 4: Implement Fallback Mechanisms (Article 9)

Action: Configure fallback providers for all Critical and High-Risk workflows

Steps:

  1. Add fallback provider bindings in SignalBreak (see Article 9 risk mitigation)
  2. Test failover procedures (simulate primary provider outage)
  3. Document fallback strategy in Risk Register

Deliverable: Fallback configuration + test results

Owner: Engineering Manager

Effort: 1-2 weeks (implementation + testing)


Priority 5: User Disclosure Procedures (Article 13, Article 52)

Action: Implement user disclosure for chatbot/agent systems and High-Risk AI

Steps:

  1. Add front-end disclosure banners (see Article 13 examples)
  2. Update Terms of Service (AI Use Disclosure section)
  3. Create AI Transparency Page (public-facing)

Deliverable: Disclosure implementation + legal T&C update

Owner: Product Manager, Legal Counsel

Effort: 1-2 weeks (front-end dev + legal review)


Priority 6: Post-Market Monitoring Framework (Article 72)

Action: Establish post-market monitoring for High-Risk AI

Steps:

  1. Create Post-Market Monitoring Plan (see Article 72 template)
  2. Implement monitoring dashboards (accuracy, feedback, incidents)
  3. Train staff on incident reporting (serious incidents → 15-day deadline)

Deliverable: Post-Market Monitoring Plan + dashboards

Owner: AI Operations Lead

Effort: 2-4 weeks (tooling + process documentation)


Medium-Term (Next 180 Days) — Comprehensive

Priority 7: Technical Documentation Packages (Article 11)

Action: Create comprehensive technical documentation for High-Risk AI

Steps:

  1. Use EU AI Act Technical Documentation Template (EC website)
  2. Complete all 8 sections (see Article 11 requirements)
  3. Store securely (10-year retention requirement)

Deliverable: Technical Documentation Package (per High-Risk workflow)

Owner: AI Product Manager, Technical Writer

Effort: 2-4 weeks per High-Risk workflow


Priority 8: Quality Management System (Article 17)

Action: Implement quality management system (QMS) for High-Risk AI

Steps:

  1. Adopt existing QMS framework (ISO 9001, ISO 13485) or create EU AI Act-specific QMS
  2. Document quality policies, procedures, controls
  3. Conduct internal audits (quarterly)

Deliverable: QMS documentation + audit reports

Owner: Quality Manager, Compliance Officer

Effort: 2-3 months (substantial organizational change)


Priority 9: Conformity Assessment (Article 43)

Action: Conduct conformity assessment for High-Risk AI (self-assessment or third-party)

Steps:

  1. Self-assessment: Internal audit against all EU AI Act requirements (Articles 9-15)
  2. Third-party assessment: Engage Notified Body (for Annex III listed systems) or accredited assessor
  3. Issue Declaration of Conformity (DoC)

Deliverable: Conformity Assessment Report + Declaration of Conformity

Owner: Compliance Officer, External Assessor

Effort: 1-2 months (self-assessment), 3-6 months (third-party)

Cost: £0 (self), £20k-60k (third-party)


Regulatory Risk Assessment

Current Risk Exposure (Typical Organization Without Compliance)

Critical Compliance Gaps:

| Gap | Article | Penalty Exposure | Likelihood (if audited) |
|---|---|---|---|
| No High-Risk classification | Article 6 | €15M or 3% turnover | 90% (easy to detect) |
| Missing risk management | Article 9 | €15M or 3% turnover | 80% (high-risk AI without risk register) |
| Incomplete data governance | Article 10 | €15M or 3% turnover | 70% (provider docs exist, but bias audit missing) |
| No technical documentation | Article 11 | €15M or 3% turnover | 60% (some docs exist, but not EU AI Act format) |

Total Worst-Case Exposure: €60M or 12% of global annual turnover (if multiple violations)

Realistic First-Audit Penalty: €1M-5M (authorities typically issue warnings first, fines for repeated violations)


Probability of Audit

Enforcement Timeline:

| Phase | Timing | Audit Probability |
|---|---|---|
| Grace period | Feb 2025 - Aug 2026 | Low (5-10%) — authorities focused on education |
| Initial enforcement | Aug 2026 - Aug 2027 | Medium (20-40%) — spot checks, complaint-driven |
| Steady state | Aug 2027+ | High (50-70%) for high-risk AI in regulated sectors |

Audit Triggers:

| Trigger | Probability | Example |
|---|---|---|
| User complaint | High | Candidate reports discriminatory recruitment AI |
| Whistleblower | Medium | Employee reports non-compliant AI to authorities |
| Sector sweep | Medium | Financial regulator audits all banks' credit scoring AI |
| Random selection | Low | Statistical sampling by Member State authority |
| Serious incident | Very High | AI causes harm → mandatory investigation |

Highest-Risk Sectors for Audit:

  1. Finance (credit scoring, fraud detection)
  2. Employment (recruitment, HR analytics)
  3. Healthcare (diagnostic AI, treatment recommendations)
  4. Law enforcement (predictive policing, case analysis)
  5. Education (student assessment, admissions)

Estimated Compliance Investment

Cost Breakdown:

| Activity | Internal Effort | External Costs | Timeline |
|---|---|---|---|
| Legal counsel (EU AI Act specialist) | 20-40 hours | £10k-25k | Ongoing (initial + annual) |
| Risk classification | 40-80 hours | £0 | 30 days |
| Data governance | 40-80 hours | £5k-15k (bias audits) | 90 days |
| Technical documentation | 80-160 hours | £0-10k (templates, tools) | 180 days |
| Human oversight implementation | 80-160 hours | £10k-30k (training, tooling) | 90 days |
| Post-market monitoring | 40-80 hours | £5k-15k (dashboards, analytics) | 90 days |
| Conformity assessment | 80-160 hours | £20k-60k (third-party assessor) | 180 days |

Total Estimated Investment:

| Organization Size | Internal Hours | External Costs | Total (loaded cost) |
|---|---|---|---|
| SME (<100 employees, 5-10 AI workflows) | 400-800 hours | £50k-155k | £100k-£250k |
| Mid-market (100-1,000 employees, 10-30 workflows) | 800-1,600 hours | £100k-300k | £200k-£500k |
| Enterprise (1,000+ employees, 30+ workflows) | 1,600-3,200 hours | £200k-600k | £400k-£1M |

Assumptions:

  • Loaded internal cost: £100/hour (salary + overhead)
  • External specialist rates: £200-400/hour (legal, consulting)
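As a worked example of these assumptions, the sketch below computes the SME row's loaded total (internal hours at £100/hour plus the external cost range); the figures are the planning assumptions above, not quotes.

```python
# Worked example of the loaded-cost arithmetic behind the SME row above:
# internal hours x £100/hour plus the external range. Planning assumptions only.
LOADED_RATE = 100  # £ per internal hour (salary + overhead)

def total_range(hours_low, hours_high, ext_low, ext_high):
    """Return the (low, high) total compliance cost for one organization size."""
    return (hours_low * LOADED_RATE + ext_low, hours_high * LOADED_RATE + ext_high)

low, high = total_range(400, 800, 50_000, 155_000)
print(f"SME estimate: £{low:,} - £{high:,}")  # ~£90k-£235k, i.e. roughly the £100k-£250k shown
```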

Next Steps

  1. Legal Review: Engage EU AI Act specialist counsel within 7 days

    • Validate risk classification approach
    • Review compliance gaps specific to your industry
    • Assess liability exposure
  2. Risk Classification Workshop: Complete systematic review within 21 days

    • Convene cross-functional team (Legal, Compliance, Engineering, Product)
    • Map all AI workflows to Annex III categories
    • Document High-Risk systems requiring full compliance
  3. Documentation Remediation: Address critical gaps within 30 days

    • Complete provider data governance documentation
    • Self-certify Article 5 prohibited AI compliance
    • Create Risk Classification Register
  4. Technical Controls: Implement required safeguards within 60-90 days

    • Fallback mechanisms for High-Risk workflows
    • User disclosure for chatbots and High-Risk AI
    • Logging and record-keeping systems
  5. Ongoing Monitoring: Establish quarterly compliance assessment schedule

    • Post-market monitoring reviews
    • Incident reporting drills
    • Annual technical documentation updates

Common Questions

Does the EU AI Act apply to my organization if I'm based outside the EU?

Yes, if:

  • ✅ You provide AI systems placed on the EU market (sold to EU customers)
  • ✅ Your AI outputs are used in the EU (even if system hosted elsewhere)
  • ✅ You're a deployer using third-party AI for EU operations

No, if:

  • ❌ You only operate in non-EU markets with no EU customers
  • ❌ Your AI is used exclusively for military, defense, national security (Article 2.3 exemption)
  • ❌ Your AI is for research/development only (not placed on market)

Example:

  • US SaaS company with EU customers → Covered
  • UK fintech using AI for UK-only lending → Not covered, provided neither the system nor its outputs are used in the EU
  • Indian outsourcing firm providing AI services to an EU client → Covered (the EU client has deployer obligations, but provider obligations fall on the firm supplying the system)

Can SignalBreak alone get me EU AI Act compliant?

No. SignalBreak provides ~50-60% of evidence, but EU AI Act requires:

What SignalBreak provides:

  • ✅ AI system inventory (Article 11)
  • ✅ Provider monitoring (Article 72 post-market monitoring)
  • ✅ Audit logs (Article 12 record-keeping)
  • ✅ Workflow categorization (helps with Article 6 classification)

What you still need:

  • Formal High-Risk classification (Article 6) — Legal review required
  • Risk management system (Article 9) — Risk register, mitigation plans
  • Data governance (Article 10) — Bias audits, GDPR compliance
  • Technical documentation (Article 11) — Comprehensive docs per EU format
  • User disclosure (Article 13, 52) — Front-end banners, T&C updates
  • Conformity assessment (Article 43) — Self-assessment or third-party audit

Analogy: SignalBreak is like GitHub for EU AI Act — it tracks your AI systems and changes, but you still need compliance processes and legal review.


What happens if I classify my AI as "Minimal-Risk" and authorities disagree?

Risk: If authorities determine your AI is actually High-Risk (Annex III listed), you could face:

  • €15M or 3% of global turnover (non-compliance with High-Risk obligations)
  • Retroactive compliance requirements (implement Articles 9-15 immediately)
  • Market ban (until compliance demonstrated)

Mitigation:

  1. Conservative classification: When in doubt, classify as High-Risk
  2. Legal review: Engage EU AI Act specialist to validate classification
  3. Document justification: Explain why Minimal-Risk (burden of proof on you)
  4. Periodic review: Re-classify as business context changes

Gray Area Example:

  • Recruitment AI screening resumes (NOT final hiring decision)
    • High-Risk interpretation: Annex III #4 "Employment" includes resume screening (affects access to employment)
    • Minimal-Risk interpretation: Only affects candidate pool, humans make final decision (not safety component)
    • Recommendation: Classify as High-Risk (conservative approach)

Best Practice: Engage Member State authority for advance guidance (some countries offer "sandbox" programs for compliance support).


How does GDPR relate to the EU AI Act?

Relationship:

| Law | Scope | Focus |
|---|---|---|
| GDPR | Personal data processing | Privacy, data protection rights |
| EU AI Act | AI systems (regardless of personal data) | Safety, fundamental rights, trustworthiness |

Overlap:

| AI Act Provision | GDPR Connection |
|---|---|
| Article 10 (Data Governance) | Must comply with GDPR for personal data in training datasets |
| Article 13 (Transparency) | Builds on GDPR Articles 13-14 (right to information) |
| High-Risk AI making automated decisions | GDPR Article 22 (automated decision-making) compliance also required |

Complementary Obligations:

  • GDPR: Right to human review of automated decision (Article 22.3)
  • EU AI Act: Human oversight for High-Risk AI (Article 14)
  • Practical result: High-Risk AI involving personal data requires BOTH GDPR + EU AI Act compliance

Example: Recruitment AI

  • EU AI Act: Classify as High-Risk (Annex III #4), implement Articles 9-15
  • GDPR: Provide candidates with right to contest automated decision, explain logic (Article 22)
  • Combined: Human oversight (EU AI Act) + right to human review (GDPR)

What's the timeline for authorities to start enforcing the EU AI Act?

Enforcement Phases:

| Phase | Timeline | Authority Focus | Penalty Likelihood |
|---|---|---|---|
| Grace period | Feb 2025 - Aug 2026 | Education, guidance, warnings | Low (warnings, no fines) |
| Initial enforcement | Aug 2026 - Aug 2027 | Complaint-driven audits, sector sweeps | Medium (first fines for egregious violations) |
| Full enforcement | Aug 2027+ | Proactive audits, annual inspections | High (systematic enforcement) |

Member State Readiness:

  • Germany, France, Netherlands: Likely early enforcers (strong regulatory capacity)
  • Southern/Eastern Europe: May lag (resource constraints)
  • Commission role: Can initiate infringement proceedings against Member States for under-enforcement

Practical Advice:

  • By Aug 2026: High-Risk AI must be compliant (24 months after entry into force in August 2024)
  • By Aug 2027: All provisions enforceable (authorities can audit any time)
  • Best practice: Achieve compliance by Q2 2026 to avoid rushed implementation


External Resources


Last updated: 2026-01-26 · Based on: Regulation (EU) 2024/1689 (entered into force August 1, 2024), phased implementation timeline

⚠️ Legal Disclaimer: This guide provides general information about the EU Artificial Intelligence Act. It does not constitute legal advice or regulatory compliance certification. The EU AI Act imposes significant penalties for non-compliance (up to €35 million or 7% of global annual turnover, whichever is higher). Consult qualified legal counsel and EU regulatory experts for comprehensive compliance guidance specific to your organization and jurisdiction.

AI Governance Intelligence