Updated April 2026 · Regulation (EU) 2024/1689

EU AI Act Compliance Checklist

All 27 compliance obligations under the EU AI Act (Regulation (EU) 2024/1689), mapped by article, effort, and deadline. Most of these obligations become enforceable on August 2, 2026; GPAI model obligations have applied since August 2, 2025.

High-Risk AI Systems (Annex III)

Providers of high-risk AI systems listed in Annex III must meet all 12 of these obligations before placing their system on the market or putting it into service.

Article 9 · Risk Management System
HIGH effort

Establish, implement, document and maintain a risk management system throughout the AI lifecycle. Must identify and analyse known and foreseeable risks, estimate and evaluate risks that may emerge, and adopt risk management measures.

Deadline: Before deployment

Article 10 · Data and Data Governance
HIGH effort

Training, validation and testing data must meet quality criteria. Must address relevant design choices, data collection processes, data preparation operations, and examination for possible biases (see the sketch below).

Deadline: Before deployment
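
Where the bias examination is done in code, it often reduces to per-group statistics over the training set. A minimal Python sketch, assuming a pandas DataFrame with hypothetical `group` and `label` columns; the four-fifths threshold is an illustrative heuristic, not anything the Act prescribes:

```python
# Minimal sketch: per-group positive-label-rate check on a training set.
# Column names ("group", "label") and the 0.8 disparity threshold are
# illustrative assumptions, not requirements taken from the Act.
import pandas as pd

def examine_label_rates(df: pd.DataFrame, group_col: str = "group",
                        label_col: str = "label") -> pd.DataFrame:
    """Report positive-label rates per group and flag large disparities."""
    rates = df.groupby(group_col)[label_col].agg(["mean", "count"])
    rates["disparity_vs_max"] = rates["mean"] / rates["mean"].max()
    # Flag groups whose positive rate falls below 80% of the best-off
    # group (the "four-fifths" heuristic, used here purely as an example).
    rates["flagged"] = rates["disparity_vs_max"] < 0.8
    return rates

if __name__ == "__main__":
    data = pd.DataFrame({
        "group": ["a", "a", "a", "b", "b", "b"],
        "label": [1, 1, 0, 1, 0, 0],
    })
    print(examine_label_rates(data))
```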

Article 11 + Annex IV · Technical Documentation
HIGH effort

Prepare comprehensive technical documentation before placing on market. Must include general description, detailed description of elements and development process, monitoring/functioning/control information, and validation/testing details.

Deadline: Before deployment

Article 12 · Record-keeping and Logging
MEDIUM effort

Automatically log events throughout the system's lifecycle. Logs must enable monitoring of operation, facilitate post-market monitoring, and support investigation of incidents (see the sketch below).

Deadline: Before deployment
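
One common way to meet this in practice is append-only structured logging of every inference. A minimal Python sketch; the field names and file-based handler are illustrative assumptions, since the Act specifies the goals of logging rather than a schema:

```python
# Minimal sketch of Article 12-style event logging: each inference is
# recorded as a timestamped JSON line. The schema is illustrative.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system.events")
logger.setLevel(logging.INFO)
handler = logging.FileHandler("ai_events.log")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)

def log_inference_event(system_id: str, input_ref: str,
                        output_ref: str, model_version: str) -> None:
    """Append one JSON record per inference to an append-only log."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_ref": input_ref,    # reference to inputs, not raw data
        "output_ref": output_ref,
    }))

log_inference_event("credit-scoring-v2", "req-7f3a", "resp-7f3a", "2.4.1")
```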

Article 13 · Transparency and Information
MEDIUM effort

Ensure the AI system is sufficiently transparent for deployers to interpret its output. Provide instructions for use including the identity of the provider, system capabilities and limitations, performance metrics, known risks, and human oversight measures.

Deadline: Before deployment

Article 14 · Human Oversight
HIGH effort

Design and develop systems to allow effective human oversight. Overseers must be able to properly understand capabilities and limitations, monitor operation, interpret outputs, and override, interrupt or stop the system (see the sketch below).

Deadline: Before deployment
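
In engineering terms, oversight often takes the form of a review gate in front of the system's outputs. A minimal Python sketch, where the risk threshold, review queue, and halt flag are illustrative assumptions; Article 14 mandates the capability, not a particular mechanism:

```python
# Minimal sketch of a human-oversight gate: outputs above an illustrative
# risk threshold are withheld until a reviewer approves, and a reviewer
# can always halt the system entirely.
from dataclasses import dataclass
from queue import Queue

@dataclass
class Decision:
    request_id: str
    output: str
    risk_score: float  # assumed to be produced upstream by the model

REVIEW_THRESHOLD = 0.7                 # illustrative, not from the Act
review_queue: Queue[Decision] = Queue()
system_halted = False                  # reviewers can flip this to stop

def route_decision(d: Decision) -> str:
    if system_halted:
        return "halted"
    if d.risk_score >= REVIEW_THRESHOLD:
        review_queue.put(d)            # human must approve or override
        return "pending_human_review"
    return "auto_released"

print(route_decision(Decision("req-1", "approve loan", 0.91)))  # pending
print(route_decision(Decision("req-2", "approve loan", 0.20)))  # auto
```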

Article 15 · Accuracy, Robustness and Cybersecurity
HIGH effort

Achieve appropriate levels of accuracy, robustness, and cybersecurity. Systems must be resilient to errors, faults, inconsistencies, and adversarial attacks (a minimal robustness check is sketched below).

Deadline: Before deployment
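
Robustness is commonly smoke-tested by checking that small input perturbations do not flip a decision. A minimal Python sketch with a stand-in linear model; the epsilon, trial count, and stability criterion are illustrative, as the Act requires "appropriate" robustness without prescribing a test:

```python
# Minimal robustness smoke test: verify that small random perturbations
# of an input do not change the model's decision. The model, epsilon,
# and criterion are illustrative assumptions.
import numpy as np

def predict(x: np.ndarray) -> int:
    """Stand-in for the real model: a fixed linear classifier."""
    w = np.array([0.5, -1.0, 0.25])
    return int(x @ w > 0)

def stable_under_noise(x: np.ndarray, eps: float = 0.01,
                       trials: int = 100, seed: int = 0) -> bool:
    rng = np.random.default_rng(seed)
    base = predict(x)
    return all(predict(x + rng.uniform(-eps, eps, size=x.shape)) == base
               for _ in range(trials))

sample = np.array([1.0, 0.2, 0.4])
print("stable:", stable_under_noise(sample))
```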

Article 43 · Conformity Assessment
HIGH effort

Conduct a conformity assessment before placing on the market. Most high-risk AI systems qualify for internal conformity assessment; certain biometric systems (Annex III, point 1) may instead require third-party assessment by a notified body.

Deadline: Before deployment

Article 47 · EU Declaration of Conformity
LOW effort

Draw up a written EU declaration of conformity for each high-risk AI system. It must contain the information set out in Annex V and be kept up to date.

Deadline: Before deployment

Article 48 · CE Marking
LOW effort

Affix CE marking to high-risk AI systems or their documentation to indicate compliance with the regulation.

Deadline: Before deployment

Article 49 · Registration in EU Database
LOW effort

Register the high-risk AI system in the EU database before placing it on the market. Deployers of certain high-risk AI systems must also register.

Deadline: Before deployment

Article 72 · Post-market Monitoring
MEDIUM effort

Establish and document a post-market monitoring system. Collect, document, and analyse data on performance throughout the system's lifetime (a minimal aggregation sketch follows).

Deadline: Ongoing after deployment
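
A post-market monitoring system typically starts by aggregating operational logs into indicators. A minimal Python sketch that reads the JSON-lines log produced by the Article 12 sketch above; the indicators chosen are illustrative:

```python
# Minimal sketch: aggregate the Article 12 event log into simple
# post-market indicators. The log format matches the logging sketch
# above; the indicators are illustrative, not prescribed.
import json
from collections import Counter

def summarise_log(path: str = "ai_events.log") -> dict:
    versions: Counter[str] = Counter()
    total = 0
    with open(path) as f:
        for line in f:
            event = json.loads(line)
            versions[event["model_version"]] += 1
            total += 1
    return {"total_events": total, "events_per_version": dict(versions)}

print(summarise_log())
```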

Provider and Deployer Obligations (Articles 16–27)

These baseline obligations fall on providers and deployers of high-risk AI systems, alongside the Annex III requirements above.

Article 17 · Quality Management System
HIGH effort

Providers of high-risk AI systems must implement a quality management system covering strategy, design, development, testing, post-market monitoring, and documentation.

Deadline: Before deployment

Article 18 · Documentation Retention
LOW effort

Keep technical documentation and quality management records for at least 10 years after the AI system is placed on the market or put into service.

Deadline: Ongoing

Article 20 · Corrective Actions and Incident Reporting
MEDIUM effort

Take immediate corrective actions for non-compliant AI systems and inform distributors and deployers. Serious incidents must also be reported to the relevant market surveillance authorities (Article 73).

Deadline: Ongoing after deployment

Article 21 · Cooperation with Authorities
LOW effort

Cooperate with competent national authorities upon request. Provide all necessary information and documentation to demonstrate conformity with the regulation.

Deadline: Ongoing

Article 26 · Deployer Obligations
MEDIUM effort

Deployers must use AI systems in accordance with instructions, assign human oversight to competent persons, monitor operation, and report incidents to the provider.

Deadline: Before deployment

Article 27 · Fundamental Rights Impact Assessment
HIGH effort

Deployers of certain high-risk AI systems must conduct a fundamental rights impact assessment before first use and notify the market surveillance authority of the results.

Deadline: Before deployment

GPAI Model Obligations (Articles 53–55)

Organisations that provide general-purpose AI models (including, in some cases, fine-tuned versions) have additional documentation and transparency duties.

Article 53(1)(a) · GPAI Technical Documentation
HIGH effort

Providers of general-purpose AI models must draw up and maintain technical documentation including training process, data used, evaluation results, and energy consumption.

Deadline: Before making available

Article 53(1)(b) · GPAI Instructions for Use
MEDIUM effort

Provide information and documentation to downstream providers to enable compliance. Must include capabilities, limitations, and integration guidance.

Deadline: Before making available

Article 53(1)(c) · Copyright Compliance Policy
MEDIUM effort

Establish a policy to comply with EU copyright law, including reservations of rights by rightsholders under Article 4(3) of Directive (EU) 2019/790.

Deadline: Before making available

Article 53(1)(d) · Training Data Summary
MEDIUM effort

Publish a sufficiently detailed summary of the content used for training the GPAI model, using the template provided by the AI Office.

Deadline: Before making available

Article 55(1)(a) · Systemic Risk Evaluation
HIGH effort

Providers of GPAI models with systemic risk must perform model evaluations, including adversarial testing, to identify and mitigate systemic risks at Union level.

Deadline: Ongoing

Article 55(1)(c) · GPAI Incident Reporting
MEDIUM effort

Report serious incidents and possible corrective measures to the AI Office (and, as appropriate, to national competent authorities) without undue delay after becoming aware of them.

Deadline: Ongoing after deployment

Limited Risk — Transparency Obligations (Article 50)

AI systems that interact with users (chatbots, emotion recognition, synthetic media) must meet these minimum transparency requirements.

Article 50(1) · Chatbot Disclosure
LOW effort

Users interacting with an AI chatbot must be informed that they are interacting with an AI system, unless this is obvious from the context (see the sketch below).

Deadline: Immediately upon deployment
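
In a chat product this usually means a one-time notice at the start of a session. A minimal Python sketch; the wording and session handling are illustrative assumptions, not prescribed text:

```python
# Minimal sketch of an Article 50(1) disclosure: prepend a one-time
# notice to the first assistant reply in a session. Wording and session
# state are illustrative.
DISCLOSURE = "You are chatting with an AI system, not a human."

def first_reply(session_state: dict, model_reply: str) -> str:
    if not session_state.get("disclosed", False):
        session_state["disclosed"] = True
        return f"{DISCLOSURE}\n\n{model_reply}"
    return model_reply

state: dict = {}
print(first_reply(state, "Hi! How can I help?"))   # includes the notice
print(first_reply(state, "Sure, here's that."))    # notice not repeated
```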

Article 50(4) · Synthetic Content Labelling
LOW effort

AI-generated or manipulated image, audio or video content (deepfakes) must be disclosed as artificially generated or manipulated (a metadata-labelling sketch follows).

Deadline: Immediately upon deployment
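
One layer of a labelling strategy is machine-readable metadata embedded in the file itself. A minimal Python sketch using Pillow's PNG text chunks; the key names are illustrative, and a metadata tag alone may not be a sufficient disclosure where the label must reach the viewer:

```python
# Minimal sketch: embed an "AI-generated" label in a PNG's text metadata
# with Pillow. Key names are illustrative assumptions, not a standard;
# treat this as one layer of a disclosure strategy, not a complete one.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_label(img: Image.Image, path: str) -> None:
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("disclosure", "This image was generated by an AI system.")
    img.save(path, pnginfo=meta)

save_with_ai_label(Image.new("RGB", (64, 64), "white"), "labelled.png")
print(Image.open("labelled.png").text)  # {'ai_generated': 'true', ...}
```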

Article 50(3) · Emotion Recognition Disclosure
LOW effort

Users must be informed when they are subject to emotion recognition or biometric categorisation systems.

Deadline: Immediately upon deployment

Disclaimer: This checklist is for informational purposes and does not constitute legal advice. Consult a qualified legal professional for advice specific to your organisation and AI systems.