All 27 compliance obligations under the EU AI Act (Regulation (EU) 2024/1689), mapped by article, effort, and deadline. Enforcement of most provisions begins 2 August 2026.
Providers of high-risk AI systems must meet all 12 of these obligations before placing their system on the market.
Establish, implement, document and maintain a risk management system throughout the AI lifecycle. Must identify and analyse known and foreseeable risks, estimate and evaluate risks that may emerge, and adopt risk management measures.
Deadline: Before deployment

Training, validation and testing data must meet quality criteria. Must address relevant design choices, data collection processes, data preparation operations, and examination for biases.
Deadline: Before deployment

Prepare comprehensive technical documentation before placing on the market. Must include a general description, a detailed description of elements and the development process, monitoring/functioning/control information, and validation/testing details.
Deadline: Before deployment

Automatically log events throughout the lifecycle. Logs must enable monitoring of operation, facilitate post-market monitoring, and support investigation of incidents.
Deadline: Before deployment

Ensure the AI system is sufficiently transparent. Provide instructions for use including the identity of the provider, system capabilities and limitations, performance metrics, known risks, and human oversight measures.
Deadline: Before deployment

Design and develop systems to allow effective human oversight. Humans must be able to fully understand capabilities and limitations, monitor operation, interpret outputs, and override, interrupt or stop the system.
Deadline: Before deployment

Achieve appropriate levels of accuracy, robustness, and cybersecurity. Must be resilient to errors, faults, inconsistencies, and adversarial attacks.
Deadline: Before deployment

Conduct a conformity assessment before placing on the market. Most high-risk AI systems require internal conformity assessment; some (biometrics, critical infrastructure) require third-party assessment.
Deadline: Before deployment

Draw up a written EU declaration of conformity for each high-risk AI system. It must contain the information in Annex V and be updated as necessary.
Deadline: Before deployment

Affix the CE marking to high-risk AI systems or their documentation to indicate compliance with the regulation.
Deadline: Before deployment

Register the high-risk AI system in the EU database before placing on the market. Deployers of certain high-risk AI systems must also register.
Deadline: Before deployment

Establish and document a post-market monitoring system. Collect, document, and analyse data on performance throughout the lifetime of the system.
Deadline: Ongoing after deployment

All providers and deployers of AI systems, not just high-risk ones, must comply with these baseline obligations.
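Several obligations in this checklist, among them automatic event logging, post-market monitoring, record keeping, and incident reporting, presuppose structured, timestamped records of what the system did. A minimal sketch of such an audit log in Python, assuming a JSON-lines format; the event names and field names are illustrative assumptions, not terms from the Act:

```python
import json
import logging
from datetime import datetime, timezone

# Minimal structured audit logger for an AI system (illustrative sketch).
# Field names such as "model_version" are assumptions, not terms from the Act.
logger = logging.getLogger("ai_system.audit")

def log_event(event: str, **fields) -> str:
    """Emit one timestamped, machine-readable audit record and return it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        **fields,
    }
    line = json.dumps(record, sort_keys=True)
    logger.info(line)  # route to whatever retention backend is in use
    return line

# Log an inference so post-market monitoring and incident investigation
# can later reconstruct what the system did and with which model version.
entry = log_event("inference", model_version="1.4.2",
                  input_hash="ab12cd", decision="approve")
```

Because each record is self-describing JSON, logs can later be filtered by event type or model version when investigating a serious incident.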
Providers of high-risk AI systems must implement a quality management system covering strategy, design, development, testing, post-market monitoring, and documentation.
Deadline: Before deployment

Keep technical documentation and quality management records for at least 10 years after the AI system is placed on the market or put into service.
Deadline: Ongoing

Take immediate corrective actions for non-compliant AI systems and inform distributors and deployers. Report serious incidents to the relevant market surveillance authorities.
Deadline: Ongoing after deployment

Cooperate with competent national authorities upon request. Provide all necessary information and documentation to demonstrate conformity with the regulation.
Deadline: Ongoing

Deployers must use AI systems in accordance with the instructions for use, assign human oversight to competent persons, monitor operation, and report incidents to the provider.
Deadline: Before deployment

Deployers of certain high-risk AI systems must conduct a fundamental rights impact assessment before deployment. Register the assessment results in the EU database.
Deadline: Before deployment

Organisations that provide general-purpose AI models (including fine-tuned versions) have additional documentation and transparency duties.
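These documentation duties lend themselves to a machine-readable record that can travel with the model to downstream providers. A hypothetical sketch in Python; the field names mirror the obligation wording (training process, data used, evaluation results, energy consumption), but the schema itself is a convenience, not a format defined by the Act or the AI Office:

```python
from dataclasses import asdict, dataclass, field

# Hypothetical machine-readable documentation record for a GPAI model.
# Field names follow the obligation wording; the schema is an assumption,
# not a format prescribed by the Act or the AI Office.
@dataclass
class GPAIModelDocumentation:
    model_name: str
    training_process: str
    data_sources: list = field(default_factory=list)
    evaluation_results: dict = field(default_factory=dict)
    energy_consumption_kwh: float = 0.0

doc = GPAIModelDocumentation(
    model_name="example-model",  # placeholder name
    training_process="pretraining followed by instruction tuning",
    data_sources=["licensed corpus", "public web crawl"],
    evaluation_results={"benchmark_score": 0.71},
    energy_consumption_kwh=125_000.0,
)

# asdict() yields a plain dict, ready to serialise and hand to
# downstream providers alongside integration guidance.
record = asdict(doc)
```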
Providers of general-purpose AI models must draw up and maintain technical documentation including the training process, data used, evaluation results, and energy consumption.
Deadline: Before making available

Provide information and documentation to downstream providers to enable their compliance. Must include capabilities, limitations, and integration guidance.
Deadline: Before making available

Establish a policy to comply with EU copyright law, including reservations of rights by rightsholders under Article 4(3) of Directive (EU) 2019/790.
Deadline: Before making available

Publish a sufficiently detailed summary of the content used for training the GPAI model, in accordance with templates provided by the AI Office.
Deadline: Before making available

Providers of GPAI models with systemic risk must conduct adversarial testing to identify and mitigate systemic risks at EU level.
Deadline: Ongoing

Report serious incidents and possible corrective measures to the AI Office without undue delay after becoming aware of them.
Deadline: Ongoing after deployment

AI systems that interact with users (chatbots, emotion recognition, synthetic media) must meet these minimum transparency requirements.

Users interacting with AI chatbots must be informed that they are interacting with an AI system, unless this is obvious from the context.
Deadline: Immediately upon deployment

AI-generated or manipulated image, audio or video content (deepfakes) must be disclosed as artificially generated or manipulated.
Deadline: Immediately upon deployment

Users must be informed when they are subject to emotion recognition or biometric categorisation systems.
Deadline: Immediately upon deployment
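The chatbot-disclosure duty above can be sketched as a small wrapper that prepends a one-time notice to the first reply of a session. The notice wording and the first-turn logic are illustrative assumptions, not text prescribed by the Act:

```python
# Sketch of the AI-interaction disclosure duty for chatbots.
# The notice wording and first-turn logic are illustrative assumptions.
AI_NOTICE = "You are interacting with an AI system."

def with_disclosure(reply: str, first_turn: bool) -> str:
    """Prepend the AI-interaction notice on the first turn of a session."""
    if first_turn:
        return f"{AI_NOTICE}\n\n{reply}"
    return reply

first = with_disclosure("Hello! How can I help?", first_turn=True)
later = with_disclosure("Here are the details you asked for.", first_turn=False)
```

For generated media, the analogous step is attaching a machine-readable marker or visible label to the output rather than a conversational notice.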