Regulation (EU) 2024/1689 · Enforcement: August 2, 2026
EU AI Act Risk Classification
The EU AI Act uses a four-tier risk pyramid to determine which obligations apply to your AI system. Classification depends on your system's purpose, sector, and potential impact on people, not its technical architecture.
PROHIBITED · Article 5
Fine: up to €35M or 7% of global annual turnover, whichever is higher
Banned outright — cannot be deployed in the EU
These AI practices are considered an unacceptable risk to fundamental rights and are prohibited entirely under the EU AI Act.
Examples
• Social scoring by governments
• Real-time biometric surveillance in public spaces (with narrow exceptions)
• Emotion recognition in workplaces and educational institutions
• Subliminal manipulation techniques that harm people
• Predictive policing based solely on profiling
• Biometric categorisation inferring political opinions, religious beliefs, or sexual orientation
HIGH RISK · Articles 9–15, 43–49, Annex III
Fine: up to €15M or 3% of global annual turnover, whichever is higher
Heavily regulated — 12+ obligations before deployment
AI systems in the eight sectors listed in Annex III (biometrics, education, employment, credit scoring, law enforcement, migration, justice, critical infrastructure) face the most stringent requirements.
Examples
• CV screening and hiring tools
• Credit scoring and loan decision systems
• Student assessment and exam monitoring
• Medical diagnostic AI
• AI used in border control
• Predictive crime risk assessment tools
Key Obligations
✓ Risk management system (Article 9)
✓ Data governance and quality (Article 10)
✓ Technical documentation (Article 11)
✓ Automatic logging (Article 12)
✓ Human oversight mechanisms (Article 14)
✓ EU database registration (Article 49)
LIMITED RISK · Article 50
Fine: up to €15M or 3% of global annual turnover, whichever is higher
Transparency obligations only
AI systems that interact with users or generate synthetic content must disclose their AI nature. A lighter touch than high-risk, but still legally binding.
Examples
• Customer service chatbots
• AI-generated text, images, or videos (deepfakes)
• Emotion recognition systems
• AI avatars and virtual assistants
Key Obligations
✓ Inform users they are interacting with AI (Article 50(1))
✓ Label AI-generated media as synthetic (Article 50(4))
✓ Disclose emotion recognition to subjects (Article 50(3))
MINIMAL RISK
No mandatory obligations; voluntary codes of conduct encouraged
The vast majority of AI systems fall here. Spam filters, recommendation engines, AI in video games, and most B2B productivity tools are minimal risk. No mandatory compliance requirements, but voluntary codes of conduct are encouraged.
Examples
• AI-powered spam filters
• Content recommendation engines
• AI features in productivity software
• AI in video games
• Simple chatbots with no sensitive decisions
Classify your AI system in 5 minutes
Answer questions about what your AI does and who it affects. Get an instant tier classification with the exact articles that apply.
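The tier logic described on this page can be sketched as a severity-ordered decision procedure. The sketch below is a hypothetical illustration only: the category names, flags, and checks are simplifying assumptions, not a legal test, and real classification requires reading Article 5, Annex III, and Article 50 in full.

```python
# Illustrative sketch of the four-tier decision logic, assuming
# simplified boolean inputs. Not legal advice.

PROHIBITED_PRACTICES = {
    "social_scoring",
    "realtime_public_biometric_id",
    "workplace_emotion_recognition",
    "subliminal_manipulation",
}

HIGH_RISK_SECTORS = {
    "biometrics", "education", "employment", "credit_scoring",
    "law_enforcement", "migration", "justice", "critical_infrastructure",
}

def classify(practices, sector, interacts_with_users, generates_synthetic_media):
    """Return an indicative risk tier for a described AI system."""
    # Checks run in order of severity: a prohibited practice trumps
    # everything, and a high-risk sector trumps mere transparency duties.
    if set(practices) & PROHIBITED_PRACTICES:
        return "PROHIBITED (Article 5)"
    if sector in HIGH_RISK_SECTORS:
        return "HIGH RISK (Annex III)"
    if interacts_with_users or generates_synthetic_media:
        return "LIMITED RISK (Article 50)"
    return "MINIMAL RISK"
```

The ordering matters: a hiring chatbot (`classify([], "employment", True, False)`) lands in the high-risk tier even though it also interacts with users, because sector-based obligations take precedence over transparency-only duties.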
Disclaimer: This page is for informational purposes only and does not constitute legal advice. Consult a qualified legal professional for advice specific to your organisation.