The EU AI Act classifies AI systems into four risk tiers. High-risk systems face the most stringent obligations — including mandatory risk management, technical documentation, human oversight, and EU database registration — all required before the system is placed on the market or put into service.
An AI system is high-risk under the Annex III route if it falls within one of the eight Annex III use-case areas and poses a significant risk to the health, safety, or fundamental rights of natural persons; Article 6(3) carves out systems that do not pose such a risk.
1. Biometrics: AI systems used for remote biometric identification of natural persons in public spaces, or biometric categorisation systems inferring sensitive attributes.
2. Critical infrastructure: AI used as safety components in critical infrastructure such as road traffic, water, gas, heating, and electricity supply.
3. Education and vocational training: AI that determines access to or assigns persons to educational institutions, or evaluates learning outcomes, including exam monitoring.
4. Employment and worker management: AI used for recruitment, screening, evaluating candidates, making promotion or termination decisions, or monitoring employee performance.
5. Essential services: AI used to evaluate creditworthiness, make credit decisions, set insurance premiums, or assess eligibility for essential public or private services.
6. Law enforcement: AI used by police or judicial authorities for individual risk assessments, polygraphs, crime analytics, or evidence reliability assessment.
7. Migration, asylum, and border control: AI used to assess risks related to persons crossing borders, process visa or asylum applications, or detect undocumented migrants.
8. Administration of justice: AI used to assist judicial authorities in interpreting facts and the law, researching and interpreting legal acts, or applying the law to a specific set of facts.
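The two-part test above — in-scope sector plus significant risk — can be sketched as a simple screening check. This is an illustrative sketch only, not legal advice: the sector labels below are shorthand identifiers invented for this example, not the Act's official wording, and a real assessment would weigh far more factors.

```python
# Hypothetical shorthand labels for the eight Annex III areas
# (not the Act's official terminology).
ANNEX_III_SECTORS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border_control",
    "administration_of_justice",
}

def is_high_risk(sector: str, poses_significant_risk: bool) -> bool:
    """Annex III route only: the system must fall in an Annex III
    area AND pose a significant risk to health, safety, or
    fundamental rights. The significant-risk flag stands in for
    the Article 6(3) derogation assessment, which in practice
    requires a documented case-by-case analysis."""
    return sector in ANNEX_III_SECTORS and poses_significant_risk

print(is_high_risk("employment", True))    # e.g. a CV-screening tool
print(is_high_risk("employment", False))   # derogation may apply
print(is_high_risk("video_games", True))   # outside Annex III
```

Note that this covers only the Annex III route; systems that are safety components of products regulated under Annex I legislation follow a separate classification path.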
High-risk systems must meet more than a dozen obligations in total. See the full checklist →
Answer a few questions and get an instant classification with article references. Free, no account required.
Assess your AI systems free →