EU AI Act High-Risk AI Systems: Classification Criteria for Agents
Not every AI agent is high-risk under the EU AI Act. The classification depends on what the agent does, not how it works. Understanding the classification criteria prevents both over-compliance (wasting resources on unnecessary controls) and under-compliance (missing required obligations).
The Classification Framework
The Act uses two classification paths:
Path 1: Product safety legislation (Annex I). If the AI system is a product, or a safety component of a product, covered by EU product safety legislation listed in Annex I (medical devices, vehicles, machinery) and subject to third-party conformity assessment, it is high-risk.
Path 2: Standalone high-risk areas (Annex III). AI systems used in specific domains are classified as high-risk. This is the path most relevant to AI agents.
Annex III Categories
- Biometric identification and categorization. Remote biometric identification systems. If your agent processes biometric data for identification, it is high-risk.
- Critical infrastructure. AI systems used as safety components in the management of road traffic or of water, gas, heating, or electricity supply. An agent that controls or influences infrastructure operations falls here.
- Education and vocational training. Systems that determine access to education, evaluate students, or assess learning levels. An AI agent that grades assignments or decides admissions qualifies.
- Employment and worker management. Systems that screen job applicants, make recruitment decisions, evaluate performance, or monitor workers. AI agents used in HR processes are high-risk.
- Essential services. Credit scoring, insurance pricing, public benefit eligibility assessment. An agent that influences these decisions is high-risk.
- Law enforcement. Individual risk assessment, polygraphs, evidence evaluation. AI agents in law enforcement contexts are high-risk.
- Migration and border control. Visa processing, asylum applications, border surveillance. Agents in immigration contexts qualify.
- Justice and democratic processes. Systems that assist judicial decisions or influence election outcomes.
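For triage purposes, the Annex III categories above can be represented as a simple lookup table that flags which categories a given use case might touch. The category keys and keyword lists below are illustrative paraphrases, not the Act's legal wording, and a keyword match is only a prompt for legal review, not a classification.

```python
# Illustrative mapping of Annex III high-risk categories to example agent
# use cases. Keys and keywords paraphrase Annex III; they are not legal text.
ANNEX_III_CATEGORIES = {
    "biometric_identification": ["remote biometric identification", "biometric categorization"],
    "critical_infrastructure": ["traffic management", "water supply", "gas supply", "electricity supply"],
    "education": ["admissions decisions", "student evaluation", "assignment grading"],
    "employment": ["applicant screening", "recruitment decisions", "performance evaluation", "worker monitoring"],
    "essential_services": ["credit scoring", "insurance pricing", "benefit eligibility"],
    "law_enforcement": ["individual risk assessment", "evidence evaluation"],
    "migration_border": ["visa processing", "asylum applications", "border surveillance"],
    "justice_democracy": ["judicial decision support", "election influence"],
}

def matching_categories(use_case: str) -> list[str]:
    """Return Annex III categories whose example keywords overlap the use case."""
    term = use_case.lower()
    return [cat for cat, examples in ANNEX_III_CATEGORIES.items()
            if any(term in ex or ex in term for ex in examples)]
```

A hit here means the use case warrants the significant-risk assessment described below; no hit means documenting why no Annex III category applies.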
What Is NOT High-Risk
The Act explicitly excludes:
- AI systems intended for narrow procedural tasks
- AI systems intended to improve the result of a previously completed human activity
- AI systems that do not materially influence the outcome of decision-making
A customer service chatbot that answers FAQs is not high-risk. An agent that writes internal reports for human review is likely not high-risk. An agent that autonomously decides insurance claim amounts is high-risk.
The Significant Risk Exception
Even within Annex III categories, a system is NOT high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights (the Article 6(3) derogation). The provider must document this assessment before relying on it. If the system performs a "narrow procedural task" or "improves the result of a previously completed human activity," it may be excluded.
Practical Classification Steps
- List every use case for your AI agent
- Map each use case to Annex III categories
- For each match, assess whether the system poses significant risk
- Document the classification reasoning
- For high-risk classifications, implement the requirements for high-risk AI systems in Chapter III of the Act (risk management, data governance, documentation, human oversight, and related obligations)
- For non-high-risk classifications, keep the documentation but reduce compliance scope
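The steps above can be sketched as a per-use-case classification helper. The dataclass fields and function below are assumptions for illustration: the boolean inputs stand in for assessments that require human legal judgment, and the documented reasoning travels with each use case as the steps require.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    annex_iii_match: bool   # step 2: maps to an Annex III category
    significant_risk: bool  # step 3: poses significant risk to health, safety, or rights
    reasoning: str          # step 4: documented classification reasoning

def classify(use_case: UseCase) -> str:
    """Apply the classification steps to one documented use case."""
    if not use_case.annex_iii_match:
        return "not-high-risk"
    if not use_case.significant_risk:
        # Exclusion for e.g. narrow procedural tasks; keep the documentation
        return "not-high-risk (documented exclusion)"
    return "high-risk"
```

Running every use case through a helper like this forces the documentation step: each classification carries its reasoning, which is what step 4 (and any later audit) needs.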
The classification is the foundation. Everything else follows from it.