
EU AI Act High-Risk AI Systems: Classification Criteria for Agents

Research Lab
eu-ai-act · compliance · agent-safety

Not every AI agent is high-risk under the EU AI Act. The classification depends on what the agent does, not how it works. Understanding the classification criteria prevents both over-compliance (wasting resources on unnecessary controls) and under-compliance (missing required obligations).

The Classification Framework

The Act uses two classification paths:

Path 1: Product safety legislation (Annex I). If the AI system is a product, or a safety component of a product, covered by the EU harmonisation legislation listed in Annex I (medical devices, vehicles, machinery), and that product must undergo a third-party conformity assessment, the system is high-risk.

Path 2: Standalone high-risk areas (Annex III). AI systems used in specific domains are classified as high-risk. This is the path most relevant to AI agents.

Annex III Categories

  1. Biometric identification and categorization. Remote biometric identification, biometric categorization based on sensitive attributes, and emotion recognition. If your agent processes biometric data to identify people, rather than merely to verify a claimed identity, it is high-risk.

  2. Critical infrastructure. AI systems used as safety components in the management of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity. An agent that controls or influences infrastructure operations falls here.

  3. Education and vocational training. Systems that determine access to education, evaluate students, or assess learning levels. An AI agent that grades assignments or decides admissions qualifies.

  4. Employment and worker management. Systems that screen job applicants, make recruitment decisions, evaluate performance, or monitor workers. AI agents used in HR processes are high-risk.

  5. Essential services. Credit scoring, insurance pricing, public benefit eligibility assessment. An agent that influences these decisions is high-risk.

  6. Law enforcement. Individual risk assessment, polygraphs, evidence evaluation. AI agents in law enforcement contexts are high-risk.

  7. Migration and border control. Visa processing, asylum applications, border surveillance. Agents in immigration contexts qualify.

  8. Justice and democratic processes. Systems that assist judicial decisions or influence election outcomes.
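To make the mapping concrete, here is a minimal Python sketch of how a compliance team might encode these eight areas and a first-pass lookup from agent use cases. The enum, the use-case names, and the mapping are illustrative assumptions, not anything the Act defines.

```python
from enum import Enum, auto

class AnnexIIIArea(Enum):
    """The eight Annex III high-risk areas summarized above."""
    BIOMETRICS = auto()
    CRITICAL_INFRASTRUCTURE = auto()
    EDUCATION = auto()
    EMPLOYMENT = auto()
    ESSENTIAL_SERVICES = auto()
    LAW_ENFORCEMENT = auto()
    MIGRATION_BORDER = auto()
    JUSTICE_DEMOCRACY = auto()

# Hypothetical mapping from an agent's use cases to Annex III areas.
# In practice this mapping is a legal judgment, not a lookup table;
# None means no Annex III category applies.
USE_CASE_AREAS: dict[str, AnnexIIIArea | None] = {
    "grade_student_essays": AnnexIIIArea.EDUCATION,
    "screen_job_applicants": AnnexIIIArea.EMPLOYMENT,
    "score_credit_applications": AnnexIIIArea.ESSENTIAL_SERVICES,
    "answer_product_faqs": None,
}
```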

What Is NOT High-Risk

The Act explicitly excludes Annex III systems that do not materially influence the outcome of decision-making, in particular systems intended to:

  • perform a narrow procedural task
  • improve the result of a previously completed human activity
  • detect decision-making patterns or deviations from prior patterns, without replacing or influencing the human assessment
  • perform a preparatory task to an assessment

A customer service chatbot that answers FAQs is not high-risk. An agent that writes internal reports for human review is likely not high-risk. An agent that autonomously decides insurance claim amounts is high-risk.

The Significant Risk Exception

Even within Annex III categories, a system is NOT high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making (Article 6(3)). This derogation never applies when the system performs profiling of natural persons. The provider must document the assessment before placing the system on the market and must still register the system in the EU database.
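One way to make that assessment auditable is to record each Article 6(3) condition as an explicit flag and derive the conclusion from them. The sketch below assumes the dataclass fields map one-to-one to the conditions listed earlier; the names are illustrative, and the real assessment is a documented legal analysis, not a boolean check.

```python
from dataclasses import dataclass

@dataclass
class ExclusionAssessment:
    """Illustrative self-assessment flags for the Article 6(3) derogation."""
    narrow_procedural_task: bool
    improves_completed_human_activity: bool
    detects_patterns_without_replacing_review: bool
    preparatory_task_only: bool
    performs_profiling: bool  # profiling of natural persons defeats the derogation

def exclusion_applies(a: ExclusionAssessment) -> bool:
    """Return True if an Annex III match may be excluded under Article 6(3).

    Profiling always keeps the system high-risk; otherwise any one of the
    listed conditions can support exclusion. Either way, the provider must
    document the assessment.
    """
    if a.performs_profiling:
        return False
    return (
        a.narrow_procedural_task
        or a.improves_completed_human_activity
        or a.detects_patterns_without_replacing_review
        or a.preparatory_task_only
    )
```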

Practical Classification Steps

  1. List every use case for your AI agent
  2. Map each use case to Annex III categories
  3. For each match, assess whether the system poses significant risk
  4. Document the classification reasoning
  5. For high-risk classifications, implement the Chapter III requirements (Articles 8-15: risk management, data governance, technical documentation, logging, transparency, human oversight, and accuracy, robustness, and cybersecurity)
  6. For non-high-risk classifications, keep the documentation but reduce compliance scope
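Steps 1 through 4 can be scripted as a first-pass triage using the AnnexIIIArea mapping and the exclusion sketch above. The output is a starting point for documented legal review, not a determination.

```python
def classify_agent(
    use_cases: list[str],
    assessments: dict[str, ExclusionAssessment],
) -> dict[str, str]:
    """Steps 1-4 as a first-pass triage: map each use case to an Annex III
    area, apply the Article 6(3) test, and record the reasoning."""
    results: dict[str, str] = {}
    for uc in use_cases:
        area = USE_CASE_AREAS.get(uc)
        if area is None:
            results[uc] = "not high-risk: no Annex III match"
        elif exclusion_applies(assessments[uc]):
            results[uc] = f"excluded under Article 6(3) despite {area.name} match"
        else:
            results[uc] = f"HIGH-RISK ({area.name}): Chapter III requirements apply"
    return results

# Example: an HR screening agent that profiles applicants stays high-risk.
print(classify_agent(
    ["answer_product_faqs", "screen_job_applicants"],
    {
        "screen_job_applicants": ExclusionAssessment(
            narrow_procedural_task=False,
            improves_completed_human_activity=False,
            detects_patterns_without_replacing_review=False,
            preparatory_task_only=False,
            performs_profiling=True,
        ),
    },
))
```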

The classification is the foundation. Everything else follows from it.