EU AI Act Requirements for AI Agents
The EU AI Act entered into force in August 2024 with a phased implementation schedule. For AI agent builders, the most relevant provisions are Chapter III (high-risk AI systems) and the general-purpose AI model obligations in Chapter V. Here is what matters for agent deployments.
Are AI Agents High-Risk?
The Act classifies AI systems by risk level. AI agents may qualify as high-risk under Annex III if they operate in:
- Critical infrastructure management
- Employment and worker management
- Essential services (credit, insurance, public benefits)
- Law enforcement and border control
- Justice and democratic processes
An AI agent that makes or materially influences decisions in these areas generally falls under the full high-risk requirements, though Article 6(3) carves out systems that perform only narrow procedural or preparatory tasks. An AI agent that writes marketing copy probably does not.
The classification depends on the use case, not the technology. The same underlying agent framework can be low-risk in one context and high-risk in another.
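As a rough, non-authoritative illustration, that use-case dependence can be encoded as a lookup the deployment pipeline consults before an agent ships, defaulting unknown domains to high-risk so they get manual review. The domain labels here are simplifications of Annex III, not legal categories:

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    HIGH = "high"

# Hypothetical mapping of deployment domains to Annex III exposure.
# Real classification requires legal review of the concrete use case.
ANNEX_III_DOMAINS = {
    "critical_infrastructure": RiskTier.HIGH,
    "employment": RiskTier.HIGH,
    "essential_services": RiskTier.HIGH,
    "law_enforcement": RiskTier.HIGH,
    "justice_democracy": RiskTier.HIGH,
    "marketing_copy": RiskTier.MINIMAL,
}

def classify(domain: str) -> RiskTier:
    """Default to HIGH for unknown domains, forcing a manual review."""
    return ANNEX_III_DOMAINS.get(domain, RiskTier.HIGH)
```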
Key Requirements for Agent Builders
Article 9: Risk Management. High-risk AI systems must have a risk management system that identifies, evaluates, and mitigates risks throughout the system's lifecycle. For agents, this means: identify what can go wrong (prompt injection, tool misuse, data exfiltration), evaluate the likelihood and impact, and implement controls.
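One way to make that loop auditable is a machine-readable risk register that pairs each identified failure mode with its mitigations. The risks, scores, and controls below are illustrative placeholders, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    likelihood: int   # 1 (rare) .. 5 (frequent)
    impact: int       # 1 (negligible) .. 5 (severe)
    controls: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# Illustrative register for an agent with tool access.
REGISTER = [
    Risk("prompt_injection", 4, 4, ["input sanitization", "tool allowlist"]),
    Risk("tool_misuse", 3, 5, ["policy engine", "human approval for writes"]),
    Risk("data_exfiltration", 2, 5, ["egress filtering", "audit receipts"]),
]

# Surface the highest-scoring risks first for mitigation review.
for risk in sorted(REGISTER, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: score={risk.score}, controls={risk.controls}")
```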
Article 14: Human Oversight. High-risk systems must be designed so that humans can effectively oversee them. This includes the ability to understand the system's capabilities and limitations, correctly interpret outputs, decide not to use the system in specific situations, and intervene or stop operation. For agents, this translates to: approval workflows, kill switches, and monitoring dashboards.
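A minimal sketch of the approval-workflow and kill-switch pattern follows. The tool names and `execute_tool` dispatcher are hypothetical, and a production system would route approvals to an async review queue with timeouts and escalation rather than stdin:

```python
SENSITIVE_ACTIONS = {"send_email", "transfer_funds", "delete_record"}
KILL_SWITCH = False  # flipped by an operator to halt all agent activity

def execute_tool(tool: str, args: dict) -> str:
    """Stub dispatcher; a real agent would invoke the actual tool here."""
    return f"executed {tool}"

def gated_call(tool: str, args: dict) -> str:
    if KILL_SWITCH:
        raise RuntimeError("Agent halted by operator (Article 14 intervention)")
    if tool in SENSITIVE_ACTIONS:
        # Synchronous human approval for illustration only.
        answer = input(f"Approve {tool}({args})? [y/N] ")
        if answer.strip().lower() != "y":
            return "denied: human reviewer rejected the action"
    return execute_tool(tool, args)
```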
Article 12: Record-Keeping. High-risk systems must automatically record events (logs) during operation. Records must be sufficient to determine the system's inputs, outputs, and decisions. For agents: hash-chained audit receipts covering every tool call and policy decision.
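A hash-chained log makes tampering detectable: each receipt commits to the hash of the one before it, so editing any past entry breaks every later link. A minimal standard-library sketch, with illustrative record fields:

```python
import hashlib
import json
import time

class ReceiptChain:
    """Append-only log where each entry commits to its predecessor's hash."""

    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # genesis value

    def append(self, tool: str, inputs: dict, output: str, decision: str):
        record = {
            "ts": time.time(),
            "tool": tool,
            "inputs": inputs,
            "output": output,
            "decision": decision,
            "prev": self.last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.last_hash = record["hash"]
        self.entries.append(record)

    def verify(self) -> bool:
        """Recompute every hash; any edit to a past entry breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

chain = ReceiptChain()
chain.append("web_search", {"q": "supplier records"}, "3 results", "allowed")
assert chain.verify()
```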
Article 13: Transparency. High-risk systems must be designed so that deployers can interpret the output, and must be accompanied by instructions for use documenting the system's capabilities, limitations, and intended purpose. Separately, Article 50 requires informing people when they are interacting with an AI system.
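One convenient home for that documentation is a machine-readable manifest shipped with the agent. The fields below are an assumption about what a deployer needs, not a format the Act prescribes:

```python
AGENT_MANIFEST = {
    "name": "invoice-triage-agent",  # hypothetical agent
    "intended_purpose": "route supplier invoices for human approval",
    "capabilities": ["read email", "classify documents", "draft replies"],
    "limitations": [
        "no payment authority",
        "accuracy degrades on non-English invoices",
    ],
    "human_oversight": "all payment actions require reviewer sign-off",
}
```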
Implementation Timeline
- February 2025: Prohibited AI practices and AI literacy obligations take effect
- August 2025: Obligations for general-purpose AI models, plus governance and penalty provisions, apply
- August 2026: High-risk obligations for Annex III systems become enforceable (the Act's general application date)
- August 2027: Obligations apply for high-risk systems embedded in products covered by Annex I
The August 2026 deadline is the critical one for most agent deployments. If your agent operates in a high-risk domain, you have until then to comply.
Practical Steps
- Classify your agent use cases by risk level
- For high-risk applications: implement risk management, human oversight, logging, and transparency
- Document everything: risk assessments, control implementations, testing results
- Build technical controls (policy engines, approval workflows, audit trails) rather than relying on process documentation alone; a toy policy check is sketched below
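To make "policy engine" concrete, here is a toy deny-by-default check evaluated before every tool call; the rule syntax and field names are invented for illustration:

```python
# Toy deny-by-default policy check: a tool call runs only if a rule
# explicitly allows it and no matching rule denies it.
POLICIES = [
    {"effect": "allow", "tool": "web_search"},
    {"effect": "allow", "tool": "send_email", "audience": "internal"},
    {"effect": "deny",  "tool": "transfer_funds"},
]

def matches(rule: dict, tool: str, context: dict) -> bool:
    if rule["tool"] != tool:
        return False
    conditions = {k: v for k, v in rule.items() if k not in ("effect", "tool")}
    return all(context.get(k) == v for k, v in conditions.items())

def authorize(tool: str, context: dict) -> bool:
    hits = [r for r in POLICIES if matches(r, tool, context)]
    if any(r["effect"] == "deny" for r in hits):
        return False
    return any(r["effect"] == "allow" for r in hits)

# Internal email is allowed; fund transfers never are.
assert authorize("send_email", {"audience": "internal"})
assert not authorize("transfer_funds", {})
```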
Tools like Authensor provide the technical implementation layer: policy-based authorization (Article 9 controls), approval workflows (Article 14 human oversight), and hash-chained receipts (Article 12 record-keeping).