Executive Summary
Most of the EU AI Act's obligations apply from August 2, 2026, backed by fines of up to 35 million EUR or 7% of global annual turnover, whichever is higher, for the most serious violations. Despite this, our survey of 200 European enterprises reveals that only 23% have completed their AI system inventory, the foundational first step of compliance. This report provides a practical compliance roadmap.
Risk Classification Framework
The Act establishes four risk tiers for AI systems. Understanding where your systems fall is the critical first step.
Unacceptable Risk
Systems that are outright prohibited: social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and manipulative techniques that exploit the vulnerabilities of specific groups.
High Risk
The broadest category, covering AI systems used in employment, education, law enforcement, critical infrastructure, and access to essential services. These systems face the strictest requirements: risk management, data governance, technical documentation, human oversight, and conformity assessment before being placed on the market.
Limited Risk
Systems subject to transparency obligations: primarily chatbots, deepfake generators, and emotion recognition systems. People must be told when they are interacting with an AI system or when content is AI-generated.
Minimal Risk
The majority of AI systems fall into this tier, with no AI Act-specific obligations beyond general product safety requirements.
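To make the triage concrete, here is a minimal sketch of a first-pass classifier. The tier keyword sets and the classify_risk_tier helper are our own simplification for illustration; the Act's actual scoping rules in Article 5 and Annex III are considerably more nuanced and require legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # strict pre-market requirements
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no AI-Act-specific obligations

# Simplified use-case keywords per tier; illustrative only, not statutory language.
PROHIBITED_USES = {"social_scoring", "public_realtime_biometric_id",
                   "manipulation_of_vulnerable_groups"}
HIGH_RISK_USES = {"employment", "education", "law_enforcement",
                  "critical_infrastructure", "essential_services"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation", "emotion_recognition"}

def classify_risk_tier(use_case: str) -> RiskTier:
    """Rough first-pass triage of a single declared use case."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify_risk_tier("employment"))  # RiskTier.HIGH
```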
Technical Documentation Requirements
High-risk AI systems must maintain comprehensive technical documentation covering the system's intended purpose, design specifications, training data governance, performance metrics, and risk management measures.
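A structured record helps keep these elements auditable from day one. The sketch below is a hypothetical schema of our own design; its field names summarize the documentation elements listed above and do not reproduce the Act's Annex IV headings.

```python
from dataclasses import dataclass, fields

@dataclass
class TechnicalDocumentation:
    """Skeleton documentation record (field names are our own, not the Act's)."""
    intended_purpose: str = ""
    design_specifications: str = ""
    training_data_governance: str = ""   # provenance, curation, bias checks
    performance_metrics: str = ""        # accuracy, robustness, drift
    risk_management_measures: str = ""

    def missing_sections(self) -> list[str]:
        """List documentation sections that are still empty."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

doc = TechnicalDocumentation(intended_purpose="CV screening for retail hiring")
print(doc.missing_sections())  # everything except intended_purpose
```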
Common Compliance Gaps
Our analysis reveals five recurring gaps across enterprises preparing for compliance (a sketch of an automated gap check follows the list):
- Incomplete AI system inventories — organizations undercount their AI systems by an average of 40%
- Missing data governance documentation — training data provenance is rarely tracked with sufficient granularity
- Absent human oversight mechanisms — the Act requires meaningful human oversight, not rubber-stamp approval
- Inadequate risk assessments — most assessments focus on technical risk while ignoring societal impact
- No conformity assessment procedures — particularly for high-risk systems requiring third-party assessment
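Once an inventory exists, a first-pass check for these gaps can be automated. The sketch below assumes a hypothetical InventoryEntry schema of our own design; it illustrates the audit logic, not a legally sufficient assessment.

```python
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    """One row of an AI system inventory (illustrative schema, our own)."""
    name: str
    risk_tier: str                 # "unacceptable" | "high" | "limited" | "minimal"
    data_provenance_doc: bool      # training data lineage recorded?
    human_oversight_defined: bool  # named reviewer with authority to override?
    societal_impact_assessed: bool
    conformity_assessment_done: bool

def compliance_gaps(entry: InventoryEntry) -> list[str]:
    """Flag the recurring gaps above for a single system."""
    gaps = []
    if entry.risk_tier != "high":
        return gaps  # this simplified check targets high-risk systems only
    if not entry.data_provenance_doc:
        gaps.append("missing data governance documentation")
    if not entry.human_oversight_defined:
        gaps.append("no meaningful human oversight mechanism")
    if not entry.societal_impact_assessed:
        gaps.append("risk assessment ignores societal impact")
    if not entry.conformity_assessment_done:
        gaps.append("no conformity assessment procedure")
    return gaps

entry = InventoryEntry("cv-screening", "high",
                       data_provenance_doc=False,
                       human_oversight_defined=True,
                       societal_impact_assessed=False,
                       conformity_assessment_done=False)
print(compliance_gaps(entry))
```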
Recommendations
Start with a comprehensive AI system inventory. Establish data governance frameworks. Implement meaningful human oversight. Document everything: the burden of demonstrating compliance falls on the organizations that provide and deploy AI systems.
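On the "document everything" point, even a minimal append-only audit trail helps evidence human oversight and system behavior over time. The following sketch uses a JSONL log with a record schema of our own invention; it is illustrative only, not a reading of the Act's record-keeping provisions.

```python
import datetime
import json

def log_ai_decision(system_name: str, inputs: dict, output: str,
                    reviewer: str, path: str = "ai_audit_log.jsonl") -> None:
    """Append one decision record to a JSONL audit trail."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system_name,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,  # evidences who exercised oversight
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_decision("cv-screening", {"candidate_id": "A-1042"},
                "advance to interview", reviewer="j.doe")
```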