The EU AI Act: What Your Business Needs to Know
The world's first comprehensive AI regulation is being rolled out in phases, and some rules are already enforceable. Here's what matters, in plain English.
What is the EU AI Act?
The EU AI Act (Regulation 2024/1689) is European Union legislation that regulates artificial intelligence based on the risk it poses to people. It entered into force on August 1, 2024, and is being enforced in phases through 2027.
It applies to any organisation that develops, deploys, imports, or distributes AI systems within the EU — regardless of where the organisation is based. If your AI affects people in the EU, the Act applies to you.
The Act classifies AI into four risk tiers: prohibited (banned outright), high-risk (heavy compliance requirements), limited risk (transparency obligations), and minimal risk (no specific requirements beyond basic AI literacy).
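For readers who think in code, the tier model is essentially a small lookup table. A minimal sketch (the type name and one-line summaries are ours, purely illustrative, not from the Act):

```typescript
// Illustrative sketch of the Act's four-tier model. The type and the
// one-line summaries are our own; the authoritative text is the Act itself.
type RiskTier = "prohibited" | "high" | "limited" | "minimal";

const obligations: Record<RiskTier, string> = {
  prohibited: "Banned outright (Article 5)",
  high: "Full compliance stack: risk management, documentation, CE marking",
  limited: "Transparency: tell users they are dealing with AI (Article 50)",
  minimal: "No specific AI Act obligations beyond staff AI literacy",
};

console.log(obligations.limited);
// => "Transparency: tell users they are dealing with AI (Article 50)"
```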
Key dates
| Date | What Happens | Status |
|---|---|---|
| August 1, 2024 | Act enters into force | Done |
| February 2, 2025 | Prohibited practices banned, AI literacy required | In force |
| August 2, 2025 | GPAI model obligations, governance structures live | In force |
| August 2, 2026 | Article 50 transparency, high-risk (Annex III), full penalties | Next deadline |
| August 2, 2027 | High-risk for product safety AI (Annex I) | Upcoming |
The four risk tiers
Unacceptable Risk (Banned)
Eight AI practices are banned outright and have been enforceable since February 2, 2025. Penalties: up to €35 million or 7% of global turnover, whichever is higher.
Examples: Social scoring, workplace emotion recognition, untargeted facial recognition scraping.
Read the full list →

High Risk (Heavy Regulation)
AI systems in eight sensitive categories require full compliance: risk management, technical documentation, conformity assessment, CE marking, post-market monitoring, and incident reporting.
Examples: AI recruitment tools, credit scoring, medical device AI, law enforcement AI.
See all Annex III categories →

Limited Risk (Transparency)
AI that interacts with people or generates content must disclose its presence. This is what Article 50 covers — and it's where most SMEs land.
Examples: Chatbots, AI content generators, deepfake tools, AI accessibility overlays.
Understand your Article 50 obligations →

Minimal Risk
The vast majority of AI systems. No specific AI Act obligations beyond AI literacy for staff. Still subject to GDPR and other existing regulation.
Examples: Spam filters, AI analytics, AI image compression, AI-powered search.
Penalties
| Violation | Maximum Penalty |
|---|---|
| Prohibited practices (Article 5) | €35 million or 7% of global turnover (whichever is higher) |
| Other obligations including Article 50 | €15 million or 3% of global turnover (whichever is higher) |
| Incorrect information to authorities | €7.5 million or 1% of global turnover (whichever is higher) |
SME note: For SMEs, the cap is the lower of the two amounts, not the higher. This is a deliberate SME protection built into the Act.

No penalties have been issued under the AI Act as of March 2026. However, the European Commission has shown it is willing to impose large fines under related digital regulations (€120M fine on X under the DSA in December 2025).
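The higher/lower distinction matters more than it looks. Here is a worked sketch for the Article 50 penalty tier, using the figures from the table above (illustrative only, not legal advice; the function name is ours):

```typescript
// Illustrative sketch of the Article 50 penalty cap (€15M or 3% of global
// turnover). For SMEs the cap is the LOWER of the two amounts; for everyone
// else it is the HIGHER. Not legal advice.
function article50Cap(globalTurnoverEur: number, isSme: boolean): number {
  const fixed = 15_000_000;                 // €15 million
  const pct = 0.03 * globalTurnoverEur;     // 3% of global turnover
  return isSme ? Math.min(fixed, pct) : Math.max(fixed, pct);
}

console.log(article50Cap(10_000_000, true));     // SME, €10M turnover   => 300_000
console.log(article50Cap(10_000_000, false));    // Large firm, €10M     => 15_000_000
console.log(article50Cap(2_000_000_000, false)); // Large firm, €2B      => 60_000_000
```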
What should you do now?
Audit your AI use
Make a list of every AI tool your organisation uses: chatbots, analytics, translation, content generation, image tools, security tools.
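If you want to start the audit in a script or spreadsheet before reaching for a product, one minimal record per tool is enough. A hypothetical shape (field names are our own invention):

```typescript
// Purely illustrative inventory record; field names are our own.
interface AiToolRecord {
  name: string;          // e.g. "Support chatbot"
  vendor: string;        // who supplies it
  purpose: string;       // what it does for you
  userFacing: boolean;   // does it interact with people or generate content?
  dataProcessed: string; // what data flows through it
}

const inventory: AiToolRecord[] = [
  { name: "Support chatbot", vendor: "Acme AI", purpose: "Customer support",
    userFacing: true, dataProcessed: "Customer queries" },
  { name: "Spam filter", vendor: "MailCo", purpose: "Email filtering",
    userFacing: false, dataProcessed: "Inbound email metadata" },
];
```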
Klarvo's AI inventory makes this easy →

Check for prohibited practices
Are you using emotion recognition in the workplace? Social scoring? These are already illegal.
Screen your practices →

Classify your AI systems
Determine which risk tier each system falls into. Most SME tools will be minimal or limited risk.
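As a rough first pass, you can triage an inventory with three questions that mirror the tiers above. This is an illustrative heuristic, not a legal determination; the real tests live in Article 5, Annex III, and Article 50 of the Act:

```typescript
// Rough triage heuristic, NOT a legal determination.
type RiskTier = "prohibited" | "high" | "limited" | "minimal";

interface TriageInput {
  emotionRecognitionAtWork: boolean; // prohibited practice (Article 5)
  annexIIICategory: boolean;         // e.g. recruitment, credit scoring
  userFacingOrGenerative: boolean;   // chatbot, content generator, deepfake tool
}

// Note: tiers can overlap in practice (a high-risk system may also carry
// Article 50 transparency duties); this returns the strictest match.
function triage(t: TriageInput): RiskTier {
  if (t.emotionRecognitionAtWork) return "prohibited";
  if (t.annexIIICategory) return "high";
  if (t.userFacingOrGenerative) return "limited";
  return "minimal";
}

console.log(triage({
  emotionRecognitionAtWork: false,
  annexIIICategory: false,
  userFacingOrGenerative: true,
})); // => "limited"
```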
Use our high-risk checker →

Implement Article 50 transparency
If you use chatbots or AI content tools, you need to disclose this to users by August 2, 2026.
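What a disclosure looks like is up to you, but the simplest version is a visible notice before the first AI interaction. A hypothetical browser sketch (the element ID and wording are ours):

```typescript
// Hypothetical sketch: prepend a one-line AI disclosure to a chat widget
// before the first message. The ID and wording are ours, not from the Act.
function addAiDisclosure(chatContainerId: string): void {
  const container = document.getElementById(chatContainerId);
  if (!container) return;

  const notice = document.createElement("p");
  notice.setAttribute("role", "status"); // announced by screen readers
  notice.textContent = "You are chatting with an AI assistant.";
  container.prepend(notice);
}

// Call once when the chat widget mounts:
addAiDisclosure("chat-widget");
```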
Install AI Transparency for free →

Prepare documentation
For high-risk systems, start building your compliance file: risk assessments, human oversight plans, vendor documentation.
See templates →