
The EU AI Act: What Your Business Needs to Know

The world's first comprehensive AI regulation is partway through its phased rollout. Some rules are already enforceable. Here's what matters, in plain English.

What is the EU AI Act?

The EU AI Act (Regulation 2024/1689) is European Union legislation that regulates artificial intelligence based on the risk it poses to people. It entered into force on August 1, 2024, and is being enforced in phases through 2027.

It applies to any organisation that develops, deploys, imports, or distributes AI systems within the EU — regardless of where the organisation is based. If your AI affects people in the EU, the Act applies to you.

The Act classifies AI into four risk tiers: prohibited (banned outright), high-risk (heavy compliance requirements), limited risk (transparency obligations), and minimal risk (no specific requirements beyond basic AI literacy).

Key dates

| Date | What Happens | Status |
| --- | --- | --- |
| August 1, 2024 | Act enters into force | Done |
| February 2, 2025 | Prohibited practices banned, AI literacy required | In force |
| August 2, 2025 | GPAI model obligations, governance structures live | In force |
| August 2, 2026 | Article 50 transparency, high-risk (Annex III), full penalties | Upcoming |
| August 2, 2027 | High-risk for product safety AI (Annex I) | Upcoming |
Digital Omnibus note: The European Commission's "Digital Omnibus on AI" proposal may delay Annex III high-risk rules to December 2, 2027. The European Parliament voted 101-9 to support this delay on March 18, 2026. Trilogue negotiations are ongoing. However, Article 50 transparency obligations remain at August 2, 2026 — this deadline is not moving.

The four risk tiers

Unacceptable Risk (Banned)

Eight AI practices are banned outright, and the ban has been enforceable since February 2, 2025. Penalties: up to €35 million or 7% of global turnover.

Examples: Social scoring, workplace emotion recognition, untargeted facial recognition scraping.

Read the full list →

High Risk (Heavy Regulation)

AI systems in eight sensitive categories require full compliance: risk management, technical documentation, conformity assessment, CE marking, post-market monitoring, and incident reporting.

Examples: AI recruitment tools, credit scoring, medical device AI, law enforcement AI.

See all Annex III categories →

Limited Risk (Transparency)

AI that interacts with people or generates content must disclose its presence. This is what Article 50 covers — and it's where most SMEs land.

Examples: Chatbots, AI content generators, deepfake tools, AI accessibility overlays.

Understand your Article 50 obligations →

Minimal Risk

The vast majority of AI systems. No specific AI Act obligations beyond AI literacy for staff. Still subject to GDPR and other existing regulation.

Examples: Spam filters, AI analytics, AI image compression, AI-powered search.
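As a rough mental model, the tier assignments above can be sketched as a simple lookup. This is a toy illustration, not legal advice: the use-case keys and the `risk_tier` helper are invented for this example, and real classification depends on context and the Act's exact wording.

```python
# Illustrative only: maps example use cases from the four tiers above to
# their likely risk tier under the EU AI Act. Real classification needs
# a proper assessment against Annex III and Article 5.
RISK_TIERS = {
    "social_scoring": "prohibited",
    "workplace_emotion_recognition": "prohibited",
    "recruitment_screening": "high",
    "credit_scoring": "high",
    "customer_chatbot": "limited",        # Article 50 transparency applies
    "ai_content_generator": "limited",
    "spam_filter": "minimal",
    "ai_analytics": "minimal",
}

def risk_tier(use_case: str) -> str:
    """Look up the example tier; anything unknown needs a real assessment."""
    return RISK_TIERS.get(use_case, "unclassified")
```

For instance, a customer chatbot lands in the "limited" tier, which is exactly the Article 50 territory where most SMEs find themselves.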

Penalties

| Violation | Maximum Penalty |
| --- | --- |
| Prohibited practices (Article 5) | €35 million or 7% of global turnover (whichever is higher) |
| Other obligations, including Article 50 | €15 million or 3% of global turnover (whichever is higher) |
| Incorrect information to authorities | €7.5 million or 1% of global turnover (whichever is higher) |

SME note: For SMEs, the penalty is whichever amount is lower, not higher. This is a deliberate SME protection in the Act. No penalties have been issued under the AI Act as of March 2026. However, the European Commission has shown willingness to impose large fines under related digital regulations (€120M fine on X under the DSA in December 2025).
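The two fine regimes (higher-of for most companies, lower-of for SMEs) reduce to simple arithmetic. A minimal sketch using the amounts from the table above; the function name and tier keys are our own invention:

```python
def penalty_cap(violation: str, global_turnover_eur: float,
                is_sme: bool = False) -> float:
    """Maximum fine for a violation class, per the penalty table above.

    Non-SMEs pay whichever bound is HIGHER; the Act's SME protection
    means SMEs pay whichever bound is LOWER.
    """
    tiers = {
        "prohibited": (35_000_000, 0.07),     # Article 5 violations
        "other": (15_000_000, 0.03),          # incl. Article 50 transparency
        "misinformation": (7_500_000, 0.01),  # incorrect info to authorities
    }
    fixed, pct = tiers[violation]
    turnover_based = pct * global_turnover_eur
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)
```

A large firm with €1 billion turnover faces up to €70 million for a prohibited practice (7% beats the €35 million floor), while an SME with €2 million turnover risks at most €60,000 for an Article 50 breach (3% of turnover, well below the €15 million ceiling).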

What should you do now?

1. Audit your AI use

Make a list of every AI tool your organisation uses. Chatbots, analytics, translation, content generation, image tools, security tools.

Klarvo's AI inventory makes this easy →
2. Check for prohibited practices

Are you using emotion recognition in the workplace? Social scoring? These are already illegal.

Screen your practices →
3. Classify your AI systems

Determine which risk tier each system falls into. Most SME tools will be minimal or limited risk.

Use our high-risk checker →
4. Implement Article 50 transparency

If you use chatbots or AI content tools, you need to disclose this to users by August 2, 2026.

Install AI Transparency for free →
5. Prepare documentation

For high-risk systems, start building your compliance file: risk assessments, human oversight plans, vendor documentation.

See templates →
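The five steps above boil down to building a structured inventory and flagging what needs action first. A minimal sketch of such an inventory; the class, field, and function names here are invented for illustration, not a Klarvo API:

```python
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    vendor: str
    use_case: str
    risk_tier: str = "unclassified"  # step 3: filled in after classification
    user_facing: bool = False        # does the tool interact with users?

def article50_todo(inventory: list[AITool]) -> list[str]:
    """Step 4: tools that need a user-facing AI disclosure by August 2, 2026."""
    return [tool.name for tool in inventory
            if tool.user_facing and tool.risk_tier == "limited"]
```

Running this over a small inventory with a customer-facing chatbot and a back-office spam filter would flag only the chatbot for an Article 50 disclosure.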

Official Resources

Start your AI inventory today

Free plan, no credit card required.

Create free account