ARTICLE 5 — PROHIBITED AI

8 AI practices are banned in the EU. Is yours one of them?

KlarvoEngine screens every system for all Article 5 prohibitions as part of classification. Instant results. Penalties for violations: up to €35 million or 7% of global annual turnover, whichever is higher.

The 8 prohibited practices

1. Subliminal or manipulative AI

AI that uses techniques beyond a person's consciousness, or manipulative/deceptive methods, to distort behaviour in ways that cause significant harm. This covers dark patterns powered by AI.

2. Exploiting vulnerabilities

AI that targets people's age, disability, or social/economic situation to distort their behaviour and cause harm. Think: AI systems that specifically manipulate elderly users or children.

3. Social scoring

AI that evaluates or classifies people based on social behaviour or personal characteristics, where the score leads to detrimental treatment in unrelated contexts. This applies to both public and private actors — expanded from the original proposal.

4. Predictive policing based on profiling

AI that predicts whether a specific individual will commit a crime, based solely on profiling or personality assessment. Exception: AI supporting human assessment based on verifiable facts linked to criminal activity.

5. Untargeted facial recognition scraping

Building facial recognition databases by scraping images from the internet or CCTV footage without consent. Targeted collection under a lawful basis remains permitted.

6. Workplace and education emotion recognition

AI that infers employees' or students' emotions from biometric data. Exceptions: medical reasons (e.g., detecting pain) and safety reasons (e.g., driver fatigue monitoring). This directly bans AI video interview emotion analysis.

7. Biometric categorisation for sensitive attributes

AI that categorises people based on biometric data to infer race, political opinions, religion, sexual orientation, or trade union membership. Exception: law enforcement labelling of lawfully acquired datasets.

8. Real-time remote biometric identification in public spaces

AI-powered facial recognition in public spaces by law enforcement. Three narrow exceptions: searching for specific victims, preventing imminent threats, and identifying suspects of serious crimes. Requires prior authorisation by a judicial or independent administrative authority and a fundamental rights impact assessment.

Am I affected?

Most SMEs do not use these practices directly, but audit your AI tools to be sure:

Do you use AI-powered video interview analysis that assesses candidates' emotions? → Prohibited (practice 6)

Do you use AI to score customers based on their social media activity? → Potentially prohibited (practice 3)

Do you use AI chatbots with persuasion techniques targeting vulnerable users? → Potentially prohibited (practice 1 or 2)
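The three audit questions above can be sketched as a simple screening checklist. This is a hypothetical illustration only, not part of KlarvoEngine and not legal advice; the question keys and the mapping to practice numbers are assumptions drawn from the checklist above.

```python
# Hypothetical Article 5 self-screening sketch.
# Maps yes/no audit answers to the prohibited practice numbers they may trigger.
AUDIT_QUESTIONS = {
    "emotion_video_interviews": [6],        # emotion analysis in video interviews
    "social_media_scoring": [3],            # scoring customers on social behaviour
    "persuasion_vulnerable_users": [1, 2],  # manipulative chatbots targeting vulnerable users
}

def flagged_practices(answers):
    """Return the sorted list of practice numbers flagged by 'yes' (truthy) answers."""
    flagged = set()
    for question, practices in AUDIT_QUESTIONS.items():
        if answers.get(question):
            flagged.update(practices)
    return sorted(flagged)

# Example: a company that only uses AI video interview emotion analysis
print(flagged_practices({"emotion_video_interviews": True}))  # [6]
```

A flagged practice means "review this system against Article 5", not a definitive legal conclusion; borderline cases (practices 1 to 3 above are marked "potentially prohibited") need human legal assessment.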

Classify your AI systems. Free.

KlarvoEngine — 3-pass regulatory classification with article-cited memos. No credit card required.

Screen your AI systems