Classification & Risk Assessment
7 min read · Updated 2026-02-15

Prohibited Practices Screening

Understanding the eight prohibited AI practices under Article 5 of the EU AI Act and how Klarvo screens for them — including what to do when flags are raised.

Powered by KlarvoEngine: Prohibited practices screening is performed automatically during Pass 2 of KlarvoEngine classification. All eight Article 5 prohibitions are checked as part of every classification. Any "prohibited" flag immediately surfaces in your classification memo. See 3-Pass Pipeline.

Article 5 of the EU AI Act prohibits certain AI practices that pose unacceptable risks to fundamental rights. These prohibitions have applied since 2 February 2025. Klarvo screens every AI system against all eight prohibited categories.

The Eight Prohibited Practices

1. Harmful Manipulation or Deception

What's prohibited: AI systems deploying subliminal, manipulative, or deceptive techniques to materially distort a person's behaviour in a way that causes or is reasonably likely to cause significant harm.

SME examples to watch for:

  • Dark patterns designed to exploit cognitive biases (urgency, scarcity manipulation)
  • Chatbots designed to mislead users about their nature for harmful purposes
  • Persuasion algorithms that distort purchasing behaviour in harmful ways

Key nuance: Standard marketing personalisation is generally not prohibited. The threshold is "significant harm" and "materially distorting behaviour."

2. Exploitation of Vulnerabilities

What's prohibited: AI exploiting vulnerabilities of specific groups due to age, disability, or socio-economic situation in ways likely to cause significant harm.

SME examples: Predatory lending targeting cognitive impairments, gambling systems exploiting addiction, scams targeting elderly users.

3. Social Scoring

What's prohibited: Evaluating or classifying people based on social behaviour or personality characteristics, where the resulting social score leads to detrimental treatment in unrelated contexts or disproportionate treatment.

Key nuance: Performance ratings at work tied to work outcomes are generally not social scoring. It becomes prohibited when a score from one context (e.g., social media behaviour) affects unrelated decisions (e.g., credit access).

4. Criminal Risk Prediction via Profiling

What's prohibited: Assessing or predicting the risk of a natural person committing a criminal offence based solely on profiling or personality traits, without additional objective, verifiable facts.

Key nuance: The word "solely" is important. AI that assists investigations using behavioural evidence alongside other objective, verifiable facts may not be prohibited.

5. Untargeted Facial Recognition Scraping

What's prohibited: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

6. Workplace/Education Emotion Inference

What's prohibited: Inferring the emotions of people in workplace or educational settings, except where medically necessary or required for safety (e.g., fatigue detection for safety-critical operators).

SME examples: Employee sentiment analysis tools, student engagement monitoring, interview emotion detection.

7. Biometric Categorisation Revealing Protected Characteristics

What's prohibited: Biometric categorisation systems that individually categorise people to deduce or infer their race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation.

8. Real-time Remote Biometric Identification in Public Spaces

What's prohibited: Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions for specific serious crimes, missing persons, and imminent threats).

What Happens When a Flag Is Raised

If any screening question is answered "Yes" or "Unsure":
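The eight checks above amount to a simple screening pass: every category must be answered, and any "Yes" or "Unsure" becomes a flag. A minimal sketch of that logic, using hypothetical category names and data shapes (not Klarvo's actual API):

```python
from dataclasses import dataclass

# The eight Article 5 categories, in the order described above.
# Names are illustrative, not Klarvo identifiers.
ARTICLE_5_CATEGORIES = [
    "harmful_manipulation",
    "vulnerability_exploitation",
    "social_scoring",
    "criminal_risk_profiling",
    "untargeted_face_scraping",
    "workplace_emotion_inference",
    "biometric_categorisation",
    "realtime_remote_biometric_id",
]

@dataclass
class ScreeningResult:
    flags: list     # categories answered "Yes" or "Unsure"
    blocked: bool   # any flag blocks further classification

def screen(answers: dict) -> ScreeningResult:
    """answers maps each category name to "Yes", "No", or "Unsure".
    A missing answer is treated as "Unsure" (conservative default)."""
    flags = [c for c in ARTICLE_5_CATEGORIES
             if answers.get(c, "Unsure") in ("Yes", "Unsure")]
    return ScreeningResult(flags=flags, blocked=bool(flags))
```

Note the conservative default: an unanswered category counts as "Unsure" and therefore flags, which matches the screening's fail-safe intent.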

  • Immediate block: The system classification is set to "Blocked"
  • Legal review task created: "Legal review — prohibited practices" assigned to the compliance owner
  • No further classification: The wizard does not proceed to high-risk or transparency screening
  • Documentation required: You must provide context and describe any safeguards
  • Dashboard alert: A critical alert appears on the compliance dashboard
  • Clearance required: The system stays "Blocked" until a reviewer with the Compliance Owner or Admin role explicitly clears it after legal review
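The blocking steps above can be sketched as one handler, assuming hypothetical dictionary shapes for the system record, task queue, and alert feed (again, not Klarvo's actual API):

```python
def on_flag_raised(system, flagged_categories, tasks, alerts):
    """Apply the flag-raised workflow: block, open a legal review
    task, halt the wizard, and raise a dashboard alert."""
    system["status"] = "Blocked"            # immediate block
    tasks.append({                          # legal review task
        "title": "Legal review — prohibited practices",
        "assignee": system.get("compliance_owner"),
    })
    system["wizard_halted"] = True          # no high-risk/transparency screening
    alerts.append({                         # critical dashboard alert
        "level": "critical",
        "message": "Prohibited-practice flags: " + ", ".join(flagged_categories),
    })
    return system
```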

Handling False Positives

Not every flag indicates an actual prohibition. Context is critical:

  • Emotion AI for medical/safety purposes: Fatigue detection for truck drivers may be exempt from the workplace emotion prohibition
  • Law enforcement exceptions: Narrow exceptions exist for real-time biometric ID in extreme circumstances
  • Contextual assessment: A "social scoring" flag may be a false positive if the system is simply a performance review tool
  • In all cases: document the context, get legal sign-off, record the rationale, and clear the flag with an explicit reviewer confirmation.
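The clearing process above, with its role restriction and mandatory rationale, could be enforced like this (hypothetical field names and statuses; a sketch, not Klarvo's implementation):

```python
ALLOWED_CLEARING_ROLES = {"Compliance Owner", "Admin"}

def clear_flag(system, reviewer_role, rationale):
    """Clear a prohibited-practice flag only when the reviewer holds
    an authorised role and a rationale is recorded for the audit trail."""
    if reviewer_role not in ALLOWED_CLEARING_ROLES:
        raise PermissionError("only Compliance Owner or Admin may clear flags")
    if not rationale:
        raise ValueError("a documented rationale is required")
    system["status"] = "Cleared"            # hypothetical post-clearance status
    system.setdefault("audit_log", []).append({
        "action": "flag_cleared",
        "role": reviewer_role,
        "rationale": rationale,
    })
    return system
```

The role check runs before any state change, so an unauthorised attempt leaves the system "Blocked", and every successful clearance leaves an auditable entry.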

Best Practices

⚠️ Take flags seriously: Even if you believe a flag is a false positive, always complete the legal review process
📝 Document context thoroughly: Explain why the system's use case does not meet the prohibition threshold
👨‍⚖️ Get legal sign-off: A qualified person must clear prohibited-practice flags; this is auditable
🔄 Re-screen on changes: If system capabilities change, re-run the prohibited practices screening