Classification & Risk Assessment
10 min read · Updated 2026-02-15

High-Risk Categories (Annex III)

Complete reference guide to the eight high-risk AI use case categories defined in Annex III of the EU AI Act, plus the parallel high-risk route for safety components of regulated products, with SME-relevant examples and deployer obligation mapping.

Powered by KlarvoEngine: High-risk screening is performed automatically during Pass 2 of KlarvoEngine classification. KlarvoEngine maps your system against all eight Annex III domains and identifies the specific category or categories that apply. See 3-Pass Pipeline.

Annex III of the EU AI Act lists specific use cases where AI systems are considered "high-risk" and subject to extensive compliance requirements. Most obligations for these systems apply from 2 August 2026, with an extended transition until 2 August 2027 for high-risk AI embedded in regulated products listed in Annex I.
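The screening step can be pictured as a simple category-matching pass over a system's declared use cases. The sketch below is a hypothetical illustration in Python: the category keys follow the eight Annex III areas covered in this guide, but the keyword sets and the matching logic are invented for the example and are not how KlarvoEngine actually classifies systems.

```python
# Hypothetical Annex III screening sketch. The category keys follow the
# eight Annex III areas; the keyword sets are invented examples, not the
# KlarvoEngine rule base.
ANNEX_III_CATEGORIES = {
    "biometrics": {"facial recognition", "identity verification", "age verification"},
    "critical_infrastructure": {"smart grid", "traffic management", "scada"},
    "education": {"admissions", "grading", "proctoring"},
    "employment": {"recruitment", "cv screening", "performance evaluation"},
    "essential_services": {"credit scoring", "insurance risk", "benefit eligibility"},
    "law_enforcement": {"crime risk assessment", "evidence analysis"},
    "migration": {"visa processing", "border document verification"},
    "justice_democracy": {"legal research", "dispute resolution"},
}

def screen_use_cases(use_cases: set[str]) -> list[str]:
    """Return the Annex III categories whose example use cases
    overlap the declared use cases of the system under review."""
    return sorted(
        category
        for category, examples in ANNEX_III_CATEGORIES.items()
        if use_cases & examples
    )
```

In practice a real screener would work from structured intake answers rather than free-text keywords, and note that a single system can match more than one category at once.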

Category 1: Biometrics

Scope: AI systems for biometric identification, categorisation, or verification beyond what is prohibited under Article 5.

Common examples:

  • Airport facial recognition gates
  • Age verification systems
  • Fingerprint-based access control
  • Customer identity verification in banking (KYC)

SME relevance: If you use AI-powered biometric access control or identity verification, this category likely applies.


Category 2: Critical Infrastructure

Scope: AI used as safety components in the management and operation of critical infrastructure — digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity.

Common examples:

  • Smart grid load balancing
  • SCADA system AI components
  • Network traffic optimisation
  • Traffic management AI

SME relevance: Most relevant for utility companies, telecom providers, and infrastructure management firms.


Category 3: Education & Vocational Training

Scope: AI affecting educational access, assessment, or learning outcomes.

Covered uses: Admissions decisions, student assessment/grading, learning outcome evaluation, proctoring and cheating detection, educational resource access decisions.

Common examples: University admission scoring, automated essay grading, exam proctoring, adaptive learning platforms that gate content based on AI assessment.

SME relevance: EdTech companies and training providers — if your AI influences who gets admitted, how they're graded, or what learning resources they access.


Category 4: Employment & Worker Management

Scope: AI affecting employment decisions, worker management, and access to self-employment.

Covered uses: Recruitment and screening, job advertising targeting, CV/application filtering, interview assessment, performance evaluation, task allocation, promotion/termination decisions, workplace monitoring.

Common examples: ATS candidate ranking, video interview analysis, performance management AI, shift scheduling optimisation, productivity monitoring tools.

SME relevance: This is the most common high-risk category for SMEs. If you use any AI tool in your hiring pipeline, performance reviews, or workforce management, check this category carefully.


Category 5: Essential Private & Public Services

Scope: AI affecting access to essential services and benefits.

Covered uses: Creditworthiness assessment, credit scoring, risk assessment for life/health insurance, emergency service dispatch, public benefit eligibility, healthcare access prioritisation.

Common examples: Bank loan decisioning, insurance premium calculation, emergency triage AI, benefit fraud detection, hospital bed allocation.

SME relevance: Most relevant for FinTech and InsurTech companies, and for any firm whose AI influences access to credit, insurance, or public benefits.


Category 6: Law Enforcement

Scope: AI supporting law enforcement activities — evidence reliability assessment, crime risk assessment, profiling during investigation, lie detection, crime pattern prediction, deepfake detection in evidence.

SME relevance: Rare for most SMEs unless providing tools to law enforcement agencies.


Category 7: Migration, Asylum & Border Control

Scope: AI in immigration and border management — polygraph tools, security risk assessment, visa/asylum processing, document verification, irregular migration risk assessment.

SME relevance: Relevant for companies providing border management or document verification technology.


Category 8: Administration of Justice & Democratic Processes

Scope: AI assisting judicial and democratic institutions — researching legal facts, applying law to facts, alternative dispute resolution, election influence.

Common examples: Legal research AI, sentencing recommendation systems, contract analysis tools with decision-support, voter registration assistance.

SME relevance: LegalTech companies providing AI-powered research or analysis tools.


Category 9: Safety Components of Regulated Products

Scope: AI that is a safety component of products covered by the EU product legislation listed in Annex I — medical devices, motor vehicles, aviation, marine equipment, toys, machinery, lifts, PPE. Strictly speaking, this route is defined by Article 6(1) and Annex I rather than Annex III, but it leads to the same high-risk obligations.

Common examples: ADAS/autonomous driving components, medical diagnostic AI, industrial robot safety systems, drone flight control.

SME relevance: If you build AI that's a safety component in a regulated product, you may have both provider obligations under the AI Act and product-specific obligations under the relevant sectoral legislation.


Implications of High-Risk Classification

If your AI system matches any Annex III category, deployer obligations under Article 26 include:

  • Use per instructions: follow the provider's instructions for use.
  • Human oversight: assign competent persons with the authority to intervene or stop the system.
  • Input data quality: ensure input data is relevant and representative (where under your control).
  • Monitor operation: actively monitor the system per the provider's instructions.
  • Keep logs ≥ 6 months: retain automatically generated logs under your control for at least six months.
  • Report incidents: notify the provider and authorities of serious incidents.
  • Worker notification: inform workers and their representatives before workplace deployment.
  • FRIA (if applicable): conduct a Fundamental Rights Impact Assessment.
  • Registration (if applicable): register in the EU database (public authorities).
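The Article 26 duties split into unconditional ones and ones triggered by how you deploy. As a rough sketch, assuming a hypothetical DeployerContext with invented field names — and simplifying the real trigger conditions (for example, Article 27 also extends the FRIA to certain private deployers such as banks and insurers):

```python
from dataclasses import dataclass

@dataclass
class DeployerContext:
    """Hypothetical deployment facts; field names are assumptions,
    not an official schema."""
    is_public_authority: bool
    deployed_in_workplace: bool
    controls_input_data: bool

def article26_checklist(ctx: DeployerContext) -> list[str]:
    """Map a deployment context to the Article 26 duties listed above.
    Conditional duties are added only when their (simplified) trigger applies."""
    duties = [
        "Use per provider's instructions",
        "Assign competent human oversight",
        "Monitor operation per provider instructions",
        "Retain automatically generated logs >= 6 months",
        "Report serious incidents to provider and authorities",
    ]
    if ctx.controls_input_data:
        duties.append("Ensure input data is relevant and representative")
    if ctx.deployed_in_workplace:
        duties.append("Inform workers/representatives before deployment")
    if ctx.is_public_authority:
        duties.append("Conduct FRIA")
        duties.append("Register in EU database")
    return duties
```

A checklist like this is only a starting point for compliance tracking; the actual scope of each duty depends on the provider's instructions and the legal text.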

Best Practices

🔍 Check every system: Even if you think it's minimal risk — the Annex III categories are broad.
📋 Document your reasoning: Record why you determined a system does or doesn't match a category.
🤝 Involve the business: System users often know details about impact that the compliance team doesn't.
🔄 Reassess on change: New features, new user groups, or new deployment regions may change the category match.
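The "reassess on change" practice can be wired into tooling as a simple diff over the attributes that affect classification. A minimal sketch, using hypothetical attribute names:

```python
# Attributes whose change should trigger reclassification (illustrative
# names; use whatever fields your system intake record actually tracks).
REASSESSMENT_TRIGGERS = ("features", "user_groups", "deployment_regions")

def needs_reassessment(old: dict, new: dict) -> bool:
    """True when any classification-relevant attribute differs between
    the recorded system profile and its current state."""
    return any(old.get(key) != new.get(key) for key in REASSESSMENT_TRIGGERS)
```

Running a check like this on every release or deployment change is a cheap way to make sure category matches don't silently go stale.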