HIGH-RISK CLASSIFICATION

Is your AI system high-risk? KlarvoEngine tells you in 60 seconds.

Describe your system. KlarvoEngine maps it against all 8 Annex III domains and tells you — with article citations — exactly what category it falls in.

The 8 Annex III categories

1. Biometrics

Remote biometric identification (not 1:1 verification), biometric categorisation by inference, emotion recognition.

Examples: Facial recognition at building entrances, AI that categorises people by ethnicity from photos.

2. Critical infrastructure

Safety components in management of critical digital infrastructure, road traffic, water/gas/heating/electricity supply.

Examples: AI controlling power grid distribution, AI traffic management systems.

3. Education

Admissions decisions, learning outcome evaluation, education level assessment, exam behaviour monitoring.

Examples: AI proctoring software, automated grading systems, AI-driven student admissions scoring.

4. Employment

Recruitment and selection (job ads targeting, application filtering, candidate evaluation), employment decisions (promotion, termination, task allocation, performance monitoring).

Examples: AI CV screening, automated interview scoring, AI performance dashboards used for promotion decisions.

5. Essential services

Public assistance eligibility, creditworthiness evaluation (exception: fraud detection), life/health insurance risk assessment and pricing, emergency call classification.

Examples: AI credit scoring, AI-driven benefits eligibility checks, insurance pricing algorithms.

6. Law enforcement

Victim risk assessment, polygraphs, evidence reliability assessment, re-offending risk (not solely profiling-based), profiling during investigation.

Examples: AI evidence analysis, recidivism risk tools.

7. Migration and border control

Polygraphs, risk assessment, application examination, detection/identification of persons (exception: travel document verification).

Examples: AI visa screening, border risk assessment.

8. Justice and democracy

Assisting judicial authorities in fact/law research, influencing election outcomes or voting behaviour (exception: campaign logistics tools).

Examples: AI legal research assistants used in courts, election influence systems.
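As a toy illustration of the mapping task, matching a system description against the categories above can be sketched as a keyword lookup. The keyword lists and function names below are invented examples for illustration only, not KlarvoEngine's actual 3-pass classification logic:

```python
# Invented keyword lists for three of the eight Annex III categories.
ANNEX_III_KEYWORDS = {
    "1. Biometrics": ["facial recognition", "emotion recognition", "biometric"],
    "4. Employment": ["cv screening", "recruitment", "promotion"],
    "5. Essential services": ["credit scoring", "insurance pricing"],
}

def candidate_categories(description: str) -> list[str]:
    """Return the Annex III categories whose keywords appear in the description."""
    text = description.lower()
    return [cat for cat, keys in ANNEX_III_KEYWORDS.items()
            if any(k in text for k in keys)]
```

A real classifier would of course reason over purpose and context rather than keywords; this only shows the shape of the mapping from description to candidate categories.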

The critical exemption — Article 6(3)

Not everything in Annex III is automatically high-risk. A system is NOT high-risk if it meets at least one of the following conditions:

  • Performs a narrow procedural task
  • Is intended to improve the result of a previously completed human activity
  • Detects decision-making patterns, or deviations from them, without replacing or influencing the prior human assessment without proper review
  • Performs a preparatory task to an assessment relevant to an Annex III use case

However: AI systems that profile natural persons are always high-risk, regardless of these exemptions.

The Commission's guidelines clarifying this exemption are overdue (missed the February 2, 2026 deadline).
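The exemption rules above reduce to a small decision tree: an Annex III match is the entry condition, profiling overrides every exemption, and any one exemption condition otherwise takes the system out of scope. A minimal sketch (all names are hypothetical; this is not KlarvoEngine's implementation):

```python
from dataclasses import dataclass

@dataclass
class System:
    matches_annex_iii: bool
    profiles_natural_persons: bool
    # The four Article 6(3) exemption conditions:
    narrow_procedural_task: bool = False
    improves_prior_human_activity: bool = False
    detects_patterns_only: bool = False
    preparatory_task: bool = False

def is_high_risk(s: System) -> bool:
    """Sketch of the Article 6(3) decision tree for Annex III systems."""
    if not s.matches_annex_iii:
        return False  # not in an Annex III category at all
    if s.profiles_natural_persons:
        return True   # profiling is always high-risk, exemptions do not apply
    exempt = (s.narrow_procedural_task
              or s.improves_prior_human_activity
              or s.detects_patterns_only
              or s.preparatory_task)
    return not exempt
```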

What high-risk compliance requires

If your AI is high-risk, you need all of this (Articles 8-17, 43, 47-49, 72-73):

Risk management system — continuous, iterative, throughout the AI lifecycle
Data governance — relevant, representative and, to the best extent possible, error-free training/validation/testing data
Technical documentation — pre-market, kept current, available to authorities
Automatic logging — event recording for traceability
Transparency to deployers — instructions covering capabilities, limitations, oversight measures
Human oversight — ability to override, interrupt, or shut down the system
Accuracy and robustness — declared accuracy levels, resilience, failsafes
Cybersecurity — protection against manipulation
Quality management system — documented policies covering all compliance areas
Conformity assessment — self-assessment (internal control) for most systems; notified-body assessment for biometric systems where harmonised standards are not fully applied
CE marking and EU Declaration of Conformity
EU database registration — before placing on market
Post-market monitoring — documented plan
Serious incident reporting — within 15 days (10 for death, 2 for widespread impact)
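The reporting windows in the last item can be captured as a small lookup. The key names below are illustrative only:

```python
# Serious-incident reporting windows (days), per the summary above.
REPORTING_DEADLINES_DAYS = {
    "death": 10,              # incident resulting in death of a person
    "widespread_impact": 2,   # widespread impact
    "other": 15,              # all other serious incidents
}

def reporting_deadline(incident_type: str) -> int:
    """Days allowed to report a serious incident of the given type."""
    return REPORTING_DEADLINES_DAYS[incident_type]
```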

Cost estimates

Estimated costs: €200,000–€500,000 for initial implementation (50–250 employees), €70,000–€150,000/year ongoing.

Timeline

Original deadline: August 2, 2026. The Digital Omnibus proposal (supported by European Parliament 101-9 on March 18, 2026) would push Annex III to December 2, 2027 and Annex I to August 2, 2028. Expert consensus: prepare as if August 2026 is real, plan as if December 2027 is the likely date.

See the full enforcement timeline →

60-second high-risk classification

KlarvoEngine maps your system against all 8 Annex III domains and identifies exactly which high-risk category applies — with article citations.

Classify your systems →

Classify your AI systems. Free.

KlarvoEngine — 3-pass regulatory classification with article-cited memos. No credit card required.

Run free classification