The Classification Engine Explained
This is KlarvoEngine, Klarvo's four-stage regulatory classification pipeline. See the KlarvoEngine Overview for the full guide.
Klarvo's Classification Engine is the core logic that determines your AI system's risk level and maps applicable EU AI Act obligations. Every wizard answer feeds into this engine, which processes decisions through four sequential stages.
How Classification Works
The engine follows a strict sequential flow — each stage can block progression to the next:
Stage 1: AI System Definition → Is this an AI system under the Act?
↓
Stage 2: Prohibited Screening → Any Article 5 red flags?
↓
Stage 3: High-Risk Screening → Annex III category match?
↓
Stage 4: Transparency Check → Article 50 obligations?
↓
Final Classification + Obligation Mapping
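In code, the flow behaves like a short-circuiting pipeline: each stage reports an outcome, and anything other than "continue" stops the run. The sketch below is illustrative only; the stage interface, the Outcome values, and the run_pipeline helper are assumptions, not Klarvo's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable


class Outcome(Enum):
    CONTINUE = auto()      # proceed to the next stage
    OUT_OF_SCOPE = auto()  # Stage 1: not an AI system under Article 3(1)
    BLOCKED = auto()       # Stage 2: potential Article 5 practice


@dataclass
class StageResult:
    stage: str
    outcome: Outcome
    notes: str = ""


def run_pipeline(answers: dict, stages: list[Callable[[dict], StageResult]]) -> list[StageResult]:
    """Run the stages in order, stopping as soon as one blocks progression."""
    results: list[StageResult] = []
    for stage in stages:
        result = stage(answers)
        results.append(result)
        if result.outcome is not Outcome.CONTINUE:
            break  # later stages never run once a stage blocks
    return results
```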
Stage 1: AI System Definition
The EU AI Act applies only to "AI systems" as defined in Article 3(1). The Commission has published guidelines to help interpret this definition.
Key criteria evaluated:
- Machine-based system
- Operates with varying levels of autonomy
- May exhibit adaptiveness after deployment
- Infers from its inputs how to generate outputs such as predictions, content, recommendations, or decisions
- Outputs can influence physical or virtual environments
Possible outcomes:
💡 Systems that are "Likely Not AI" still benefit from good governance. Keeping them in your inventory shows due diligence in your assessment process.
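As a rough illustration of how Stage 1 can be evaluated, the snippet below treats the definition check as a set of required criteria drawn from Article 3(1); the answer keys and the all-criteria rule are assumptions, not the engine's real logic.

```python
# Hypothetical Stage 1 check. The criteria mirror the Article 3(1) definition,
# but the answer keys and the simple "all required" rule are assumptions.
ARTICLE_3_1_CRITERIA = [
    "machine_based",                   # machine-based system
    "operates_with_autonomy",          # varying levels of autonomy
    "infers_outputs_from_inputs",      # infers how to generate outputs from inputs
    "outputs_influence_environments",  # outputs affect physical or virtual environments
]
# Adaptiveness after deployment is optional in the definition ("may exhibit"),
# so it is not treated as a required criterion here.


def stage1_is_ai_system(answers: dict) -> bool:
    """Return True when every required Article 3(1) criterion is answered 'yes'."""
    return all(answers.get(key, False) for key in ARTICLE_3_1_CRITERIA)
```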
Stage 2: Prohibited Practices (Article 5)
This stage is a hard stop in the classification flow. Eight questions screen against the Article 5 prohibited practices, whose bans have applied since 2 February 2025:
Possible outcomes:
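The hard-stop behaviour itself is simple to sketch: one boolean answer per Article 5 practice, and any single "yes" blocks the classification. The key names below paraphrase the prohibitions and are not Klarvo's question wording.

```python
# Sketch of the Stage 2 hard stop; the eight keys paraphrase the Article 5
# prohibited practices, not Klarvo's actual questions.
ARTICLE_5_RED_FLAGS = [
    "subliminal_or_manipulative_techniques",
    "exploitation_of_vulnerabilities",
    "social_scoring",
    "crime_prediction_from_profiling_alone",
    "untargeted_facial_image_scraping",
    "emotion_recognition_in_work_or_education",
    "biometric_categorisation_of_sensitive_attributes",
    "realtime_remote_biometric_id_for_law_enforcement",
]


def stage2_has_prohibited_practice(answers: dict) -> bool:
    """Any single 'yes' blocks the classification (hard stop)."""
    return any(answers.get(flag, False) for flag in ARTICLE_5_RED_FLAGS)
```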
Stage 3: High-Risk Screening (Annex III)
Nine use-case categories where AI systems are considered "high-risk" (categories 1–8 come from Annex III; category 9 covers the Article 6(1) safety-component route):
| # | Category | Common SME Examples |
| --- | --- | --- |
| 1 | Biometrics | Facial recognition access control, age verification |
| 2 | Critical Infrastructure | Smart building management, energy optimization |
| 3 | Education | LMS with AI grading, proctoring tools |
| 4 | Employment | ATS screening, performance analytics, scheduling AI |
| 5 | Essential Services | Credit scoring, insurance underwriting |
| 6 | Law Enforcement | Forensic tools (rare for SMEs) |
| 7 | Migration | Document verification (rare for SMEs) |
| 8 | Justice | Legal research AI, contract analysis |
| 9 | Safety Components | Medical device AI, ADAS components |
Possible outcomes:
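Conceptually, the category screen is a lookup from the use-case categories a system touches to the labels in the table above. The mapping and function below are illustrative assumptions, not the engine's data model.

```python
# Illustrative Stage 3 lookup. Rows 1-8 mirror Annex III; row 9 is the
# Article 6(1) safety-component route shown in the table above.
HIGH_RISK_CATEGORIES = {
    1: "Biometrics",
    2: "Critical infrastructure",
    3: "Education and vocational training",
    4: "Employment and workers management",
    5: "Access to essential private and public services",
    6: "Law enforcement",
    7: "Migration, asylum and border control",
    8: "Administration of justice and democratic processes",
    9: "Safety components (Article 6(1) / Annex I route)",
}


def stage3_matched_categories(selected_ids: set[int]) -> list[str]:
    """Return the high-risk categories a use case matches; empty means no match."""
    return [HIGH_RISK_CATEGORIES[i] for i in sorted(selected_ids) if i in HIGH_RISK_CATEGORIES]
```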
Stage 4: Transparency Check (Article 50)
Even systems that are not high-risk may carry transparency obligations under Article 50:
- Systems that interact directly with people must disclose that the user is interacting with AI
- Synthetic audio, image, video, or text outputs must be marked as artificially generated or manipulated
- People exposed to emotion recognition or biometric categorisation systems must be informed
- Deepfakes must be disclosed as artificially generated or manipulated
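A minimal sketch of the transparency check, assuming one boolean trigger per Article 50 obligation; the trigger names are hypothetical, and the duties summarise the obligations listed above.

```python
# Minimal Stage 4 sketch; the trigger keys are assumptions.
ARTICLE_50_TRIGGERS = {
    "interacts_directly_with_people": "Tell users they are interacting with an AI system",
    "generates_synthetic_content": "Mark outputs as artificially generated or manipulated",
    "emotion_recognition_or_biometric_categorisation": "Inform the people exposed to the system",
    "creates_deepfakes": "Disclose that the content is artificially generated or manipulated",
}


def stage4_transparency_obligations(answers: dict) -> list[str]:
    """Return the Article 50 disclosures that apply; an empty list means none."""
    return [duty for flag, duty in ARTICLE_50_TRIGGERS.items() if answers.get(flag, False)]
```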
Final Classification Output
| Level | Meaning | Key Obligations |
| --- | --- | --- |
| Minimal Risk | No specific EU AI Act requirements | Best practices, internal governance |
| Limited Risk | Transparency obligations apply | Article 50 disclosures |
| High-Risk Candidate | Full deployer duties | Article 26 controls, logging ≥6 months, oversight, FRIA if applicable |
| Blocked | Potential prohibited practice | Legal review required; use suspended |
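The output table translates directly into a lookup from risk level to key obligations. The enum and dictionary below simply mirror the table; the identifiers themselves are assumptions.

```python
from enum import Enum


class RiskLevel(Enum):
    MINIMAL = "Minimal Risk"
    LIMITED = "Limited Risk"
    HIGH_RISK_CANDIDATE = "High-Risk Candidate"
    BLOCKED = "Blocked"


KEY_OBLIGATIONS = {
    RiskLevel.MINIMAL: ["Best practices", "Internal governance"],
    RiskLevel.LIMITED: ["Article 50 disclosures"],
    RiskLevel.HIGH_RISK_CANDIDATE: [
        "Article 26 deployer controls",
        "Log retention of at least 6 months",
        "Human oversight",
        "FRIA where applicable",
    ],
    RiskLevel.BLOCKED: ["Legal review required", "Use suspended"],
}
```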
Classification Confidence
Each classification includes a confidence level:
Audit Trail
Every classification is fully auditable:
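As a sketch of what an auditable classification record might hold, the dataclass below bundles the final level, its confidence, the per-stage outcomes, the wizard answers that drove them, and a timestamp. Every field name here is a hypothetical illustration, not Klarvo's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ClassificationAuditRecord:
    system_name: str
    final_level: str                   # e.g. "Limited Risk"
    confidence: str                    # the confidence level attached to the result
    stage_outcomes: dict[str, str]     # stage name -> outcome at that stage
    wizard_answers: dict[str, object]  # the answers that drove the classification
    classified_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```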