
The Classification Engine Explained

Deep dive into how Klarvo's classification engine determines risk levels and maps applicable EU AI Act obligations through a four-stage sequential assessment.

This article covers KlarvoEngine, Klarvo's four-stage regulatory classification pipeline. See KlarvoEngine Overview for the full guide.

Klarvo's Classification Engine is the core logic that determines your AI system's risk level and maps applicable EU AI Act obligations. Every wizard answer feeds into this engine, which processes decisions through four sequential stages.

How Classification Works

The engine follows a strict sequential flow — each stage can block progression to the next:

Stage 1: AI System Definition → Is this an AI system under the Act?

Stage 2: Prohibited Screening → Any Article 5 red flags?

Stage 3: High-Risk Screening → Annex III category match?

Stage 4: Transparency Check → Article 50 obligations?

Final Classification + Obligation Mapping
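
To make the early-exit behaviour of this flow concrete, here is a minimal sketch in TypeScript. The type names and function signatures below are illustrative assumptions for explanation, not Klarvo's actual API.

```typescript
// Illustrative sketch only -- names and shapes are assumptions, not Klarvo's API.
type RiskLevel = "Minimal" | "Limited" | "HighRiskCandidate" | "Blocked";

type StageResult =
  | { kind: "continue" }                      // proceed to the next stage
  | { kind: "blocked"; reason: string }       // hard stop (e.g. an Article 5 flag)
  | { kind: "classified"; level: RiskLevel }; // final level decided at this stage

interface Stage {
  name: string;
  run(answers: Map<string, string>): StageResult;
}

// Each stage can block progression to the next, mirroring the flow above.
function classify(answers: Map<string, string>, stages: Stage[]): RiskLevel {
  for (const stage of stages) {
    const result = stage.run(answers);
    if (result.kind === "blocked") return "Blocked";
    if (result.kind === "classified") return result.level;
    // "continue": fall through to the next stage
  }
  // No stage produced a classification: no specific obligations identified.
  return "Minimal";
}
```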

Stage 1: AI System Definition

The EU AI Act applies only to "AI systems" as defined in Article 3(1). The Commission has published guidelines to help interpret this definition.

Key criteria evaluated:

  • Does it infer outputs from inputs to achieve objectives?
  • Does it produce predictions, recommendations, decisions, classifications, or generated content?
  • Does it operate with some degree of autonomy (not purely manual rules)?
  • Does it use ML, deep learning, statistical, or logic-based approaches?

Possible outcomes:

  • Likely AI System → Continues to Stage 2
  • Likely Not AI System → Recorded as "Out of scope" but remains in inventory for governance purposes
  • ⚠️ Needs Review → Flagged for expert/legal review; task auto-created

💡 Systems that are "Likely Not AI" still benefit from good governance. Keeping them in your inventory shows due diligence in your assessment process.
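
As a hedged illustration, a Stage 1 evaluator could map wizard answers to one of the three outcomes above. The criterion IDs and the "all criteria yes" rule here are assumptions made for the sketch, not the engine's documented logic.

```typescript
// Hypothetical Stage 1 sketch: wizard answers are "yes" | "no" | "unsure".
type Answer = "yes" | "no" | "unsure";
type Stage1Outcome = "LikelyAISystem" | "LikelyNotAISystem" | "NeedsReview";

// Criterion IDs are invented for illustration.
const STAGE1_CRITERIA = ["infersOutputs", "producesOutputs", "hasAutonomy", "usesAITechniques"];

function stage1(answers: Record<string, Answer>): Stage1Outcome {
  const values = STAGE1_CRITERIA.map((id) => answers[id]);
  // Any uncertainty is flagged for expert/legal review rather than guessed at.
  if (values.some((v) => v === "unsure")) return "NeedsReview";
  return values.every((v) => v === "yes") ? "LikelyAISystem" : "LikelyNotAISystem";
}
```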

Stage 2: Prohibited Practices (Article 5)

This stage is a hard stop in the classification flow. Eight questions screen against the prohibited AI practices that have applied since 2 February 2025:

  • Harmful manipulation or deception causing significant harm
  • Exploitation of vulnerabilities (age, disability, socio-economic)
  • Social scoring for unrelated decisions
  • Criminal risk prediction based solely on profiling/personality traits
  • Untargeted facial recognition database scraping
  • Workplace/education emotion inference
  • Biometric categorisation revealing protected characteristics
  • Real-time remote biometric ID in public spaces for law enforcement

Possible outcomes:

  • No indicators → Continues to Stage 3
  • ⚠️ Potential prohibited practice → Classification set to "Blocked", legal review task created, no further classification until cleared
  • Unsure on any question → Also triggers legal review
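
A minimal sketch of that hard-stop behaviour, assuming the eight screens return yes/no/unsure answers as in the earlier sketch:

```typescript
// Hypothetical Stage 2 sketch: any "yes" or "unsure" blocks classification.
type Answer = "yes" | "no" | "unsure";
type Stage2Outcome = "Continue" | "Blocked";

function stage2(screenAnswers: Answer[]): Stage2Outcome {
  // A potential Article 5 match or any uncertainty triggers legal review;
  // the classification stays "Blocked" until the review clears it.
  const flagged = screenAnswers.some((v) => v === "yes" || v === "unsure");
  return flagged ? "Blocked" : "Continue";
}
```
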
Stage 3: High-Risk Screening (Annex III)

Annex III lists nine use-case categories in which AI systems are considered "high-risk":

#   Category                  Common SME Examples
1   Biometrics                Facial recognition access control, age verification
2   Critical Infrastructure   Smart building management, energy optimization
3   Education                 LMS with AI grading, proctoring tools
4   Employment                ATS screening, performance analytics, scheduling AI
5   Essential Services        Credit scoring, insurance underwriting
6   Law Enforcement           Forensic tools (rare for SMEs)
7   Migration                 Document verification (rare for SMEs)
8   Justice                   Legal research AI, contract analysis
9   Safety Components         Medical device AI, ADAS components

Possible outcomes:

  • No matches → Continues to Stage 4
  • ⚠️ High-Risk Candidate → Full deployer obligations (Article 26) apply
  • 🏭 Safety Component → Provider obligations may additionally apply
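
The category match itself can be modelled as a lookup from the wizard's selected use cases to Annex III categories. The category IDs and the safety-component flag in this sketch are invented for illustration.

```typescript
// Hypothetical Stage 3 sketch: match selected use cases to Annex III categories.
const ANNEX_III = [
  "biometrics", "criticalInfrastructure", "education", "employment",
  "essentialServices", "lawEnforcement", "migration", "justice", "safetyComponents",
] as const;

type AnnexCategory = (typeof ANNEX_III)[number];

interface Stage3Result {
  matches: AnnexCategory[];   // empty => continue to Stage 4
  safetyComponent: boolean;   // may add provider obligations on top of Article 26
}

function stage3(selectedUseCases: string[]): Stage3Result {
  const matches = ANNEX_III.filter((c) => selectedUseCases.includes(c));
  return { matches, safetyComponent: matches.includes("safetyComponents") };
}
```
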
Stage 4: Transparency Check (Article 50)

Even non-high-risk systems may have transparency obligations:

  • Direct AI interaction → Must inform people unless obvious
  • Synthetic content generation → Must mark outputs as AI-generated in machine-readable way
  • Emotion recognition → Must inform exposed persons
  • Deepfake generation → Must disclose artificial nature
  • Public-interest text → Must disclose unless editorial control exception applies
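
These triggers map naturally to a set of Article 50 disclosure duties. In the sketch below, the trigger names are invented and the duty texts paraphrase the list above; this is not Klarvo's actual mapping table.

```typescript
// Hypothetical Stage 4 sketch: map transparency triggers to Article 50 duties.
const ARTICLE_50_DUTIES: Record<string, string> = {
  directInteraction: "Inform people they are interacting with AI, unless obvious",
  syntheticContent: "Mark outputs as AI-generated in a machine-readable way",
  emotionRecognition: "Inform exposed persons",
  deepfake: "Disclose the artificial nature of the content",
  publicInterestText: "Disclose unless the editorial-control exception applies",
};

function stage4(triggers: string[]): string[] {
  return triggers.filter((t) => t in ARTICLE_50_DUTIES).map((t) => ARTICLE_50_DUTIES[t]);
}
```
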
Final Classification Output

Level                 Meaning                              Key Obligations
Minimal Risk          No specific EU AI Act requirements   Best practices, internal governance
Limited Risk          Transparency obligations apply       Article 50 disclosures
High-Risk Candidate   Full deployer duties                 Article 26 controls, logging ≥ 6 months, oversight, FRIA if applicable
Blocked               Potential prohibited practice        Legal review required; use suspended

Classification Confidence

Each classification includes a confidence level:

  • High: Clear, unambiguous answers across all stages
  • Medium: Some "Unsure" answers or edge cases — recommend reviewer sign-off
  • Low: Multiple uncertain areas — requires expert input before relying on classification
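
One plausible way to derive such a level is to count uncertain answers across all stages. The thresholds below are assumptions for the sketch, not Klarvo's documented rules.

```typescript
// Hypothetical confidence heuristic: more "unsure" answers => lower confidence.
type Answer = "yes" | "no" | "unsure";
type Confidence = "High" | "Medium" | "Low";

function confidence(allAnswers: Answer[]): Confidence {
  const unsure = allAnswers.filter((v) => v === "unsure").length;
  if (unsure === 0) return "High"; // clear, unambiguous answers
  if (unsure <= 2) return "Medium"; // recommend reviewer sign-off
  return "Low"; // requires expert input before relying on the classification
}
```
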
Audit Trail

Every classification is fully auditable:

  • All questions and answers stored with timestamps
  • Decision path documented at each stage
  • Reviewer name and approval date captured
  • Version history maintained for re-classifications
  • Override reasons recorded when human judgment differs from engine suggestion
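
To show what such an audit record could look like, here is a hedged TypeScript shape covering the points above. The field names are illustrative, not Klarvo's schema.

```typescript
// Illustrative audit-record shape; field names are assumptions, not Klarvo's schema.
interface ClassificationAuditRecord {
  answers: { questionId: string; answer: string; answeredAt: string }[]; // ISO timestamps
  decisionPath: { stage: string; outcome: string }[]; // decision at each stage
  reviewer?: { name: string; approvedAt: string };    // captured on sign-off
  version: number;                                    // incremented on re-classification
  overrideReason?: string; // recorded when human judgment differs from the engine
}
```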