Classification & Risk Assessment
6 min read · Updated 2026-02-15

AI System Definition Test

How Klarvo determines whether your system qualifies as an 'AI system' under the EU AI Act definition, aligned with Commission interpretation guidelines.

Powered by KlarvoEngine: The AI system definition test is performed automatically during Pass 1 of KlarvoEngine classification. You don't need to run it separately — KlarvoEngine checks Article 3(1) criteria as part of every classification. See How KlarvoEngine Works.

The EU AI Act applies only to systems that meet its specific definition of an "AI system" (Article 3(1)). The European Commission has published guidelines to help organisations determine whether a particular system falls within scope. Klarvo's Definition Test operationalises these guidelines into a structured questionnaire.

Article 3(1) defines an AI system as:

A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, recommendations, decisions, or content that can influence physical or virtual environments.

What the Definition Test Evaluates

The wizard asks five key questions that map directly to elements of this definition:

Question 1: Does the system infer outputs from inputs?

What "infer" means: The system generates outputs that go beyond simple rule execution — it derives patterns, makes predictions, or produces content that wasn't explicitly programmed for every possible input.

Examples that qualify: A chatbot generating responses, a recommendation engine suggesting products, a classifier categorising images.

Examples that don't qualify: A simple IF/THEN rule engine with fully deterministic logic, a lookup table, a basic calculator.
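The inference criterion can be illustrated in code. The two functions below are hypothetical examples (not part of Klarvo or KlarvoEngine): the first is fully deterministic rule execution, the second derives its output from learned weights.

```python
# Deterministic rule execution: every input/output mapping is explicitly
# programmed, so a system like this would NOT meet the "infers" criterion.
def shipping_cost(weight_kg: float) -> float:
    if weight_kg <= 1:
        return 4.99
    elif weight_kg <= 5:
        return 9.99
    return 19.99

# Inference: the weights are learned from data (e.g. via logistic
# regression), so outputs are derived rather than explicitly programmed
# for every possible input. A system like this WOULD meet the criterion.
def sentiment_score(text: str, weights: dict[str, float]) -> float:
    # Sum the learned weight of each word; unseen words contribute 0.
    return sum(weights.get(word, 0.0) for word in text.lower().split())
```

The distinction is not about complexity: the rule engine stays out of scope however many branches it has, while even a tiny learned model infers.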

Question 2: What types of outputs does it produce?

Select all that apply:

  • Predictions (forecasting future outcomes)
  • Recommendations (suggesting actions)
  • Decisions (making choices autonomously or semi-autonomously)
  • Classifications (categorising inputs into groups)
  • Generated content (text, images, audio, video)
  • Scores (numerical ratings or rankings)
Systems producing at least one of these output types are more likely to qualify.

Question 3: Does it operate with some degree of autonomy?

Autonomy means the system can function without full human direction at every step. It doesn't mean fully autonomous — even systems with human-in-the-loop can have operational autonomy in generating outputs.

Key distinction: A system where a human reviews and approves every output still operates autonomously in generating those outputs.

Question 4: Does it adapt or learn after deployment?

  • Yes: The system updates its model, weights, or behaviour based on new data after deployment (e.g., online learning, fine-tuning in production)
  • No: The model is static after deployment — same inputs always produce the same outputs
  • Unknown: You're not sure whether the vendor updates the model continuously

Adaptiveness is part of the definition but is not strictly required — a static ML model still qualifies if it meets the other criteria.

Question 5: What technical approach is used?

Select all that apply:

  • Machine learning (supervised, unsupervised, reinforcement)
  • Deep learning (neural networks)
  • Large language models (GPT, Claude, Gemini, etc.)
  • Statistical models (regression, Bayesian)
  • Rules/logic-based with inference capability
  • Optimization algorithms
  • Unknown

Interpreting the Result

| Result | Meaning | Next Steps |
| --- | --- | --- |
| Likely AI System | Meets the definition criteria | Proceed to prohibited practices screening |
| Likely Not AI System | Doesn't meet key criteria | Stays in inventory as "Out of scope"; memo generated |
| Needs Review | Ambiguous answers; edge case | Task created for compliance/legal review |

Common Edge Cases

Simple chatbots with scripted responses: If the chatbot follows a fixed decision tree with no ML, it's likely not an AI system. If it uses NLP/LLM for response generation, it likely is.

Business intelligence dashboards: Standard BI tools (SQL queries, pivot tables) are typically not AI systems. If they include predictive analytics, anomaly detection, or ML-driven recommendations, they may qualify.

RPA (Robotic Process Automation): Pure RPA executing scripted steps is typically not AI. RPA with ML-driven decision points or document understanding likely qualifies.

Spreadsheet formulas and macros: Not AI systems, even if complex. However, spreadsheet add-ins using ML models in the background would qualify.

Storing Your Result

The Definition Test result is permanently stored in your AI system record with:

  • Your conclusion and confidence level (High/Medium/Low)
  • Written rationale explaining your reasoning
  • Reviewer name and date
  • All individual question answers

This creates an auditable record demonstrating you assessed whether the Act applies — an important element of due diligence even when the answer is "Likely Not."
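A stored record of this shape might serialise as JSON for audit export. The field names and values below are illustrative assumptions, not Klarvo's actual schema:

```python
import json

# Hypothetical shape of a stored Definition Test record.
record = {
    "conclusion": "Likely AI System",
    "confidence": "High",  # High / Medium / Low
    "rationale": "Uses an LLM to generate responses; outputs are "
                 "inferred, not scripted.",
    "reviewer": "J. Smith",
    "reviewed_on": "2026-02-15",
    "answers": {
        "q1_infers_outputs": True,
        "q2_output_types": ["generated content"],
        "q3_autonomy": True,
        "q4_adaptiveness": "No",
        "q5_techniques": ["large language models"],
    },
}

# Serialise for export; round-trips cleanly because all values are
# plain JSON types.
exported = json.dumps(record, indent=2)
```

Keeping the individual question answers alongside the conclusion is what makes the record auditable: a reviewer can reconstruct why the conclusion was reached.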

Best Practices

🔍 When in doubt, include it: It's better to classify a borderline system than to exclude it and face questions later
📝 Write clear rationale: Explain your reasoning in plain language — an auditor should understand why you reached your conclusion
👥 Get a second opinion: For edge cases, assign a reviewer to confirm the conclusion
🔄 Reassess on changes: If the system's capabilities change (e.g., vendor adds ML features), re-run the definition test