AI System Definition Test
Powered by KlarvoEngine: The AI system definition test is performed automatically during Pass 1 of KlarvoEngine classification. You don't need to run it separately — KlarvoEngine checks Article 3(1) criteria as part of every classification. See How KlarvoEngine Works.
The EU AI Act applies only to systems that meet its specific definition of an "AI system" (Article 3(1)). The European Commission has published guidelines to help organisations determine whether a particular system falls within scope. Klarvo's Definition Test operationalises these guidelines into a structured questionnaire.
The Legal Definition
Article 3(1) defines an AI system as:
A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
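The definition combines four cumulative elements (machine-based, some autonomy, inference, environmental influence) plus one optional element (adaptiveness). A minimal sketch of that structure, purely illustrative and not Klarvo's actual data model:

```python
from dataclasses import dataclass

@dataclass
class DefinitionCheck:
    """Article 3(1) elements as yes/no checks (illustrative only)."""
    machine_based: bool           # runs as hardware/software
    some_autonomy: bool           # operates without full step-by-step human direction
    infers_outputs: bool          # derives outputs from inputs beyond fixed rules
    influences_environment: bool  # outputs can affect physical or virtual environments
    adaptive: bool = False        # "may exhibit adaptiveness" is optional

    def meets_definition(self) -> bool:
        # Adaptiveness is deliberately excluded from the test:
        # the Act says the system "may" exhibit it, not "must".
        return (self.machine_based and self.some_autonomy
                and self.infers_outputs and self.influences_environment)
```

Note that a static, non-learning model still passes `meets_definition()` as long as the four cumulative elements hold.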
What the Definition Test Evaluates
The wizard asks five key questions that map directly to elements of this definition:
Question 1: Does the system infer outputs from inputs?
What "infer" means: The system generates outputs that go beyond simple rule execution — it derives patterns, makes predictions, or produces content that wasn't explicitly programmed for every possible input.
Examples that qualify: A chatbot generating responses, a recommendation engine suggesting products, a classifier categorising images.
Examples that don't qualify: A simple IF/THEN rule engine with fully deterministic logic, a lookup table, a basic calculator.
Question 2: What types of outputs does it produce?
Select all that apply. Article 3(1) names four output types:
- Predictions
- Recommendations
- Decisions
- Content (e.g. generated text, images, or audio)

Systems producing at least one of these output types are more likely to qualify.
Question 3: Does it operate with some degree of autonomy?
Autonomy means the system can function without full human direction at every step. It doesn't mean fully autonomous — even systems with human-in-the-loop can have operational autonomy in generating outputs.
Key distinction: A system where a human reviews and approves every output still operates autonomously in generating those outputs.
Question 4: Does it adapt or learn after deployment?
Adaptiveness is part of the definition but is not strictly required — a static ML model still qualifies if it meets the other criteria.
Question 5: What technical approach is used?
Select all that apply. Recital 12 of the Act names two broad families of techniques that enable inference:
- Machine learning approaches that learn from data (e.g. supervised, unsupervised, or reinforcement learning, including deep learning)
- Logic- and knowledge-based approaches that infer from encoded knowledge (e.g. rules engines with inference, expert systems)
Interpreting the Result
| Result | Meaning | Next Steps |
| --- | --- | --- |
| Likely AI System | Meets the definition criteria | Proceed to prohibited practices screening |
| Likely Not AI System | Doesn't meet key criteria | Stays in inventory as "Out of scope"; memo generated |
| Needs Review | Ambiguous answers; edge case | Task created for compliance/legal review |
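One plausible way the three results could be derived from the questionnaire answers is sketched below. This is a hypothetical simplification; KlarvoEngine's actual decision logic is not described here:

```python
def definition_result(infers: bool, produces_listed_outputs: bool,
                      some_autonomy: bool, answers_uncertain: bool = False) -> str:
    """Map questionnaire answers to one of the three results (illustrative)."""
    if answers_uncertain:
        return "Needs Review"        # ambiguous answers -> compliance/legal task
    if infers and produces_listed_outputs and some_autonomy:
        return "Likely AI System"    # proceed to prohibited practices screening
    return "Likely Not AI System"    # stays in inventory as "Out of scope"
```

The "Needs Review" branch is checked first: any ambiguity routes the system to a human reviewer before a scope conclusion is recorded.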
Common Edge Cases
Simple chatbots with scripted responses: If the chatbot follows a fixed decision tree with no ML, it's likely not an AI system. If it uses NLP/LLM for response generation, it likely is.
Business intelligence dashboards: Standard BI tools (SQL queries, pivot tables) are typically not AI systems. If they include predictive analytics, anomaly detection, or ML-driven recommendations, they may qualify.
RPA (Robotic Process Automation): Pure RPA executing scripted steps is typically not AI. RPA with ML-driven decision points or document understanding likely qualifies.
Spreadsheet formulas and macros: Not AI systems, even if complex. However, spreadsheet add-ins using ML models in the background would qualify.
Storing Your Result
The Definition Test result is permanently stored in your AI system record, together with your answers and rationale.
This creates an auditable record demonstrating you assessed whether the Act applies, an important element of due diligence even when the answer is "Likely Not AI System".
Best Practices
🔍 When in doubt, include it: It's better to classify a borderline system than to exclude it and face questions later
📝 Write clear rationale: Explain your reasoning in plain language — an auditor should understand why you reached your conclusion
👥 Get a second opinion: For edge cases, assign a reviewer to confirm the conclusion
🔄 Reassess on changes: If the system's capabilities change (e.g., vendor adds ML features), re-run the definition test