Incident Management
Under Article 26, deployers of high-risk AI systems must monitor operation, report serious incidents, and be prepared to suspend use when risk is identified. Klarvo's incident management system provides the structure for tracking, responding to, and documenting AI-related incidents.
What Counts as an Incident?
AI incidents requiring documentation include:
Severity Levels
| Level | Description | Response Time | Escalation |
|-------|-------------|---------------|------------|
| Critical | Immediate harm, safety risk, potential prohibited practice | < 24 hours | Leadership + legal + authority (if serious) |
| High | Significant rights impact, large-scale effect | < 48 hours | Compliance Owner + provider |
| Medium | Moderate impact, contained to limited scope | < 1 week | System Owner + Compliance Owner |
| Low | Minor issue, no harm, easily correctable | < 2 weeks | System Owner |
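The severity table above can be expressed as a simple lookup, so triage tooling can resolve deadlines and escalation targets consistently. This is an illustrative sketch, not Klarvo's actual implementation; the dictionary keys and role names are assumptions.

```python
from datetime import timedelta

# Hypothetical encoding of the severity table; role names are illustrative.
SEVERITY_POLICY = {
    "critical": {"response_time": timedelta(hours=24),
                 "escalation": ["leadership", "legal", "authority_if_serious"]},
    "high":     {"response_time": timedelta(hours=48),
                 "escalation": ["compliance_owner", "provider"]},
    "medium":   {"response_time": timedelta(weeks=1),
                 "escalation": ["system_owner", "compliance_owner"]},
    "low":      {"response_time": timedelta(weeks=2),
                 "escalation": ["system_owner"]},
}

def escalation_targets(severity: str) -> list[str]:
    """Return who must be notified for a given severity level."""
    return SEVERITY_POLICY[severity]["escalation"]
```

Keeping the policy in data rather than scattered conditionals means a change to the table (say, a tighter Critical deadline) is a one-line edit.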
Incident Workflow
Detection → Logging → Triage → Containment → Investigation → Resolution → Postmortem → Closure

Two side branches run alongside the main flow: Notification (internal + external) and Reassessment (if needed).
Serious Incident Reporting (Article 26(5))
Deployers must report serious incidents to:
A "serious incident" is one that results in:
Integration with System Reassessment
Incidents can trigger system reassessment:
Best Practices
🚨 Log immediately: Don't wait for investigation to start — capture the facts as they're known
📞 Notify early: Err on the side of over-communication
🔍 Root cause analysis: Every resolved incident should identify the underlying cause
📝 Postmortem: Document lessons learned and preventive measures
🔄 Update procedures: Improve incident response based on learnings