# Monitoring Events
Not everything is an incident. Monitoring events capture observations, signals, and trends that don't meet the incident threshold but are important to document — performance drift, user complaints, bias signals, and operational anomalies.
## What Qualifies as a Monitoring Event
| Category | Example |
| --- | --- |
| Performance drift | Accuracy dropped from 95% to 88% this quarter |
| User complaints | Three customers reported irrelevant chatbot responses this week |
| Bias signals | Approval rates differ by 15% across demographic groups |
| Operational anomalies | System response time increased significantly |
| Near-miss | AI output was incorrect but caught by human reviewer before action |
| Vendor change | Provider updated the underlying model version |
| Configuration change | Threshold or parameter adjustment |
## Why Log Monitoring Events
- Pattern detection: Individual events may be minor, but patterns across them can indicate systemic issues
- Audit evidence: Demonstrates active monitoring (DEP-05 control)
- Reassessment triggers: Accumulated events may warrant reassessing the AI system's risk classification
- Continuous improvement: Event data supports data-driven improvement of AI system governance
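Pattern detection can be as simple as counting events per system and category over a review window. The sketch below is illustrative: the event pairs and the trend threshold of three are assumptions, not product behavior.

```python
from collections import Counter

# Hypothetical event log entries: (ai_system, category) pairs from recent events
events = [
    ("support-chatbot", "Complaint"),
    ("support-chatbot", "Complaint"),
    ("credit-model", "Performance"),
    ("support-chatbot", "Complaint"),
    ("credit-model", "Bias"),
]

# Count occurrences; repeated pairs suggest a trend worth investigating
counts = Counter(events)
for (system, category), n in counts.items():
    if n >= 3:  # example trend threshold; set your own in advance
        print(f"Trend flagged: {n}x {category} events on {system}")
```

Running this flags the three chatbot complaints as a trend while leaving the two isolated credit-model events as individual observations.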
## Creating a Monitoring Event
1. Navigate to the Incidents → Monitoring tab
2. Click Log Event
3. Fill in:
   - Title: Brief description
   - Category: Performance / Complaint / Bias / Anomaly / Near-miss / Change / Other
   - Linked AI System: Which system this relates to
   - Description: What was observed
   - Severity: Informational / Minor / Notable
   - Action Taken: What (if anything) was done in response
   - Follow-up Required: Yes / No
4. Attach supporting data (metrics screenshots, complaint records, etc.)
5. Click Save
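If you capture the same fields outside the tool (for example, in a script that feeds your own records), a monitoring event maps naturally onto a small data structure. This is a sketch only; the class name, defaults, and severity strings mirror the form fields above but are not the product's actual schema.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class EventCategory(Enum):
    PERFORMANCE = "Performance"
    COMPLAINT = "Complaint"
    BIAS = "Bias"
    ANOMALY = "Anomaly"
    NEAR_MISS = "Near-miss"
    CHANGE = "Change"
    OTHER = "Other"

@dataclass
class MonitoringEvent:
    title: str
    category: EventCategory
    linked_system: str
    description: str
    severity: str = "Informational"  # Informational / Minor / Notable
    action_taken: str = ""
    follow_up_required: bool = False
    logged_on: date = field(default_factory=date.today)

# Example: logging the performance-drift case from the table above
event = MonitoringEvent(
    title="Accuracy dropped from 95% to 88%",
    category=EventCategory.PERFORMANCE,
    linked_system="credit-model",
    description="Quarterly evaluation shows a 7-point accuracy drop against baseline.",
    severity="Notable",
    follow_up_required=True,
)
```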
## Monitoring Cadence
For high-risk AI systems, establish a regular monitoring cadence:
| What to Monitor | Method | Frequency |
| --- | --- | --- |
| Model accuracy/performance | Automated metrics dashboard | Weekly |
| Bias metrics | Statistical analysis across protected groups | Quarterly |
| User complaints | Support ticket analysis | Monthly |
| Drift detection | Comparison against baseline | Monthly |
| Incident patterns | Incident log review | Monthly |
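A drift check against a baseline can be a one-line comparison once you have agreed a tolerance. The function and the 5-point tolerance below are illustrative assumptions, not a prescribed method.

```python
# Minimal drift check: flag when a metric drops more than an agreed tolerance
def check_drift(baseline: float, current: float, tolerance: float = 0.05) -> bool:
    """Return True when the metric has dropped by more than `tolerance`."""
    return (baseline - current) > tolerance

print(check_drift(0.95, 0.88))  # True: a 7-point drop exceeds the 5-point tolerance
print(check_drift(0.95, 0.93))  # False: a 2-point drop stays within tolerance
```

When the check returns True, log a monitoring event (category: Performance) rather than silently retraining, so the drift is visible in your audit trail.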
## Event-to-Incident Escalation
When monitoring events indicate a significant issue:

1. Review the pattern of events
2. Assess whether the threshold for an incident has been met
3. If it has, create an incident record linked to the relevant monitoring events
4. If it has not, continue monitoring and document your assessment
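The assessment step works best with thresholds defined in advance (see Best Practices below). As a sketch, escalation can be a lookup of recent event counts against per-category limits; the threshold values and function name here are hypothetical examples.

```python
from collections import Counter

# Hypothetical per-category incident thresholds within one review window
ESCALATION_THRESHOLDS = {"Complaint": 5, "Performance": 2, "Bias": 1}

def should_escalate(recent_events: list[str]) -> list[str]:
    """Return the categories whose recent event count meets the incident threshold."""
    counts = Counter(recent_events)
    return [cat for cat, limit in ESCALATION_THRESHOLDS.items()
            if counts.get(cat, 0) >= limit]

# Five similar complaints meet the example threshold; one performance event does not
print(should_escalate(["Complaint"] * 5 + ["Performance"]))  # -> ['Complaint']
```

Each category returned would become an incident record linked back to the underlying monitoring events.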
## Best Practices
- 📊 Log consistently: Even "nothing unusual" is worth documenting periodically, because it shows active monitoring
- 📈 Look for patterns: One complaint is an event; five similar complaints are a trend worth investigating
- 🔗 Link to systems: Always connect monitoring events to the relevant AI system
- 📋 Define thresholds: Decide in advance what level of drift or complaint volume triggers escalation