Mitigation Measures & Oversight
After identifying fundamental rights risks in your FRIA, Article 27 requires you to document mitigation measures, governance arrangements, complaint mechanisms, and monitoring plans. This guide provides practical frameworks for each.
Designing Effective Mitigations
Every identified risk should have at least one mitigation measure. Effective mitigations are:
Specific: Clearly describes what action is taken
Measurable: You can verify it's working
Assigned: Someone is responsible for implementation
Time-bound: Has a deadline or ongoing schedule
Proportionate: Effort matches the risk level
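The five criteria above can be captured as one record per mitigation so completeness is checkable rather than aspirational. A minimal sketch, assuming a hypothetical schema (field names are illustrative, not a Klarvo data model):

```python
from dataclasses import dataclass

@dataclass
class Mitigation:
    """One mitigation measure tied to an identified FRIA risk (hypothetical schema)."""
    risk_id: str        # which identified risk this addresses
    action: str         # Specific: what is done
    metric: str         # Measurable: how you verify it's working
    owner: str          # Assigned: responsible person or role
    schedule: str       # Time-bound: deadline or recurring cadence
    effort_level: str   # Proportionate: e.g. "low" / "medium" / "high"

    def is_complete(self) -> bool:
        # A mitigation is only actionable when every field is filled in.
        return all(bool(value.strip()) for value in vars(self).values())

m = Mitigation(
    risk_id="R-03",
    action="Quarterly bias audit across gender, age, ethnicity",
    metric="Disparate impact ratio >= 0.8 for each group",
    owner="ML Governance Lead",
    schedule="Quarterly",
    effort_level="medium",
)
```

An `is_complete()` check like this can gate whether a mitigation counts toward "every identified risk has at least one mitigation."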
Mitigation Categories
Technical Mitigations
| Mitigation | Addresses | Example |
| --- | --- | --- |
| Bias testing | Discrimination risk | Quarterly bias audit across gender, age, ethnicity |
| Performance monitoring | Accuracy/reliability risk | Weekly accuracy threshold checks with alerts |
| Input validation | Data quality risk | Automated checks for data completeness and format |
| Output constraints | Safety risk | Hard limits on AI output values; rejection of out-of-range results |
| Explainability features | Due process risk | SHAP/LIME explanations generated for each decision |
| Logging & audit trail | Accountability risk | All inputs, outputs, and decisions logged with timestamps |
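The "Output constraints" row above can be made concrete as a hard range check that rejects out-of-range results instead of silently clamping them. A minimal sketch, assuming a numeric score in [0, 1] (function and field names are illustrative):

```python
def constrain_score(raw_score: float, lo: float = 0.0, hi: float = 1.0) -> dict:
    """Hard output constraint: reject any AI score outside the permitted range.

    Rejection (rather than clamping) forces the case into a review path,
    which is the safety property the mitigation is meant to provide.
    """
    if not (lo <= raw_score <= hi):
        return {"accepted": False,
                "reason": f"score {raw_score} outside [{lo}, {hi}]"}
    return {"accepted": True, "score": raw_score}
```

A rejected result would then be routed to a human reviewer rather than acted on automatically.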
Organisational Mitigations
| Mitigation | Addresses | Example |
| --- | --- | --- |
| Human review step | Multiple rights | Loan officer reviews every AI-flagged rejection before final decision |
| Training program | Competence risk | Operators complete AI literacy + domain-specific training before system access |
| Escalation procedure | Incident risk | Clear escalation path when AI output seems incorrect or harmful |
| Regular review cadence | Drift risk | Quarterly review of system performance, bias metrics, and incident patterns |
| Stakeholder consultation | Representation risk | Annual feedback session with employee representatives about AI workplace tools |
Governance Mitigations
| Mitigation | Addresses | Example |
| --- | --- | --- |
| Oversight SOP | Authority risk | Documented procedure granting oversight owner authority to pause the system |
| Change management | Evolution risk | Any model update triggers re-evaluation of FRIA risk ratings |
| Vendor oversight | Supply chain risk | Annual vendor review including updated bias test results and security docs |
| Incident response plan | Harm response | Defined process for containment, notification, and resolution of AI incidents |
Human Oversight Design (Article 27(e))
The FRIA must document how human oversight is designed. Key elements:
Oversight Model:
HITL (Human-in-the-Loop): Human must approve every AI output before it takes effect — strongest oversight
HOTL (Human-on-the-Loop): Human monitors AI outputs in real time and can intervene — balanced approach
HOOTL (Human-out-of-the-Loop): AI operates autonomously; human reviews retrospectively — weakest oversight
Competence Requirements: What training and knowledge the oversight person needs — document the minimum competency profile.
Authority: The oversight person must have the organisational authority to:
Override AI outputs
Pause or stop the system
Escalate concerns without negative consequences
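An HITL design combines the approval gate with the audit-trail mitigation from earlier: no AI output takes effect until a named reviewer approves or overrides it, and either action is logged. A sketch under those assumptions (the decision states and reviewer identifier are illustrative):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional, Tuple

@dataclass
class Decision:
    ai_recommendation: str
    status: str = "pending_review"   # HITL: nothing takes effect until reviewed
    reviewer: Optional[str] = None
    audit_log: List[Tuple[str, str, str, str]] = field(default_factory=list)

def human_review(decision: Decision, reviewer: str, approve: bool,
                 override: Optional[str] = None) -> str:
    """The reviewer approves the AI output or overrides it with their own
    outcome; either way the action is timestamped for accountability."""
    decision.reviewer = reviewer
    decision.status = "approved" if approve else "overridden"
    final = decision.ai_recommendation if approve else (override or "escalated")
    decision.audit_log.append(
        (datetime.now(timezone.utc).isoformat(), reviewer, decision.status, final))
    return final

d = Decision(ai_recommendation="reject_application")
outcome = human_review(d, reviewer="loan_officer_17", approve=False,
                       override="manual_reassessment")
```

The override path is the point: the reviewer's authority to replace the AI outcome is exercised in code, not just described in a policy document.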
Complaint Mechanisms (Article 27(f))
You must provide affected persons with a channel for raising concerns. An effective complaint mechanism is:
Accessible: Easy to find and use (not buried in T&Cs)
Responsive: Defined response timeframes (e.g., acknowledge within 5 business days)
Effective: Complaints lead to investigation and, where warranted, remediation
Documented: All complaints logged with outcomes
Example implementation: "Affected individuals can submit a complaint via our support form at [URL], referencing the AI-assisted decision. Complaints are reviewed by a human within 5 business days. If the complaint is upheld, the decision is reconsidered with human-only assessment."
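The 5-business-day acknowledgement window in the example can be computed mechanically so the SLA is enforceable. A sketch assuming Monday–Friday business days and ignoring public holidays:

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance by `days` business days (Mon-Fri); public holidays are ignored."""
    d = start
    while days > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:   # 0=Mon ... 4=Fri
            days -= 1
    return d

# Complaint received on Friday 2025-06-06 -> acknowledgement due the following Friday.
ack_deadline = add_business_days(date(2025, 6, 6), 5)
```

A real implementation would also consult a holiday calendar for the relevant jurisdiction.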
Monitoring Plans
Define what you monitor and how often:
| What to Monitor | Method | Frequency |
| --- | --- | --- |
| Model accuracy | Automated metrics | Weekly |
| Bias metrics (by protected group) | Statistical testing | Quarterly |
| User complaints about AI | Support ticket analysis | Monthly |
| Incident count & severity | Incident log review | Monthly |
| Oversight effectiveness | Intervention rate analysis | Quarterly |
| Evidence validity | Expiration check | Monthly |
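For the quarterly bias-metrics row, one common statistical test is the four-fifths (disparate impact) rule: flag any protected group whose selection rate falls below 80% of the most-favoured group's rate. A sketch with illustrative group names and data (the 0.8 threshold is the conventional rule of thumb, not a legal requirement):

```python
def disparate_impact(selected: dict, totals: dict, threshold: float = 0.8) -> dict:
    """Four-fifths rule check per protected group.

    `selected[g]` is how many members of group g received the favourable
    outcome; `totals[g]` is how many were assessed. A group is flagged when
    its selection rate is below `threshold` times the best group's rate.
    """
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: {"rate": rate, "ratio": rate / best, "flag": rate / best < threshold}
            for g, rate in rates.items()}

report = disparate_impact(selected={"group_a": 50, "group_b": 30},
                          totals={"group_a": 100, "group_b": 100})
```

A flagged group would feed into the incident and escalation procedures described above rather than being silently recorded.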
Reassessment Triggers
Define what changes require a FRIA update:
Material change to the AI system (new model version, new features)
Expansion to new user groups or geographies
Serious incident involving the system
Change in vendor or underlying model
Regulatory guidance or enforcement action relevant to your use case
Significant increase in scale of affected persons
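Reassessment triggers are easiest to enforce when they are encoded as a checklist matched against logged change events. A minimal sketch, with illustrative trigger identifiers that you would align to your own change-management event names:

```python
# Illustrative trigger identifiers mirroring the list above.
REASSESSMENT_TRIGGERS = {
    "model_update", "new_feature", "new_user_group", "new_geography",
    "serious_incident", "vendor_change", "regulatory_action", "scale_increase",
}

def needs_fria_update(events: set) -> bool:
    """True if any logged change event matches a defined reassessment trigger."""
    return bool(events & REASSESSMENT_TRIGGERS)
```

Wiring a check like this into your change-management pipeline turns "we should update the FRIA" into an automatic flag.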
Best Practices
🎯 Map mitigations to risks: Every identified risk should have at least one corresponding mitigation
📋 Create tasks: Use Klarvo to auto-generate tasks for each mitigation action
📊 Measure effectiveness: Don't just implement mitigations — verify they work
🔄 Living document: The FRIA should evolve with your system — not gather dust