7 min read · Updated 2026-02-15

Mitigation Measures & Oversight

How to design and document effective mitigation measures for fundamental rights risks identified in your FRIA, including oversight arrangements, complaint mechanisms, and monitoring plans.

After identifying fundamental rights risks in your FRIA, Article 27 requires you to document mitigation measures, governance arrangements, complaint mechanisms, and monitoring plans. This guide provides practical frameworks for each.

Designing Effective Mitigations

Every identified risk should have at least one mitigation measure. Effective mitigations are:

  • Specific: Clearly describes what action is taken
  • Measurable: You can verify it's working
  • Assigned: Someone is responsible for implementation
  • Time-bound: Has a deadline or ongoing schedule
  • Proportionate: Effort matches the risk level
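
To make these criteria concrete, here is a minimal sketch of how a single mitigation could be recorded as structured data. The MitigationEntry dataclass and its field names are illustrative assumptions, not a schema required by Article 27; use whatever register format your organisation already maintains.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MitigationEntry:
    """One row of a hypothetical mitigation register (illustrative, not a mandated schema)."""
    risk_id: str      # links back to the risk identified in the FRIA
    action: str       # Specific: what is actually done
    metric: str       # Measurable: how you verify it is working
    owner: str        # Assigned: who is responsible for implementation
    review_by: date   # Time-bound: deadline or next scheduled review
    risk_level: str   # Proportionate: e.g. "low" / "medium" / "high"

# Example entry for a discrimination risk
entry = MitigationEntry(
    risk_id="RISK-007",
    action="Quarterly bias audit across gender, age, ethnicity",
    metric="Disparate impact ratio stays within 0.8-1.25 for each group",
    owner="ML Governance Lead",
    review_by=date(2026, 6, 30),
    risk_level="high",
)
```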

Mitigation Categories

Technical Mitigations

Mitigation | Addresses | Example
Bias testing | Discrimination risk | Quarterly bias audit across gender, age, ethnicity
Performance monitoring | Accuracy/reliability risk | Weekly accuracy threshold checks with alerts
Input validation | Data quality risk | Automated checks for data completeness and format
Output constraints | Safety risk | Hard limits on AI output values; rejection of out-of-range results
Explainability features | Due process risk | SHAP/LIME explanations generated for each decision
Logging & audit trail | Accountability risk | All inputs, outputs, and decisions logged with timestamps
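
As one illustration of the "output constraints" and "logging & audit trail" rows above, the sketch below rejects out-of-range AI outputs and logs every input and outcome with a timestamp. The credit-score range, the function name, and the fallback behaviour are assumptions made for the example, not a prescribed implementation.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit_trail")

# Hypothetical hard limits on an AI-produced credit score (assumption for the example)
SCORE_MIN, SCORE_MAX = 300, 850

def apply_output_constraints(applicant_id: str, raw_score: float) -> float | None:
    """Reject out-of-range results and log every input/output with a timestamp."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if not SCORE_MIN <= raw_score <= SCORE_MAX:
        log.warning("%s applicant=%s REJECTED out-of-range score=%s",
                    timestamp, applicant_id, raw_score)
        return None  # downstream process falls back to human-only assessment
    log.info("%s applicant=%s score=%s accepted", timestamp, applicant_id, raw_score)
    return raw_score
```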

Organisational Mitigations

Mitigation | Addresses | Example
Human review step | Multiple rights | Loan officer reviews every AI-flagged rejection before final decision
Training program | Competence risk | Operators complete AI literacy + domain-specific training before system access
Escalation procedure | Incident risk | Clear escalation path when AI output seems incorrect or harmful
Regular review cadence | Drift risk | Quarterly review of system performance, bias metrics, and incident patterns
Stakeholder consultation | Representation risk | Annual feedback session with employee representatives about AI workplace tools

Governance Mitigations

Mitigation | Addresses | Example
Oversight SOP | Authority risk | Documented procedure granting oversight owner authority to pause the system
Change management | Evolution risk | Any model update triggers re-evaluation of FRIA risk ratings
Vendor oversight | Supply chain risk | Annual vendor review including updated bias test results and security docs
Incident response plan | Harm response | Defined process for containment, notification, and resolution of AI incidents

Human Oversight Design (Article 27(e))

The FRIA must document how human oversight is designed. Key elements:

Oversight Model:

  • HITL (Human-in-the-Loop): Human must approve every AI output before it takes effect — strongest oversight
  • HOTL (Human-on-the-Loop): Human monitors AI outputs in real time and can intervene — balanced approach
  • HOOTL (Human-out-of-the-Loop): AI operates autonomously; human reviews retrospectively — weakest oversight

Competence Requirements: What training and knowledge the oversight person needs; document the minimum competency profile.

Authority: The oversight person must have the organisational authority to:

  • Override AI outputs
  • Pause or stop the system
  • Escalate concerns without negative consequences
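
A minimal sketch of a human-in-the-loop (HITL) gate follows: no AI recommendation takes effect until a named reviewer approves or overrides it. The Decision dataclass, the human_review function, and the field names are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    ai_recommendation: str        # e.g. "reject"
    reviewer: str | None = None
    final_outcome: str | None = None

def human_review(decision: Decision, reviewer: str, approve: bool,
                 override_outcome: str | None = None) -> Decision:
    """HITL: the AI recommendation only takes effect after explicit human approval;
    the reviewer can instead override it or escalate without adopting it."""
    decision.reviewer = reviewer
    if approve:
        decision.final_outcome = decision.ai_recommendation
    else:
        decision.final_outcome = override_outcome or "escalated"
    return decision

# Usage: a loan officer reviews an AI-flagged rejection before it becomes final
d = human_review(Decision("CASE-123", "reject"),
                 reviewer="loan.officer@bank.example",
                 approve=False, override_outcome="approve")
```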

Complaint Mechanisms (Article 27(f))

You must provide affected persons with a way to raise concerns:

  • Accessible: Easy to find and use (not buried in T&Cs)
  • Responsive: Defined response timeframes (e.g., acknowledge within 5 business days)
  • Effective: Complaints lead to investigation and, where warranted, remediation
  • Documented: All complaints logged with outcomes

Example implementation: "Affected individuals can submit a complaint via our support form at [URL], referencing the AI-assisted decision. Complaints are reviewed by a human within 5 business days. If the complaint is upheld, the decision is reconsidered with human-only assessment."
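
If you log complaints in a structured way, a sketch along the following lines keeps the "documented" and "responsive" requirements testable: every complaint gets a record, an outcome field, and a computed acknowledgment deadline. The Complaint dataclass, the 5-business-day service level, and the helper function are assumptions for the example.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Assumed service level for the example; set it to whatever your FRIA commits to
ACKNOWLEDGE_WITHIN_BUSINESS_DAYS = 5

def add_business_days(start: date, days: int) -> date:
    """Advance a date by the given number of Monday-Friday business days."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days -= 1
    return current

@dataclass
class Complaint:
    """Hypothetical complaint log entry: every complaint is recorded with its outcome."""
    complaint_id: str
    received: date
    decision_reference: str         # the AI-assisted decision being contested
    status: str = "open"            # open -> acknowledged -> resolved
    outcome: str | None = None      # e.g. "upheld: decision reconsidered by human"
    acknowledge_by: date = field(init=False)

    def __post_init__(self):
        self.acknowledge_by = add_business_days(self.received,
                                                ACKNOWLEDGE_WITHIN_BUSINESS_DAYS)
```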

Monitoring Plans

Define what you monitor and how often:

What to Monitor | Method | Frequency
Model accuracy | Automated metrics | Weekly
Bias metrics (by protected group) | Statistical testing | Quarterly
User complaints about AI | Support ticket analysis | Monthly
Incident count & severity | Incident log review | Monthly
Oversight effectiveness | Intervention rate analysis | Quarterly
Evidence validity | Expiration check | Monthly
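
The plan above can also be kept as machine-readable data so that due checks are easy to surface. The structure and the simple scheduling rules below (weekly on Mondays, monthly and quarterly on the 1st) are assumptions for the sketch, not part of Article 27.

```python
from datetime import date

# Monitoring plan as data; names and frequencies mirror the table above
MONITORING_PLAN = {
    "model_accuracy":          {"method": "automated metrics",          "frequency": "weekly"},
    "bias_metrics":            {"method": "statistical testing",        "frequency": "quarterly"},
    "ai_complaints":           {"method": "support ticket analysis",    "frequency": "monthly"},
    "incident_count":          {"method": "incident log review",        "frequency": "monthly"},
    "oversight_effectiveness": {"method": "intervention rate analysis", "frequency": "quarterly"},
    "evidence_validity":       {"method": "expiration check",           "frequency": "monthly"},
}

def checks_due(today: date) -> list[str]:
    """Return the monitors due today under an assumed schedule:
    weekly on Mondays, monthly on the 1st, quarterly on the 1st of Jan/Apr/Jul/Oct."""
    due = []
    for name, spec in MONITORING_PLAN.items():
        freq = spec["frequency"]
        if freq == "weekly" and today.weekday() == 0:
            due.append(name)
        elif freq == "monthly" and today.day == 1:
            due.append(name)
        elif freq == "quarterly" and today.day == 1 and today.month in (1, 4, 7, 10):
            due.append(name)
    return due
```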

Reassessment Triggers

Define what changes require a FRIA update:

  • Material change to the AI system (new model version, new features)
  • Expansion to new user groups or geographies
  • Serious incident involving the system
  • Change in vendor or underlying model
  • Regulatory guidance or enforcement action relevant to your use case
  • Significant increase in scale of affected persons
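
If you track change events in your compliance tooling, a small check like the one below can flag when a FRIA update is due. The event names are assumptions that mirror the triggers listed above, not a fixed taxonomy.

```python
# Illustrative mapping of change events to the reassessment triggers listed above
REASSESSMENT_TRIGGERS = {
    "model_update", "new_feature", "new_user_group", "new_geography",
    "serious_incident", "vendor_change", "regulatory_guidance", "scale_increase",
}

def requires_fria_update(events: set[str]) -> bool:
    """True if any recorded change event matches a defined reassessment trigger."""
    return bool(events & REASSESSMENT_TRIGGERS)

# Example: a new model version plus expansion into a new market triggers a FRIA update
assert requires_fria_update({"model_update", "new_geography"})
```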

Best Practices

🎯 Map mitigations to risks: Every identified risk should have at least one corresponding mitigation
📋 Create tasks: Use Klarvo to auto-generate tasks for each mitigation action
📊 Measure effectiveness: Don't just implement mitigations — verify they work
🔄 Living document: The FRIA should evolve with your system — not gather dust