
Identifying Fundamental Rights Risks

Practical guide to identifying and assessing risks to fundamental rights when conducting a FRIA — with right-by-right analysis frameworks and SME-relevant examples.

Point (d) of the FRIA (Article 27(1)(d) of the AI Act) requires you to identify the specific risks of harm to the fundamental rights of the persons and groups your system affects. This guide provides a practical framework for conducting that analysis, right by right, with SME-relevant examples.

The Rights Framework

The EU Charter of Fundamental Rights provides the reference framework. For AI systems, the most commonly relevant rights are:

1. Non-Discrimination & Equality (Articles 21–22 Charter)

Risk: AI outputs may systematically disadvantage people based on protected characteristics (race, gender, age, disability, religion, sexual orientation).

How it manifests:

  • Training data reflects historical biases → AI reproduces them
  • Proxy variables correlate with protected characteristics → indirect discrimination
  • Unequal performance across demographic groups → disparate impact

Assessment questions:

  • Has the model been tested for bias across protected groups?
  • Are there known demographic performance gaps?
  • Could proxy variables (postcode, name patterns, education institution) correlate with protected characteristics?

SME example: An ATS (applicant tracking system) that filters CVs may score candidates from certain universities higher. If those universities have a demographic skew, this creates indirect discrimination (one way to quantify such a gap is sketched below).
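As a concrete way to probe the "demographic performance gaps" question, a deployer with access to outcome data could compare selection rates across groups. The snippet below is a minimal sketch using the common four-fifths (80%) heuristic for disparate impact; the sample data, column names, and 0.8 threshold are illustrative assumptions, and the heuristic is a screening aid, not a legal test.

```python
import pandas as pd

# Hypothetical ATS outcomes: one row per applicant, recording the
# applicant's demographic group and whether the system advanced the CV.
outcomes = pd.DataFrame({
    "group":    ["A"] * 40 + ["B"] * 40,
    "advanced": [1] * 24 + [0] * 16 + [1] * 12 + [0] * 28,
})

# Selection rate per group: share of applicants the system advanced.
rates = outcomes.groupby("group")["advanced"].mean()

# Disparate-impact ratio: lowest selection rate divided by the highest.
# The four-fifths rule flags ratios below 0.8 for closer investigation.
ratio = rates.min() / rates.max()

print(rates.to_dict())         # {'A': 0.6, 'B': 0.3}
print(f"ratio = {ratio:.2f}")  # ratio = 0.50 -> investigate further
```

A low ratio does not prove discrimination, but it is exactly the kind of evidence this section of the FRIA should record and reason about.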

2. Privacy & Data Protection (Articles 7–8 Charter)

Risk: AI processing may involve disproportionate data collection, opaque profiling, or insufficient legal basis.

Assessment questions:

  • Is data collection proportionate to the purpose?
  • Is there a valid legal basis for processing?
  • Are data subjects informed about AI involvement in processing?
  • Can individuals exercise their GDPR rights (access, erasure, objection)?

Cross-reference: Your DPIA analysis directly feeds this section.

3. Freedom of Expression & Information (Article 11 Charter)

Risk: AI systems that filter, moderate, rank, or generate content may affect freedom of expression.

Assessment questions:

  • Does the system filter or suppress content? On what basis?
  • Could automated moderation create chilling effects?
  • Is AI-generated content replacing human editorial judgment?

4. Worker Rights (Articles 27–31 Charter)

Risk: AI in workplace contexts may affect working conditions, fair treatment, privacy, or collective rights.

Assessment questions:

  • Does the system monitor employee behaviour or productivity?
  • Are hiring/promotion/termination decisions influenced by AI?
  • Have workers or their representatives been consulted?
  • Is there a mechanism for workers to challenge AI-influenced decisions?

5. Due Process & Right to Contest (Article 47 Charter)

Risk: People affected by AI-assisted decisions may have no effective way to understand, challenge, or contest the decision.

Assessment questions:

  • Can affected persons understand why a decision was made?
  • Is there a mechanism to request human review of AI-assisted decisions?
  • Are appeal or complaint procedures documented and accessible?

6. Access to Essential Services

Risk: AI systems in credit, insurance, healthcare, housing, or public services may unfairly restrict access.

Assessment questions:

  • Could certain groups be systematically denied access?
  • Are alternative pathways available if the AI decision is unfavourable?
  • Is there human review before final denial of service?

7. Safety & Wellbeing (Article 3 Charter)

Risk: Incorrect, unreliable, or manipulative AI outputs could cause physical or psychological harm.

Assessment questions:

  • What happens if the AI output is wrong? What's the worst-case impact?
  • Are there safety-critical pathways where AI errors could cause physical harm?
  • Could AI interactions cause psychological distress?

Risk Rating Framework

For each identified risk, rate:

Dimension | Low | Medium | High
Likelihood | Unlikely; strong controls exist | Possible; controls partially mitigate | Probable; weak or no controls
Severity | Minor inconvenience; easily reversible | Significant impact; difficult to reverse | Serious harm; irreversible or large-scale

Combine into an overall risk level:

Overall level | Low Severity | Medium Severity | High Severity
Low Likelihood | Minimal | Low | Medium
Medium Likelihood | Low | Medium | High
High Likelihood | Medium | High | Critical
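If your risk register lives in code or a spreadsheet export, the matrix above is straightforward to encode as a lookup. A minimal sketch, assuming the rating labels from the table (the function name and the 0/1/2 encoding are illustrative choices, not part of the FRIA template):

```python
# Overall risk level from the likelihood x severity matrix above.
# Encoding both dimensions as indices 0/1/2 makes the matrix a lookup.
LEVELS = {"low": 0, "medium": 1, "high": 2}

MATRIX = [
    # Severity: low       medium    high
    ["Minimal", "Low",    "Medium"],    # low likelihood
    ["Low",     "Medium", "High"],      # medium likelihood
    ["Medium",  "High",   "Critical"],  # high likelihood
]

def overall_risk(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity ratings into an overall level."""
    return MATRIX[LEVELS[likelihood.lower()]][LEVELS[severity.lower()]]

print(overall_risk("Medium", "High"))  # -> High
```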

Documenting Risks

For each risk, record:

  • Right affected: Which fundamental right
  • Risk description: Specific mechanism of harm
  • Affected group: Who is at risk
  • Likelihood: Low / Medium / High with reasoning
  • Severity: Low / Medium / High with reasoning
  • Existing mitigations: What's already in place
  • Residual risk: After mitigations, what remains
  • Additional mitigations needed: What else should be done
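A structured record helps keep these fields consistent across risks. Below is an illustrative Python dataclass mirroring the list above; the class and field names are assumptions, not a prescribed FRIA schema.

```python
from dataclasses import dataclass, field

@dataclass
class FundamentalRightsRisk:
    """One entry in a FRIA risk register, mirroring the fields above."""
    right_affected: str            # e.g. "Non-discrimination (Art. 21 Charter)"
    risk_description: str          # specific mechanism of harm
    affected_group: str            # who is at risk
    likelihood: str                # "Low" / "Medium" / "High"
    likelihood_reasoning: str
    severity: str                  # "Low" / "Medium" / "High"
    severity_reasoning: str
    existing_mitigations: list[str] = field(default_factory=list)
    residual_risk: str = ""        # what remains after mitigations
    additional_mitigations: list[str] = field(default_factory=list)

# Hypothetical entry, continuing the ATS example from earlier:
risk = FundamentalRightsRisk(
    right_affected="Non-discrimination (Art. 21 Charter)",
    risk_description="CV scores favour universities with demographic skew",
    affected_group="Applicants from under-represented groups",
    likelihood="Medium",
    likelihood_reasoning="Vendor bias testing is partial; proxies untested",
    severity="High",
    severity_reasoning="Lost job opportunities are difficult to reverse",
)
```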
Best Practices

🎯 Be specific: "Bias risk" is too vague; describe the specific bias mechanism and affected group
📊 Use data where available: If the vendor provides bias test results, reference them
👥 Consult domain experts: HR for employment AI, credit professionals for lending AI
🔄 Revisit regularly: Risk profiles change as systems evolve and more data becomes available