# Identifying Fundamental Rights Risks
Section (d) of the fundamental rights impact assessment (FRIA) requires you to identify the specific risks of harm to fundamental rights posed by your AI system. This guide provides a practical framework for conducting that analysis, right by right, with SME-relevant examples.
## The Rights Framework
The EU Charter of Fundamental Rights provides the reference framework. For AI systems, the most commonly relevant rights are:
### 1. Non-Discrimination & Equality (Articles 20–21 Charter)
Risk: AI outputs may systematically disadvantage people based on protected characteristics (race, gender, age, disability, religion, sexual orientation).
How it manifests:
Assessment questions:
SME example: An applicant tracking system (ATS) that filters CVs may score candidates from certain universities higher. If those universities have a demographic skew, this creates indirect discrimination.
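One common way to surface this kind of indirect discrimination is to compare selection rates across groups, for example using the "four-fifths" heuristic from US employment-testing practice (a group is flagged if its selection rate is below 80% of the best-performing group's). The sketch below uses entirely hypothetical counts; the group labels, numbers, and the 0.8 threshold are illustrative, not a legal test.

```python
# Hypothetical screening outcomes per group: (shortlisted, total applicants).
# All labels and counts are illustrative placeholders.
outcomes = {
    "group_a": (45, 100),  # 45% selection rate
    "group_b": (27, 100),  # 27% selection rate
}

def selection_rates(outcomes):
    """Selection rate (shortlisted / applicants) for each group."""
    return {g: passed / total for g, (passed, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the 'four-fifths' screening heuristic)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

flags = four_fifths_check(outcomes)
print(flags)  # group_b is flagged: 0.27 / 0.45 = 0.6, below 0.8
```

A flagged ratio is a prompt for closer investigation (and documentation in the FRIA), not proof of discrimination on its own.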
### 2. Privacy & Data Protection (Articles 7–8 Charter)
Risk: AI processing may involve disproportionate data collection, opaque profiling, or insufficient legal basis.
Assessment questions:
Cross-reference: Your DPIA analysis directly feeds this section.
### 3. Freedom of Expression & Information (Article 11 Charter)
Risk: AI systems that filter, moderate, rank, or generate content may affect freedom of expression.
Assessment questions:
### 4. Worker Rights (Articles 27–31 Charter)
Risk: AI in workplace contexts may affect working conditions, fair treatment, privacy, or collective rights.
Assessment questions:
### 5. Due Process & Right to Contest (Article 47 Charter)
Risk: People affected by AI-assisted decisions may have no effective way to understand, challenge, or contest the decision.
Assessment questions:
### 6. Access to Essential Services (Articles 34–36 Charter)
Risk: AI systems in credit, insurance, healthcare, housing, or public services may unfairly restrict access.
Assessment questions:
### 7. Safety & Wellbeing (Article 3 Charter)
Risk: Incorrect, unreliable, or manipulative AI outputs could cause physical or psychological harm.
Assessment questions:
## Risk Rating Framework
For each identified risk, rate two dimensions:
| Dimension | Low | Medium | High |
|---|---|---|---|
| Likelihood | Unlikely; strong controls exist | Possible; controls partially mitigate | Probable; weak or no controls |
| Severity | Minor inconvenience; easily reversible | Significant impact; difficult to reverse | Serious harm; irreversible or large-scale |
Combine into an overall risk level:
| | Low Severity | Medium Severity | High Severity |
|---|---|---|---|
| Low Likelihood | Minimal | Low | Medium |
| Medium Likelihood | Low | Medium | High |
| High Likelihood | Medium | High | Critical |
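The matrix above is a straightforward lookup, which can be encoded directly if you want consistent ratings across a risk register. This is a minimal sketch of that lookup; the function name and string ratings are illustrative conventions, not part of any FRIA template.

```python
# The combination matrix from the table above, keyed by
# (likelihood, severity) in lowercase.
RISK_MATRIX = {
    ("low", "low"): "Minimal",
    ("low", "medium"): "Low",
    ("low", "high"): "Medium",
    ("medium", "low"): "Low",
    ("medium", "medium"): "Medium",
    ("medium", "high"): "High",
    ("high", "low"): "Medium",
    ("high", "medium"): "High",
    ("high", "high"): "Critical",
}

def overall_risk(likelihood: str, severity: str) -> str:
    """Look up the overall risk level for a likelihood/severity pair."""
    return RISK_MATRIX[(likelihood.lower(), severity.lower())]

print(overall_risk("High", "Medium"))  # High
```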
## Documenting Risks
For each risk, record:
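If you keep the risk register in code or a spreadsheet export, a simple structured record helps enforce that every risk is documented consistently. The field names below are illustrative, drawn from the framework in this guide (specific mechanism, affected group, likelihood, severity, combined level); adapt them to whatever your FRIA template actually requires.

```python
from dataclasses import dataclass, field

@dataclass
class RiskRecord:
    """One illustrative entry in a FRIA risk register (field names are
    assumptions for this sketch, not a mandated schema)."""
    right_affected: str         # e.g. "Non-discrimination (Charter)"
    description: str            # specific mechanism, not just "bias risk"
    affected_groups: list[str]  # who is concretely impacted
    likelihood: str             # "low" | "medium" | "high"
    severity: str               # "low" | "medium" | "high"
    overall_level: str          # combined rating from the risk matrix
    mitigations: list[str] = field(default_factory=list)
    evidence: list[str] = field(default_factory=list)  # e.g. vendor bias tests

# Example entry based on the ATS scenario earlier in this guide.
record = RiskRecord(
    right_affected="Non-discrimination (Charter)",
    description="CV scores correlate with university attended; "
                "demographic skew creates indirect discrimination",
    affected_groups=["applicants from under-represented groups"],
    likelihood="medium",
    severity="high",
    overall_level="High",
)
```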
## Best Practices
- 🎯 **Be specific:** "Bias risk" is too vague; describe the specific bias mechanism and the affected group.
- 📊 **Use data where available:** If the vendor provides bias test results, reference them.
- 👥 **Consult domain experts:** HR for employment AI, credit professionals for lending AI.
- 🔄 **Revisit regularly:** Risk profiles change as systems evolve and more data becomes available.