FRIA Wizard Walkthrough
KlarvoEngine pre-fill: The FRIA wizard is pre-filled from your KlarvoEngine classification data. Affected groups, deployment context, and risk factors are populated automatically — you're not starting from a blank page.
Klarvo's FRIA Wizard guides you through a structured Fundamental Rights Impact Assessment aligned to the six mandatory elements specified in Article 27 of the EU AI Act. This walkthrough explains each section with practical guidance.
Before You Start
Ensure you have:
Section A: Overview & Scope
What you capture:
Practical tip: If a DPIA exists, reference it explicitly. A FRIA builds on the DPIA but covers additional fundamental rights beyond data protection.
Section B: Process Description (Article 27(a))
What you capture:
Example: "The AI system is used in our loan application review process. It scores applicant creditworthiness based on financial history, income data, and employment records. The score is presented to a human loan officer who makes the final approval/rejection decision. The AI score is one of five factors the officer considers."
Section C: Time Period & Frequency (Article 27(b))
What you capture:
Example: "Ongoing deployment. Used daily. Approximately 500 loan applications assessed per month, affecting 500 individuals."
Section D: Affected Categories of Persons (Article 27(c))
What you capture:
Example: "Primarily affects loan applicants aged 18-70. Some applicants may be economically vulnerable, which is particularly relevant given that the system's purpose is credit assessment. Applicants are informed via the loan application disclosure notice."
Section E: Risks to Fundamental Rights (Article 27(d))
This is the core of the FRIA. For each relevant fundamental right, identify potential harms:
| Right Category | Example Risks |
| --- | --- |
| Non-discrimination / fairness | Bias in scoring against protected groups; disparate impact on minorities |
| Privacy & data protection | Excessive data collection; profiling without adequate legal basis |
| Freedom of expression | Content filtering affecting legitimate expression (if applicable) |
| Worker rights | Unfair performance evaluation; excessive monitoring |
| Due process / contestability | No mechanism to challenge AI-influenced decisions |
| Access to essential services | Unfair denial of credit, insurance, or healthcare |
| Safety / wellbeing | Physical or psychological harm from incorrect outputs |
For each identified risk:
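To make the per-risk record concrete, here is a minimal sketch of what one risk register entry might look like in code. This is purely illustrative: the field names, severity scale, and escalation rule are assumptions for the example, not Klarvo's actual data model.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One illustrative FRIA risk register entry (hypothetical fields)."""
    right_category: str    # e.g. "Non-discrimination / fairness"
    harm_description: str  # the specific mechanism of harm, not a generic statement
    severity: str          # "low" | "medium" | "high"
    likelihood: str        # "low" | "medium" | "high"
    mitigation: str        # planned or existing measure

    def needs_escalation(self) -> bool:
        # Illustrative rule: any "high" rating warrants senior review
        return "high" in (self.severity, self.likelihood)

risk = RiskEntry(
    right_category="Non-discrimination / fairness",
    harm_description="Scoring model under-predicts creditworthiness for part-time workers",
    severity="high",
    likelihood="medium",
    mitigation="Quarterly disparate-impact testing across protected groups",
)
print(risk.needs_escalation())  # True
```

Note how the harm description names a specific mechanism rather than "there may be bias" — the level of specificity the wizard expects for each entry.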
Section F: Human Oversight Measures (Article 27(e))
What you capture:
Section G: Mitigation, Governance & Complaints (Article 27(f))
What you capture:
Section H: Approval & Notification
What you capture:
Auto-Generated Outputs
After completing the FRIA Wizard:
FRIA Templates (Pro & Enterprise)
Pro and Enterprise users can start from 5 pre-built templates for common AI use cases: HR Recruitment, Customer Chatbot, Credit Scoring, Content Moderation, and Healthcare AI. Each template pre-fills the wizard with realistic data — risks, oversight measures, mitigation plans — that you can customise for your specific deployment.
Board-Ready Summary (Pro & Enterprise)
Step 7 (Approval) now includes a formatted executive summary designed for board presentation. It features:
Best Practices
📋 Be specific: Generic risk statements like "there may be bias" are insufficient — describe the specific mechanism of harm
👥 Consult stakeholders: Include perspectives from system users, affected communities (where practical), and subject matter experts
🔄 Plan for updates: Set a reassessment date and define what changes trigger a FRIA update
📄 Reference the DPIA: If one exists, cross-reference rather than duplicate analysis