12 min read · Updated 2026-02-15

FRIA Wizard Walkthrough

Step-by-step guide to completing a Fundamental Rights Impact Assessment using Klarvo's FRIA Wizard — every section explained with practical examples and regulatory context.

KlarvoEngine pre-fill: The FRIA wizard is pre-filled from your KlarvoEngine classification data. Affected groups, deployment context, and risk factors are populated automatically — you're not starting from a blank page.

Klarvo's FRIA Wizard guides you through a structured Fundamental Rights Impact Assessment aligned to the six mandatory elements specified in Article 27. This walkthrough explains each section with practical guidance.

Before You Start

Ensure you have:

  • A classified high-risk AI system in your inventory
  • The system's classification memo reviewed
  • Access to the system's operational documentation (vendor instructions, SOPs)
  • Input from the oversight owner and relevant stakeholders

Section A: Overview & Scope

What you capture:

  • FRIA title (e.g., "FRIA — Customer Credit Scoring System Q1 2026")
  • Linked AI system (auto-populated)
  • Assessment owner (the person leading this FRIA)
  • Date started and expected deployment date
  • Whether this is a first-use FRIA or an update
  • Whether a completed DPIA exists that can be leveraged

Practical tip: If a DPIA exists, reference it explicitly. The FRIA builds on the DPIA but covers additional rights beyond data protection.

Section B: Process Description (Article 27(a))

What you capture:

  • Detailed description of how your organisation uses the AI system in its operations
  • The intended purpose within your specific process
  • Which decision points are affected by AI outputs
  • How human oversight is integrated into the process
  • Attached process diagrams or SOPs (recommended)

Example: "The AI system is used in our loan application review process. It scores applicant creditworthiness based on financial history, income data, and employment records. The score is presented to a human loan officer who makes the final approval/rejection decision. The AI score is one of five factors the officer considers."

Section C: Time Period & Frequency (Article 27(b))

What you capture:

  • Expected duration of deployment (6 months, 1 year, ongoing)
  • Frequency of use (continuous, daily, weekly, monthly, ad hoc)
  • Estimated scale: how many people are affected per month

Example: "Ongoing deployment. Used daily. Approximately 500 loan applications assessed per month, affecting 500 individuals."

Section D: Affected Categories of Persons (Article 27(c))

What you capture:

  • Categories of people likely affected (applicants, customers, employees, students, patients, general public)
  • Whether vulnerable groups are present (minors, elderly, people with disabilities, economically disadvantaged)
  • How affected persons will be informed about the system's use
  • Accessibility considerations

Example: "Primarily affects loan applicants aged 18-70. Some applicants may be economically disadvantaged (the purpose is credit assessment). Applicants are informed via the loan application disclosure notice."

Section E: Risks to Fundamental Rights (Article 27(d))

This is the core of the FRIA. For each relevant fundamental right, identify potential harms:

  • Non-discrimination / fairness: bias in scoring against protected groups; disparate impact on minorities
  • Privacy & data protection: excessive data collection; profiling without adequate legal basis
  • Freedom of expression: content filtering affecting legitimate expression (if applicable)
  • Worker rights: unfair performance evaluation; excessive monitoring
  • Due process / contestability: no mechanism to challenge AI-influenced decisions
  • Access to essential services: unfair denial of credit, insurance, or healthcare
  • Safety / wellbeing: physical or psychological harm from incorrect outputs

For each identified risk:

  • Rate likelihood (Low / Medium / High)
  • Rate severity (Low / Medium / High)
  • Provide supporting evidence or reasoning
  • Cross-reference with any existing DPIA analysis
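
The likelihood × severity rating can be pictured as a simple combination rule. The sketch below is illustrative only: Klarvo's actual scoring logic is not documented here, and the 3×3 thresholds are assumptions.

```python
# Illustrative sketch of combining the Low/Medium/High likelihood and
# severity ratings from Section E. The thresholds are assumptions, not
# Klarvo's actual rule.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def overall_risk(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity into one overall risk level."""
    score = LEVELS[likelihood] * LEVELS[severity]
    if score >= 6:       # e.g. Medium x High, or worse
        return "High"
    if score >= 3:       # e.g. Medium x Medium, High x Low
        return "Medium"
    return "Low"

# A discrimination risk rated Medium likelihood, High severity:
print(overall_risk("Medium", "High"))  # -> High
```

Whatever rule you use, record the reasoning alongside the rating so reviewers can see why a risk landed where it did.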

Section F: Human Oversight Measures (Article 27(e))

What you capture:

  • How oversight is designed: human-in-the-loop (HITL), human-on-the-loop (HOTL), or human-out-of-the-loop (HOOTL)
  • Competence and training requirements for oversight personnel
  • Whether the oversight person has authority to intervene, override, or stop the system
  • Evidence of oversight capability (training records, SOP, authority documentation)

Section G: Mitigation, Governance & Complaints (Article 27(f))

What you capture:

  • Specific mitigations mapped to each identified risk (e.g., "Regular bias testing quarterly to address discrimination risk")
  • Governance arrangements (who reviews FRIA results, escalation paths)
  • Complaint mechanism (how affected persons can raise concerns)
  • Monitoring plan (what metrics are tracked, how often reviewed)
  • Reassessment triggers (what changes would require a FRIA update)
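
A monitoring plan with reassessment triggers can be kept as structured data so it is easy to review and act on. This is a minimal sketch; the field names are assumptions, not Klarvo's schema.

```python
# Hypothetical structure for a Section G monitoring plan and its
# reassessment triggers. Field names are illustrative, not Klarvo's schema.
monitoring_plan = {
    "metrics": [
        {"name": "approval_rate_disparity", "review_every_days": 90},
        {"name": "human_override_rate", "review_every_days": 30},
    ],
    "reassessment_triggers": [
        "model retrained or replaced",
        "new affected group added",
        "material change to intended purpose",
    ],
}

def needs_reassessment(change: str) -> bool:
    """Return True if a described change matches a recorded trigger."""
    return change in monitoring_plan["reassessment_triggers"]

print(needs_reassessment("model retrained or replaced"))  # True
```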

Section H: Approval & Notification

What you capture:

  • Final conclusion: Approve / Approve with Mitigations / Do Not Deploy
  • Approver name(s) and date
  • Whether market surveillance authority notification is required
  • Upload notification submission evidence (if applicable)

Auto-Generated Outputs

After completing the FRIA Wizard:

  • FRIA Report PDF: Professional, audit-ready document covering all sections
  • FRIA Result Summary: One-page executive summary
  • Mitigation Tasks: Auto-created tasks for each identified mitigation measure
  • Monitoring Schedule: Calendar items for periodic review

FRIA Templates (Pro & Enterprise)

Pro and Enterprise users can start from 5 pre-built templates for common AI use cases: HR Recruitment, Customer Chatbot, Credit Scoring, Content Moderation, and Healthcare AI. Each template pre-fills the wizard with realistic data — risks, oversight measures, mitigation plans — that you can customise for your specific deployment.

Board-Ready Summary (Pro & Enterprise)

Step 7 (Approval) now includes a formatted executive summary designed for board presentation. It features:

  • Risk metric cards: Total risks, high severity count, overall risk level
  • Risk heat map: Visual likelihood × severity matrix
  • Affected groups badges: At-a-glance view of who is impacted
  • Key findings: Auto-generated bullet points summarising the assessment
  • Deployment recommendation: Based on your risk profile — "Deploy with standard monitoring", "Deploy with enhanced mitigations", or "Deployment review recommended"
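
The heat map is just a count of risks per likelihood × severity cell. A sketch, assuming each risk carries the Low/Medium/High ratings from Section E (the input format is an assumption; the wizard's internal data model may differ):

```python
# Sketch of the likelihood x severity heat map: count risks per cell.
# The (likelihood, severity) pair format is an assumption.
ORDER = ["Low", "Medium", "High"]

def heat_map(risks):
    """risks: list of (likelihood, severity) pairs -> 3x3 count grid."""
    grid = [[0] * 3 for _ in ORDER]  # rows = likelihood, cols = severity
    for likelihood, severity in risks:
        grid[ORDER.index(likelihood)][ORDER.index(severity)] += 1
    return grid

risks = [("Medium", "High"), ("Low", "Low"), ("Medium", "High")]
for row_label, row in zip(ORDER, heat_map(risks)):
    print(row_label, row)
# The Medium-likelihood row shows 2 risks in the High-severity column.
```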

Best Practices

📋 Be specific: Generic risk statements like "there may be bias" are insufficient — describe the specific mechanism of harm.
👥 Consult stakeholders: Include perspectives from system users, affected communities (where practical), and subject matter experts.
🔄 Plan for updates: Set a reassessment date and define what changes trigger a FRIA update.
📄 Reference the DPIA: If one exists, cross-reference rather than duplicate analysis.