
Transparency Obligations (Article 50)

Complete guide to Article 50 transparency requirements — when disclosure is required, what form it should take, and how to document compliance in Klarvo.


Powered by KlarvoEngine: Transparency obligations are identified automatically during Pass 2 of KlarvoEngine classification. KlarvoEngine determines which Article 50 sub-articles apply (50(1), 50(2), 50(3), 50(4)) and generates disclosure text. See How KlarvoEngine Works.

Article 50 of the EU AI Act establishes transparency obligations for certain AI systems, regardless of their risk classification. Even a "minimal risk" system may have transparency duties if it interacts with people or generates synthetic content.

The Five Transparency Scenarios

Scenario 1: Direct Interaction with People

When it applies: Your AI system interacts directly with natural persons (e.g., chatbots, virtual assistants, AI-powered support agents).

What you must do: Inform the person that they are interacting with an AI system — unless this is obvious from the circumstances and context to a reasonably well-informed, observant, and circumspect person.

Implementation:

  • Display a clear notice at the start of interaction: "You are chatting with an AI assistant" (see the sketch after this list)
  • For voice systems: Audio disclosure at conversation start
  • The notice must be timely, clear, and intelligible
  • Evidence to upload: Screenshot or copy of the disclosure notice as displayed to users.
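
For a web chat widget, the disclosure can be as simple as a banner rendered before the first exchange. The sketch below assumes a plain DOM-based UI; the container ID, class name, and notice wording are illustrative choices, not something prescribed by the Act or by Klarvo.

```typescript
// Minimal sketch: surface an AI-interaction disclosure at the start of a chat.
// Assumes a plain DOM UI; the container ID and wording are illustrative.
function showAiDisclosure(containerId: string): void {
  const container = document.getElementById(containerId);
  if (!container) return;

  const notice = document.createElement("p");
  notice.setAttribute("role", "status"); // announced by screen readers
  notice.className = "ai-disclosure-banner";
  notice.textContent = "You are chatting with an AI assistant.";

  // Prepend so the notice appears before the first message, i.e. the
  // disclosure is timely rather than buried mid-conversation.
  container.prepend(notice);
}

// Usage: call once, before the first exchange is rendered.
showAiDisclosure("chat-widget");
```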

Scenario 2: Synthetic Content Generation

When it applies: Your AI system generates synthetic audio, image, video, or text content.

What you must do: Ensure outputs are marked as artificially generated or manipulated in a machine-readable format. This is primarily a provider obligation, but deployers should verify their provider complies and understand what markings exist.

Implementation:

  • Verify your AI provider marks outputs (e.g., C2PA metadata, watermarking)
  • For text: Consider visible disclaimers like "AI-generated content" (see the sketch after this list)
  • Document what marking method is used
  • Evidence to upload: Provider documentation confirming output marking capability, plus examples of marked outputs.
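
On the deployer side, generated text can be wrapped with both a visible disclaimer and a machine-readable flag before publication. This is a minimal sketch with assumed field names; it complements, and does not replace, the provider's own marking of media outputs (e.g., C2PA metadata).

```typescript
// Sketch: attach a visible disclaimer and a machine-readable marking to
// generated text. Field names are assumptions for illustration.
interface MarkedOutput {
  content: string;       // the text shown to readers
  aiGenerated: true;     // machine-readable flag for downstream systems
  markingMethod: string; // how the output is marked, for your records
  generatedAt: string;   // ISO 8601 timestamp
}

function markTextOutput(raw: string): MarkedOutput {
  return {
    content: `AI-generated content\n\n${raw}`, // visible disclaimer
    aiGenerated: true,
    markingMethod: "text-disclaimer",
    generatedAt: new Date().toISOString(),
  };
}
```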

Scenario 3: Emotion Recognition or Biometric Categorisation

When it applies: Your system performs emotion recognition or biometric categorisation on people (in contexts not prohibited by Article 5).

What you must do: Inform the persons exposed to the system about its operation and process personal data in accordance with applicable data protection law.

Implementation:

  • Prominent notice where the system operates (e.g., signage, on-screen notice)
  • Privacy notice update to cover biometric/emotion data processing
  • Data protection impact assessment (likely required under GDPR)
  • Evidence to upload: Notice copy/screenshot, updated privacy policy excerpt, DPIA reference.

Scenario 4: Deepfake Disclosure

When it applies: Your system generates or manipulates image, audio, or video content that constitutes a "deep fake": content that appreciably resembles existing persons, objects, places, or events and would falsely appear authentic.

What you must do: Disclose that the content has been artificially generated or manipulated.

Implementation:

  • Visible label on generated content: "This content was generated using AI"
  • For video/audio: Disclosure at start and end (see the caption-track sketch after this list)
  • Metadata marking in addition to visible labels
  • Evidence to upload: Examples of disclosure labels, content policy documentation.
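
One way to deliver the start-and-end disclosure for generated video is a caption track. The sketch below emits a standard WebVTT file with a disclosure cue in the first and last five seconds; the timings and label wording are assumptions, and the track supplements, rather than replaces, metadata marking.

```typescript
// Sketch: build a WebVTT caption track with a disclosure cue at the start
// and end of a generated video. Timings and wording are illustrative.
function disclosureTrack(durationSeconds: number): string {
  // Format seconds as a WebVTT timestamp (HH:MM:SS.mmm).
  const fmt = (s: number): string => {
    const pad = (n: number) => String(n).padStart(2, "0");
    const h = Math.floor(s / 3600);
    const m = Math.floor((s % 3600) / 60);
    const sec = (s % 60).toFixed(3).padStart(6, "0");
    return `${pad(h)}:${pad(m)}:${sec}`;
  };

  const label = "This content was generated using AI";
  return [
    "WEBVTT",
    "",
    `${fmt(0)} --> ${fmt(Math.min(5, durationSeconds))}`,
    label,
    "",
    `${fmt(Math.max(0, durationSeconds - 5))} --> ${fmt(durationSeconds)}`,
    label,
    "",
  ].join("\n");
}
```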

Scenario 5: AI-Generated Text on Public Interest Matters

When it applies: Your AI system generates or manipulates text that is published for the purpose of informing the public on matters of public interest.

What you must do: Disclose that the content was AI-generated or manipulated, unless the content has undergone human editorial review and a natural or legal person holds editorial responsibility for the publication.

Key exception: If a human editor reviews the content and takes editorial responsibility for it, the disclosure obligation does not apply. This exception is important for media companies and content platforms.

Evidence to upload: Editorial workflow documentation if claiming the exception; disclosure labels if not.
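
As this article describes it, the Scenario 5 decision reduces to a small piece of logic, sketched below. The field names are illustrative assumptions, and an actual determination should still involve legal review.

```typescript
// Sketch of the Scenario 5 disclosure decision as described above.
// Field names are illustrative assumptions, not statutory terms.
interface PublicationContext {
  informsPublicInterest: boolean;   // published to inform the public
  humanEditorialReview: boolean;    // a human reviewed the content
  editorialResponsibility: boolean; // a person or entity holds editorial responsibility
}

function disclosureRequired(ctx: PublicationContext): boolean {
  if (!ctx.informsPublicInterest) return false; // scenario does not apply
  // Exception: human review plus editorial responsibility lifts the duty.
  return !(ctx.humanEditorialReview && ctx.editorialResponsibility);
}
```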

Documenting Transparency Compliance in Klarvo

For each applicable scenario:

  • The wizard flags the relevant transparency obligation during Step 10
  • Tasks are auto-created: "Implement [scenario] disclosure" and "Upload disclosure evidence"
  • Controls TRN-01 through TRN-07 are attached to the system
  • Upload evidence (screenshots, policy docs, provider confirmations) to the Evidence Vault
  • Link evidence to the relevant TRN control
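
To make the evidence-to-control linkage concrete, here is a hypothetical record shape for one transparency evidence item. Klarvo's actual data model is not documented here, so every field below is an assumption for illustration only.

```typescript
// Hypothetical shape for tracking transparency evidence against TRN
// controls. All fields are assumptions; Klarvo's real model may differ.
interface TransparencyEvidence {
  systemId: string;   // the AI system the evidence belongs to
  control: string;    // e.g. "TRN-01" through "TRN-07"
  scenario: 1 | 2 | 3 | 4 | 5; // which Article 50 scenario it covers
  fileRef: string;    // reference to the file in the Evidence Vault
  capturedAt: string; // when the screenshot/document was captured
}

// Example: a chatbot disclosure screenshot linked to a TRN control.
const example: TransparencyEvidence = {
  systemId: "sys-042",
  control: "TRN-01",
  scenario: 1,
  fileRef: "vault://evidence/chatbot-disclosure.png",
  capturedAt: "2026-02-15T09:00:00Z",
};
```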

Accessibility

All transparency notices must be accessible to users with disabilities. This includes:

  • Sufficient contrast for visual notices
  • Alternative text for images
  • Screen reader compatibility
  • Clear, simple language (avoid legal jargon in user-facing notices)
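
These points translate directly into markup. Below is a minimal sketch assuming a DOM-based notice; the wording, class name, and icon path are illustrative, and color contrast itself is a CSS concern (WCAG AA expects at least 4.5:1 for normal text).

```typescript
// Minimal sketch of an accessible disclosure notice. The wording, class
// name, and icon path are illustrative assumptions.
function accessibleDisclosure(): HTMLElement {
  const wrapper = document.createElement("div");
  wrapper.className = "ai-disclosure"; // style with >= 4.5:1 contrast (WCAG AA)

  const icon = document.createElement("img");
  icon.src = "/icons/ai-badge.svg";        // hypothetical asset path
  icon.alt = "AI-generated content badge"; // alternative text for the image

  const text = document.createElement("p");
  text.setAttribute("role", "status"); // read out by screen readers
  text.textContent = "This content was generated by an AI system."; // plain language

  wrapper.append(icon, text);
  return wrapper;
}
```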

Best Practices

🔍 Screen every system: Transparency obligations apply regardless of risk level.
📸 Screenshot everything: Capture how disclosures appear to users; this is your primary evidence.
🔄 Update on UI changes: Any major UI change should trigger a re-capture of transparency evidence.
📋 Check provider compliance: For synthetic content marking, verify your provider's capabilities.