Transparency Obligations (Article 50)
Powered by KlarvoEngine: Transparency obligations are identified automatically during Pass 2 of KlarvoEngine classification. KlarvoEngine determines which Article 50 paragraphs apply (50(1), 50(2), 50(3), 50(4)) and generates the corresponding disclosure text. See How KlarvoEngine Works.
Article 50 of the EU AI Act establishes transparency obligations for certain AI systems, regardless of their risk classification. Even a "minimal risk" system may have transparency duties if it interacts with people or generates synthetic content.
The Five Transparency Scenarios
Scenario 1: Direct Interaction with People
When it applies: Your AI system interacts directly with natural persons (e.g., chatbots, virtual assistants, AI-powered support agents).
What you must do: Inform the person that they are interacting with an AI system — unless this is obvious from the circumstances and context to a reasonably well-informed, observant, and circumspect person.
Implementation: Display a clear notice before or at the point of first interaction, for example a persistent banner or an opening message in the chat window. A minimal sketch follows.
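The notice can be as simple as a short banner rendered before the first message. The TypeScript sketch below is illustrative only: the element ID, class name, and wording are assumptions, not prescribed by the Act. Note the ARIA attributes, which also support the accessibility requirements described later in this page.

```typescript
// Hypothetical example: inject an AI-interaction disclosure banner
// before the first chat message is shown.

function renderAiDisclosure(container: HTMLElement): void {
  const banner = document.createElement("div");
  banner.setAttribute("role", "status");   // exposed to assistive technology
  banner.setAttribute("aria-live", "polite");
  banner.className = "ai-disclosure-banner";
  banner.textContent =
    "You are interacting with an AI assistant, not a human agent.";
  container.prepend(banner);               // visible before any messages
}

// Usage: call once when the chat widget mounts.
const chatRoot = document.getElementById("chat-widget");
if (chatRoot) {
  renderAiDisclosure(chatRoot);
}
```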
Evidence to upload: Screenshot or copy of the disclosure notice as displayed to users.
Scenario 2: Synthetic Content Generation
When it applies: Your AI system generates synthetic audio, image, video, or text content.
What you must do: Ensure outputs are marked as artificially generated or manipulated in a machine-readable format. This is primarily a provider obligation, but deployers should verify their provider complies and understand what markings exist.
Implementation: Confirm with your provider that outputs carry machine-readable markings (for example, embedded metadata or provenance manifests) and document how those markings can be verified. A simplified sketch follows.
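To illustrate what a machine-readable marking can look like, here is a deliberately simplified TypeScript sketch of a provenance record stored alongside a generated asset. The interface and field names are hypothetical; in practice you would rely on your provider's tooling and an established standard such as C2PA manifests rather than rolling your own.

```typescript
// Illustrative only: a simplified, machine-readable provenance record
// attached to a generated asset. Field names are hypothetical.

interface SyntheticContentMark {
  generator: string;           // model or system that produced the content
  generatedAt: string;         // ISO 8601 timestamp
  artificiallyGenerated: true; // the machine-readable flag itself
  contentHash: string;         // binds the mark to the exact output
}

function markOutput(contentHash: string, generator: string): SyntheticContentMark {
  return {
    generator,
    generatedAt: new Date().toISOString(),
    artificiallyGenerated: true,
    contentHash,
  };
}

// Example: store the mark alongside the generated file.
const mark = markOutput("sha256:<hex digest of the generated file>", "acme-image-model-v2");
console.log(JSON.stringify(mark, null, 2));
```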
Evidence to upload: Provider documentation confirming output marking capability, plus examples of marked outputs.
Scenario 3: Emotion Recognition or Biometric Categorisation
When it applies: Your system performs emotion recognition or biometric categorisation on people (in contexts not prohibited by Article 5).
What you must do: Inform the persons exposed to the system about its operation and process personal data in accordance with applicable data protection law.
Implementation: Place a clear notice wherever people are exposed to the system (on screen, on signage, or in onboarding materials), and keep a record of when and how each notice was presented. A sketch follows.
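One practical pattern is to log an audit record each time the notice is presented, so the evidence exists when you need it. The following TypeScript sketch assumes a hypothetical ExposureNotice structure; adapt the wording and the persistence mechanism to your own stack.

```typescript
// Hypothetical sketch: record that the required notice was shown before
// an emotion-recognition session starts, so it can be evidenced later.

interface ExposureNotice {
  systemName: string;
  noticeText: string;
  shownAt: string;   // ISO 8601 timestamp
  channel: "on-screen" | "signage" | "email";
}

function logExposureNotice(
  systemName: string,
  channel: ExposureNotice["channel"],
): ExposureNotice {
  const notice: ExposureNotice = {
    systemName,
    noticeText:
      `${systemName} analyses facial expressions to estimate emotional state. ` +
      "Personal data is processed as described in our privacy notice.",
    shownAt: new Date().toISOString(),
    channel,
  };
  // In practice this record would be persisted to an audit store.
  console.log(JSON.stringify(notice));
  return notice;
}

logExposureNotice("Lobby sentiment camera", "signage");
```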
Evidence to upload: Notice copy/screenshot, updated privacy policy excerpt, DPIA reference.
Scenario 4: Deepfake Disclosure
When it applies: Your system generates or manipulates image, audio, or video content that constitutes a "deep fake": content that appreciably resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful.
What you must do: Disclose that the content has been artificially generated or manipulated.
Implementation: Pair a human-visible label with a machine-readable flag on every generated or manipulated item. A sketch follows.
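A minimal TypeScript sketch of this pattern, with hypothetical field names: the caption carries the human-visible disclosure, while the boolean flag lets downstream systems detect the manipulation programmatically.

```typescript
// Illustrative sketch: attach both a human-visible caption and a
// machine-readable flag to a piece of generated media.

interface LabelledMedia {
  url: string;
  caption: string;        // shown alongside the media
  aiManipulated: boolean; // machine-readable flag for downstream tools
}

function labelDeepFake(url: string, baseCaption: string): LabelledMedia {
  return {
    url,
    caption: `${baseCaption} (This content has been digitally generated or manipulated.)`,
    aiManipulated: true,
  };
}

const labelled = labelDeepFake("https://example.com/clip.mp4", "Campaign teaser");
console.log(labelled.caption);
```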
Evidence to upload: Examples of disclosure labels, content policy documentation.
Scenario 5: AI-Generated Text on Public Interest Matters
When it applies: Your AI system generates or manipulates text that is published for the purpose of informing the public on matters of public interest.
What you must do: Disclose that the content was AI-generated or manipulated — unless the content is subject to human editorial review and a natural or legal person holds editorial responsibility for the publication.
Key exception: If a human editor reviews the content and a natural or legal person holds editorial responsibility for its publication, the disclosure obligation does not apply. This exception matters for media companies and content platforms; the decision logic is sketched below.
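The exception can be expressed as a small decision helper. This TypeScript sketch mirrors the rule as described above; the types and field names are illustrative, not drawn from the Act.

```typescript
// Hypothetical decision helper: disclosure is required unless the text
// has undergone human editorial review AND a natural or legal person
// holds editorial responsibility for the publication.

interface PublishedText {
  aiGenerated: boolean;
  humanEditorialReview: boolean;
  editorialResponsibilityHolder: string | null; // natural or legal person
}

function disclosureRequired(item: PublishedText): boolean {
  if (!item.aiGenerated) return false;
  const exceptionApplies =
    item.humanEditorialReview && item.editorialResponsibilityHolder !== null;
  return !exceptionApplies;
}

// A reviewed article with a named responsible editor needs no label:
console.log(disclosureRequired({
  aiGenerated: true,
  humanEditorialReview: true,
  editorialResponsibilityHolder: "Example Media GmbH",
})); // false
```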
Evidence to upload: Editorial workflow documentation if claiming the exception; disclosure labels if not.
Documenting Transparency Compliance in Klarvo
For each applicable scenario, record which Article 50 paragraph applies, describe the disclosure method you use, and upload the evidence listed above against the relevant system.
Accessibility
All transparency notices must be accessible to users with disabilities. This includes:
Screen-reader compatibility (for example, ARIA roles on in-app banners)
Sufficient colour contrast and legible text size
Plain, easily understood language
A non-visual alternative where the primary notice is visual-only
Best Practices
🔍 Screen every system: Transparency obligations apply regardless of risk level
📸 Screenshot everything: Capture how disclosures appear to users — this is your primary evidence
🔄 Update on UI changes: Any major UI change should trigger a re-capture of transparency evidence
📋 Check provider compliance: For synthetic content marking, verify your provider's capabilities