
EU AI Act Art. 50: What Your UI Must Show From 02.08.2026 (or Pay 15M)

From 02.08.2026 every AI must out itself. Chatbot disclosure, watermark, deepfake label, emotion recognition notice. Breach: up to 15M EUR or 3%.

Sebastian Lang · May 8, 2026 · 11 min read

From 02.08.2026 every AI in your product must out itself. 4 of 5 DACH Mittelstand companies have not adapted their UI yet. The penalty: up to 15M EUR or 3% of global annual turnover, whichever is higher. Not 35M, that is Art. 5. But 15M for an 80M revenue mechanical engineering firm in Andernach, Heilbronn or Bielefeld still equals a quarter's margin. Art. 50 is the article with the shortest implementation runway in the entire AI Act: 4 obligations, 4 different addressees, one deadline, and a fine ceiling that most boards quote incorrectly.

What Art. 50 actually demands

Four paragraphs, four obligations. Three almost certainly hit you, one depends on the use case. Short form first, detail after.

| Paragraph | Addressee | Obligation | Typical trigger |
| --- | --- | --- | --- |
| Art. 50(1) | Provider | AI system that interacts directly with humans must be recognisable as AI | Chatbot, voice assistant, AI companion, copilot |
| Art. 50(2) | Provider | Synthetic audio, image, video or text output must be machine-readably marked as AI-generated | Image generator, voice cloning, GPT output |
| Art. 50(3) | Deployer | Emotion recognition or biometric categorisation requires informing affected persons | HR sentiment tool, call centre voice analysis |
| Art. 50(4) | Deployer | Deepfake image, audio or video must be labelled as artificially generated | Marketing avatar, AI voice in commercial, training video |

Per Art. 50(5) the information must be delivered "in a clear and distinguishable manner at the latest at the time of the first interaction or exposure". Translated: before the first click, not in the footer, not in the ToS, not in the cookie banner scroll.

[Diagram: Art. 50 obligations matrix, Provider vs. Deployer]

Provider vs. Deployer: who labels what

The most common mix-up in the DACH Mittelstand. The obligations of paragraphs 1 and 2 hit the provider, meaning whoever builds, trains, or substantially modifies the AI system. The obligations of paragraphs 3 and 4 hit the deployer, meaning whoever uses the AI system professionally.

Concretely: if you buy ChatGPT Enterprise and integrate it into your customer support UI, you are the deployer; OpenAI is the provider. For Art. 50(1) (chatbot recognition) the design obligation sits with OpenAI: the system must be built so that its AI nature is detectable. But if you embed it so the customer cannot tell they are talking to AI, the problem lands on you: you control the UI, so keeping the system's AI nature recognisable there is your responsibility in practice, and if you publish its text output to inform the public, the deployer disclosure under Art. 50(4) applies on top. (Note: the value-chain rules in Art. 25 and the deployer duties in Art. 26 only kick in once the use case crosses into Annex III high-risk territory, typically not the case for a vanilla customer support bot.)

If you train a model yourself, fine-tune an open-source LLM on your domain, or bundle generative AI as white-label into your product: you are the provider. Then all four obligations sit with you.

Practical rule of thumb: if your logo is on the product UI and the end customer holds you liable, you carry the deployer duty, often the provider duty too.

Art. 50(1): Chatbot outing

The original text is sober: providers must ensure natural persons are "informed that they are interacting with an AI system". Exception: when this is "obvious from the point of view of a reasonably well-informed, observant and circumspect" person.

The exception is much narrower than 90% of LinkedIn posts claim. A chatbot with a robot avatar and the name "Bot Assistant Lisa" is not automatically obvious, because many customer support UIs use staff avatars and first names. Voice assistants that clone human voices are explicitly not obvious. Even the "Hi, I am the AI assistant" disclaimer at the start does not suffice if the conversation runs longer than a few turns: after 30 minutes the customer no longer remembers the initial notice.

What you concretely need:

  • Persistent "AI assistant" badge in the conversation UI, not just at the start.
  • Plain language: "You are chatting with an AI" or "AI Assistant", not "virtual employee" or "smart agent".
  • For voice: explicit announcement at the start and before every sub-flow that could imply a human (e.g. complaint escalation).
  • Audit trail: logging that the notice was served, with timestamp (a minimal sketch follows this list).
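
For the audit trail, here is a minimal sketch of what a disclosure log entry could look like. The endpoint path and the `logDisclosure` helper are our own illustrative names, not from any framework; it shows one way to persist the evidence, not a prescribed format:

```typescript
// Minimal sketch of an Art. 50(1) disclosure audit trail.
// The endpoint and helper names are hypothetical, not from any library.

interface DisclosureEvent {
  sessionId: string;
  channel: "chat" | "voice";
  noticeText: string;   // exact wording shown, e.g. "You are chatting with an AI"
  shownAt: string;      // ISO 8601 timestamp
  persistent: boolean;  // true if the badge stays visible for the whole conversation
}

async function logDisclosure(event: DisclosureEvent): Promise<void> {
  // Append-only store, so the record survives later UI redesigns.
  await fetch("/api/audit/disclosures", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}

// Call before rendering the first assistant message:
void logDisclosure({
  sessionId: crypto.randomUUID(),
  channel: "chat",
  noticeText: "You are chatting with an AI assistant.",
  shownAt: new Date().toISOString(),
  persistent: true,
});
```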

Exception within the exception: law enforcement tools (Art. 50(1) sentence 2). Irrelevant for 99.9% of the Mittelstand.

Art. 50(2): Synthetic content machine readability (C2PA)

This gets technical. Providers of AI systems that generate synthetic audio, image, video or text output must ensure that output is "marked in a machine-readable format and detectable as artificially generated or manipulated".

The practical standard the EU Commission, the AI Office and the major platforms are converging on is C2PA (Coalition for Content Provenance and Authenticity). C2PA embeds cryptographically signed metadata into the content container, exposing origin, authoring tool and edit history. Adobe, Microsoft, Sony, OpenAI, Google have committed. For JPEG, MP4, WAV, PNG, C2PA is the default. For pure text there is no unified standard yet, in practice many use provenance tags in HTML or watermark token patterns.

Exception: the model only performs "assistive editing" or does not substantially alter the input. Example: auto-correct in Word is not Art. 50(2), an AI-generated image from Midjourney is.

What you concretely need (as provider):

  • C2PA manifest for all generative outputs (manifest shape sketched after this list).
  • Watermark detection API that platforms can query.
  • Documentation that the watermark is "robust and reliable" (not killed by crop, re-save, JPEG recompression).
  • For text: provenance meta in HTML or a lookup-capable token signature scheme.
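
What such a manifest carries, as a rough sketch: the object below follows the C2PA assertion model with the IPTC digital source type for AI-generated media. The product name is a placeholder and `signAndEmbed` is a hypothetical helper; real signing goes through the C2PA SDK or the c2patool CLI.

```typescript
// Rough sketch of a C2PA manifest for an AI-generated image.
// The shape follows the C2PA assertion model; `signAndEmbed` is a
// hypothetical helper. Real signing goes through the C2PA SDK or
// the c2patool CLI.

const manifest = {
  claim_generator: "acme-imagegen/1.0", // your product name and version (placeholder)
  title: "product-hero.jpg",
  assertions: [
    {
      label: "c2pa.actions",
      data: {
        actions: [
          {
            action: "c2pa.created",
            // IPTC digital source type for fully AI-generated media:
            digitalSourceType:
              "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
          },
        ],
      },
    },
  ],
};

// Hypothetical final step: cryptographically sign the manifest and
// embed it in the file container.
// await signAndEmbed("product-hero.jpg", manifest, signingCert);
```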

What you need (as deployer): nothing directly from paragraph 2, but paragraph 4 often kicks in once you forward the output to end customers.

Art. 50(3): Emotion recognition disclosure

Whoever deploys an emotion recognition system or a biometric categorisation system must inform the affected persons of the operation of the system, and must process the data per GDPR.

Sounds narrow, is not. What is emotion recognition? Any system that infers emotions or intentions from biometric data. Voice sentiment analysis in a call centre: yes. Burnout prediction from typing patterns: yes. Sentiment from facial expression in a sales call: yes. We see this in training tools, in customer experience platforms, in HR tech (even though HR use cases usually also fall under Annex III high-risk, which adds further duties).

Biometric categorisation: assigning natural persons to categories based on biometric data. If a system estimates age, gender or mood from a photo for marketing targeting: yes. Here Art. 50 overlaps heavily with the prohibition list in Art. 5 (categorisation by sensitive attributes such as religion, sexual orientation). If your system sorts on one of those sensitive categories: that is Art. 5, Tier 1 fine up to 35M.

What you concretely need:

  • Pre-interaction notice disclosing the sentiment tool: "This conversation is automatically analysed for tonality".
  • GDPR duties in parallel: legal basis (typically Art. 6(1)(f) with balancing test, plus an Art. 9 GDPR check where the biometric data uniquely identifies individuals), notice per Art. 13 GDPR, deletion concept.
  • Opt-out path. Even if GDPR theoretically supports legitimate interest or contract, DACH supervisory practice on emotion recognition is extremely strict (a minimal gating sketch follows this list). More in our GDPR + Agentic AI Production guide.
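
What that looks like in the flow, as a minimal sketch: the function gates the sentiment pipeline on a served notice and an opt-out check. All names here are our own, not from any framework.

```typescript
// Minimal sketch: gate the sentiment pipeline on a served notice and an
// opt-out check. All names are illustrative, not from any framework.

interface CallContext {
  callId: string;
  noticeServedAt?: string; // set once the caller has heard the announcement
  optedOut: boolean;
}

function mayAnalyseSentiment(ctx: CallContext): boolean {
  // Art. 50(3): inform the person before the system operates on them.
  // No notice served, or an opt-out on record, means no analysis.
  return Boolean(ctx.noticeServedAt) && !ctx.optedOut;
}

// In the IVR flow, before routing audio to the sentiment engine:
const ctx: CallContext = {
  callId: "c-4711",
  noticeServedAt: new Date().toISOString(), // after playing the announcement
  optedOut: false,
};

if (mayAnalyseSentiment(ctx)) {
  // forward audio frames to the analysis service
} else {
  // continue the call without emotion recognition
}
```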

Art. 50(4): Deepfake labelling

Deployers who generate or manipulate deepfake image, audio or video content must disclose that the content was artificially generated. This applies regardless of whether you are provider or just using a tool.

This is the paragraph that marketing departments will trip over en masse in 2026. Concrete cases:

  • AI-generated CEO avatar in the corporate film: deepfake label needed.
  • Voice-cloned voice in the explainer video (not the actual CEO): deepfake label.
  • AI-generated product spokesperson character: deepfake label, even if fictional.
  • Stock-photo-looking AI images in the blog header: paragraph 4 is softer here, because the AI Act's "deepfake" definition (Art. 3(60)) targets content that deceptively resembles real persons, objects, places or events; but paragraph 2 (provider watermark) applies, and in practice "AI-generated" disclaimers are becoming standard.

The crucial exception: artistic, creative, satirical, fictional. But even then disclosure must occur "in an appropriate manner that does not hamper the display or enjoyment of the work", indicating that the content was generated. A hidden note in the credits suffices. Complete silence does not.

Often forgotten second sentence: AI-generated text that is "published with the purpose of informing the public on matters of public interest" must be disclosed as AI-generated. Exception: human review or editorial control. For your marketing blog posts that an intern quickly generates with GPT and publishes without review: disclosure obligation. If the marketing lead editorially owns every post: no obligation under paragraph 4 sentence 2 (but paragraph 2 provider watermark remains).
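
To make the paragraph 4 logic tangible, here is a simplified triage sketch. It is a planning aid, not legal advice; the boolean inputs are assumptions you would pin down with counsel.

```typescript
// Simplified Art. 50(4) triage sketch. A planning aid, not legal advice;
// the boolean inputs are assumptions to pin down with counsel.

interface ContentItem {
  kind: "image" | "audio" | "video" | "text";
  resemblesRealEntity: boolean;   // would falsely appear authentic (deepfake test)
  informsPublicInterest: boolean; // text published to inform the public
  humanEditorialControl: boolean; // a person holds editorial responsibility
  artisticOrSatirical: boolean;
}

type LabelDuty = "visible-label" | "discreet-disclosure" | "none";

function deepfakeLabelDuty(item: ContentItem): LabelDuty {
  if (item.kind !== "text") {
    // Not a deepfake: paragraph 4 does not bite, though the paragraph 2
    // provider watermark may still apply.
    if (!item.resemblesRealEntity) return "none";
    // Artistic/satirical: the duty is reduced, not removed (credits note,
    // subtitle hint, discreet watermark).
    return item.artisticOrSatirical ? "discreet-disclosure" : "visible-label";
  }
  // Text, paragraph 4 sentence 2: public-interest text without human
  // editorial control must be disclosed as AI-generated.
  if (item.informsPublicInterest && !item.humanEditorialControl) {
    return "visible-label";
  }
  return "none";
}
```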

The exceptions nobody understands correctly

Three exception categories, three confusion traps.

Law enforcement (Art. 50(1) sentence 2, (3) sentence 2). Police and prosecutors may operate without disclosure, with safeguard clause. Irrelevant for 99.9% of the Mittelstand. Exception: you operate as IT service provider for authorities, then your contract must explicitly map this.

Artistic/satirical content (Art. 50(4) sentence 2). Reduces the duty, does not eliminate it. An "appropriate" disclosure that does not destroy the work remains. Subtitle hint "AI-generated", credits entry, or a discreet watermark. What does not suffice: no hint at all.

Machine-readable often is not enough. Paragraph 2 demands machine-readable marking (provider duty). But paragraph 4 demands "disclosure" for deepfakes, which under paragraph 5 must be "clear and distinguishable to the natural person". A C2PA tag in the background XMP does not suffice if the end user cannot see it. Double duty: machine-readable watermark and visible label.
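
The visible half of that double duty can be as small as a caption. A minimal sketch, assuming the file itself already carries the C2PA manifest from paragraph 2:

```typescript
// Sketch of the visible layer of the double duty: the file carries the
// machine-readable C2PA manifest, the UI adds a human-visible label.

function renderLabelledMedia(src: string, alt: string): string {
  // Caption wording mirrors Art. 50(4)/(5): clear and distinguishable.
  return `
    <figure>
      <img src="${src}" alt="${alt}">
      <figcaption>AI-generated content</figcaption>
    </figure>`;
}
```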

What 02.08.2026 actually activates (vs. 02.08.2027)

The deadline is constantly cited incorrectly. The facts:

  • 02.02.2025: Prohibitions per Art. 5 (prohibited AI practices) and AI literacy duty per Art. 4 are active.
  • 02.08.2025: GPAI duties per Art. 53 ff. for new models are active. More in our GPAI obligations guide.
  • 02.08.2026: Sanctions for Art. 50 (transparency duties) become enforceable. High-risk duties for Annex III stand-alone systems (HR, education, credit scoring, critical infrastructure) become fully fineable. (The governance rules, covering the AI Office and national competent authorities, have applied since 02.08.2025 per Art. 113(b).)
  • 02.08.2027: High-risk duties per Art. 6(1) for AI embedded in Annex I products (machinery, medical devices, toys, other Union harmonisation legislation) become fully effective.

Meaning: from 02.08.2026 supervisory authorities can impose fines for Art. 50 breaches. And they will. Bundesnetzagentur as the national supervisor in Germany has been building the staff and sanction pipeline since Q1 2026. Anyone running chatbots without disclosure banner on 03.08.2026 is the first target.

Fines anchor

A breach of Art. 50 is a breach of "duties of providers and deployers", which under Art. 99(4) means Tier 2: up to 15M EUR or 3% of global annual turnover, whichever is higher. Not 35M (that is Art. 5, Tier 1). Not 7.5M (that is Tier 3 for false information to authorities).

For a 200M revenue Mittelstand company that means 6M EUR maximum fine. And the SME special rule from Art. 99(6) kicks in: for small and medium-sized enterprises the lower of the two values applies, not the higher. For an 80M EUR mechanical engineering firm that would be "only" 2.4M EUR instead of 15M. Still a quarter of margin gone, plus reputational damage, plus follow-up injunction, plus parallel GDPR proceedings. Detailed math in our fines myth article.

5 immediate actions, 90 days

If you start now you make 02.08.2026. If you start in early July, you do not.

  1. UI audit of all customer-facing AI systems. Who interacts directly with end customers in the product? Chatbots, voice IVR, recommendations, AI mail replies. List with owner and disclosure status. Realistically two to three days of work for a 50-person company.

  2. Build the chatbot disclosure banner. Persistent badge in the conversation UI, German and English language versions, mobile tested. This is a frontend ticket, not a research project. One sprint suffices.

  3. Generative content watermarking plan (C2PA). Who is provider in your setup? If you train or fine-tune models yourself: C2PA manifest in the output pipeline. If you white-label a vendor model: review contracts, ensure watermark pass-through.

  4. Deployer inventory with labelling status. Duty from AI audit readiness: every AI system listed, classified (risk class), and Art. 50 obligations status (paragraphs 1, 3, 4). Goes straight into the 90-day audit readiness roadmap (a minimal record shape is sketched after this list).

  5. Legal review of all marketing materials. Who produced AI avatars, voice clones, generated images in 2025? What of that is still live? Which materials need an "AI-generated" label? Rule of thumb: anything created with generative AI after mid-2024 must go through the filter.
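
For step 4, a minimal record shape the inventory could use; the field names are suggestions, not a prescribed schema:

```typescript
// Minimal record shape for the deployer inventory in step 4.
// Field names are suggestions, not a prescribed schema.

interface AiSystemRecord {
  name: string;                 // e.g. "support-chatbot"
  owner: string;                // accountable person or team
  role: "provider" | "deployer" | "both";
  riskClass: "prohibited" | "high-risk" | "transparency" | "minimal";
  art50: {
    para1ChatbotDisclosure: "done" | "open" | "n/a";
    para2Watermark: "done" | "open" | "n/a";
    para3EmotionNotice: "done" | "open" | "n/a";
    para4DeepfakeLabel: "done" | "open" | "n/a";
  };
  reviewedAt: string;           // ISO date of the last audit check
}
```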

FAQ

If my UI already has a "powered by AI" footer, does that suffice? No. Art. 50(5) demands "clear and distinguishable at the time of first interaction". Footers are not read. It must be pre-interaction, ideally persistent.

We use ChatGPT Enterprise only internally. Does Art. 50 hit us? Paragraph 1 (chatbot) yes, if employees are exposed and it is not obvious AI (which with custom GPTs and own names often is not). Paragraph 3 (emotion recognition) and paragraph 4 (deepfake) only hit you if you are in those use cases. But Art. 4 (AI literacy) hits you for purely internal use cases too.

Does a C2PA watermark suffice, or do I need both? Both, if you push deepfakes. C2PA is the machine-readable provider duty from paragraph 2. The visible label is the deployer duty from paragraph 4 in conjunction with paragraph 5. Supervisory practice: without a visible note the disclosure is not "clear and distinguishable".

What about existing customers who have used our AI since 2024? The 02.08.2026 deadline is not "from contract signature" but "from operating date". Anyone running productively without disclosure on 03.08.2026 is in breach, regardless of when the contract was signed.

Sources

  • AI Act Regulation 2024/1689, Art. 50 (original): artificialintelligenceact.eu/article/50
  • AI Act Service Desk EU Commission, Art. 50: ai-act-service-desk.ec.europa.eu
  • EU Commission, Code of Practice on AI-generated Content Marking and Labelling
  • C2PA Standard (Coalition for Content Provenance and Authenticity): c2pa.org
  • Bird & Bird, Draft Transparency Code of Practice (2026)
  • AI Act Regulation 2024/1689, Art. 99(4) (Tier 2 fine ceiling)

We run the UI compliance audit for Art. 50

14 days. We map every AI system in your product stack against the four Art. 50 obligations, deliver a frontend and contract action list, and leave you clean for the 02.08.2026 deadline. Co-built by a team that itself is provider and deployer.

Book a session

Related reading: AI Act 90-day compliance plan, GPAI obligations August 2026, Fines myth, AI audit readiness 90 days, GDPR + Agentic AI Production.

About the author

Sebastian Lang

Co-Founder · Business & Content Lead

Co-Founder of Sentient Dynamics. 15+ years of business strategy (incl. SAP), MBA. Writes about AI Act compliance, ROI measurement, and how Mittelstand CTOs actually adopt agentic AI.
