EU AI Act Fines: The 35 Million Number Is Usually Wrong. What Mittelstand CEOs Actually Risk in 2026
The 35 million figure only applies to Art. 5. Mittelstand companies face Tier 2 (15M/3%) or Tier 3 (7.5M/1%). Plus: § 43 GmbHG personal liability.
The number that appears in every second LinkedIn post is 35 million. It is wrong. More precisely, it is the maximum penalty for a very narrow category of violations that a 200-employee Mittelstand company in Andernach, Heilbronn or Bielefeld practically never commits. The actual fine structure of the EU AI Act has three tiers. And for managing directors, the most explosive number does not come from Brussels at all. It comes from § 43 of the German GmbHG. What you actually risk is in Art. 99(3) to (6), in Art. 101, and in § 43 GmbHG. Anyone sharing a 35-million headline has not read the source text.
The key numbers at a glance
| Tier | Fine | Trigger |
|---|---|---|
| Tier 1 (Art. 99(3)) | up to 35M EUR or 7% global annual turnover, whichever is higher | Violation of Art. 5 (prohibited AI practices) |
| Tier 2 (Art. 99(4)) | up to 15M EUR or 3% global annual turnover | Provider and deployer obligations (Art. 16, 22-25, 26, 31, 33, 50) |
| Tier 3 (Art. 99(5)) | up to 7.5M EUR or 1% global annual turnover | False, incomplete or misleading information to authorities or notified bodies |
| SME exception (Art. 99(6)) | the lower of the two values applies, not the higher | SMEs and startups |
| GPAI separate (Art. 101) | up to 15M EUR or 3% | Separate Commission procedure against GPAI providers |
| § 43 GmbHG | reach into personal assets | Organisational fault when no governance structure exists |
The deadline that matters: 02 August 2026. From that date the sanction regime kicks in fully for high-risk systems. Art. 5 prohibitions have been live since February 2025.
The 5 myths that cost you money
Myth 1: "The AI Act costs 35 million."
What actually applies: 35 million or 7% of turnover only kicks in for violations of Art. 5. Those are the eight prohibited practices: social scoring by public authorities, manipulative subliminal techniques, exploitation of vulnerabilities, real-time biometric identification in public spaces, emotion detection at work or in education, predictive policing based purely on profiling, biometric categorisation based on sensitive characteristics, and untargeted scraping for facial databases. Anyone running a customer-service chatbot in a mid-sized machine-building company is not even close. The headline fits frontier labs, platform corporations and government bodies. Not your plant in Andernach.
Myth 2: "Mittelstand pays like a corporate."
What actually applies: Art. 99(6) explicitly states that for SMEs and startups, the amounts are read so the lower of the two values (fixed amount or turnover percentage) applies, not the higher. That is the exact opposite of corporate logic. For a Mittelstand company with 80M EUR in turnover, Tier 2 is not 15M but 2.4M (3% of 80M). Tier 3 is not 7.5M but 800,000 EUR. Still serious, but not insolvency-trigger territory. This norm is the only place in the text where Brussels visibly priced the Mittelstand into the framework. Any management team that does not know Art. 99(6) is paying compliance consultants more than necessary.
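The Art. 99(6) logic is mechanical enough to write down. A minimal sketch, assuming the tier caps from the table above and a simplified SME flag (this is an illustration of the arithmetic, not legal advice):

```python
# Illustrative sketch of the Art. 99 fine corridors described above.
# Tier selection and SME status are simplified assumptions.

TIERS = {
    # tier: (fixed cap in EUR, share of global annual turnover)
    1: (35_000_000, 0.07),  # Art. 99(3): Art. 5 prohibitions
    2: (15_000_000, 0.03),  # Art. 99(4): provider/deployer obligations
    3: (7_500_000, 0.01),   # Art. 99(5): misleading information to authorities
}

def max_fine(tier: int, annual_turnover_eur: float, is_sme: bool) -> float:
    """Upper bound of the fine corridor for a given tier.

    Non-SMEs: the HIGHER of fixed cap and turnover share (Art. 99(3)-(5)).
    SMEs and startups: the LOWER of the two values (Art. 99(6)).
    """
    fixed_cap, pct = TIERS[tier]
    turnover_share = annual_turnover_eur * pct
    return min(fixed_cap, turnover_share) if is_sme else max(fixed_cap, turnover_share)

# The worked example from the text: SME with 80M EUR turnover.
print(max_fine(2, 80_000_000, is_sme=True))  # 2.4M, not 15M
print(max_fine(3, 80_000_000, is_sme=True))  # 800k, not 7.5M
```

The `min`/`max` flip in the last line is the entire Mittelstand privilege: one word in Art. 99(6) inverts the corporate logic.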
Myth 3: "Deployer violations bring 35 million."
What actually applies: deployer obligations sit in Art. 26. The sanction range for Art. 26 is Tier 2 (15M / 3%, lower value for SMEs). That is the relevant corridor for 90% of the Mittelstand, because you are almost never a provider (you rarely build models) but regularly a deployer (you use Copilot, Claude Enterprise, ChatGPT Enterprise, or industry-specific high-risk applications). The obligations there are operational: logging, human oversight, impact assessment for sensitive applications, employee information when used in employment contexts. That is compliance work, not Tier-1 drama.
Myth 4: "The managing director is not personally liable, the company pays."
What actually applies: § 43 GmbHG. Managing directors are liable to the company for damages caused by breach of their duty of care. With AI governance that does not exist, the legal label is organisational fault. If the GmbH pays a 1.5M Tier-2 fine, it can claim the damage back from you as managing director, provided you established no governance structure. D&O policies typically do not cover fines. For German AGs, the corresponding provision is § 93 AktG. Anyone running 200 employees and not regulating shadow AI is not risking 35 million. They are risking their personal assets via § 43 GmbHG. That is the number that gets sober treatment in CFO meetings, not the 35 in the LinkedIn header.
Myth 5: "GPAI fines fall under Art. 99."
What actually applies: Art. 101 is a separate procedure. The European Commission, not national authorities, can impose fines of up to 15M or 3% on GPAI providers (OpenAI, Anthropic, Google, Mistral, Aleph Alpha). That runs in parallel to the national Art. 99 framework and practically does not affect Mittelstand companies directly. But: if your GPAI provider is sanctioned and service breaks, that is a vendor risk. Vendor concentration risk belongs in the risk report, alongside data protection and cyber.
| Myth | What actually applies | Source |
|---|---|---|
| AI Act = 35M fine | 35M / 7% only for Art. 5 prohibitions | Art. 99(3) |
| Mittelstand pays like corporate | SMEs: lower value applies | Art. 99(6) |
| Deployer = Tier 1 | Art. 26 falls under Tier 2 (15M / 3%) | Art. 99(4) |
| MD not personally liable | § 43 GmbHG: personal recourse for organisational fault | § 43 GmbHG |
| GPAI fines = Art. 99 | Separate Commission procedure in Art. 101 | Art. 101 |
What actually triggers the 35 million: the Art. 5 list
The Tier-1 framework applies to prohibited AI practices. Most of them are exotic in the Mittelstand. But three constellations can show up in a normal industrial company without anyone noticing.
Emotion detection at work or in educational settings. Prohibited under Art. 5(1)(f). That means: an employee monitoring tool that analyses voice or facial expression to assess stress, attention or "engagement" is not allowed. Not even in a call centre. Not even in training rooms. Exception: medical and safety reasons. Anyone buying a US "productivity analytics" tool that taps webcam or microphone has a Tier-1 problem.
Real-time biometric identification in publicly accessible space. Prohibited under Art. 5(1)(h). Reception desks running live face recognition against an employee database that also captures visitors are borderline. Plant entrances with live matching against external databases are out. Access control with employee badge plus photo verification in a closed system is fine. The difference lies in "publicly accessible" and "real time".
Untargeted scraping to build facial databases. Prohibited under Art. 5(1)(e). Anyone building an "internal recruiting AI" that mass-scrapes LinkedIn profiles including photos to build a candidate database is squarely inside the prohibition. Even if the tool was sold as "sales intelligence".
All other Art. 5 prohibitions (social scoring by public authorities, manipulative subliminal techniques, predictive policing based purely on profiling) are foreign to the Mittelstand. But the three above are underestimated, because they are sold as standard SaaS.
What will actually hit the Mittelstand: Tier-2 deployer obligations
Art. 26 is the norm where Mittelstand companies live operationally. The obligations:
- Retain logs (para. 6): logs of a high-risk AI system for at least six months, unless other law requires otherwise.
- Ensure human oversight (para. 2): persons exercising oversight must have the necessary competence, training, authority and support.
- Inform employees (para. 7): before deploying a high-risk AI system at the workplace, employees and their representatives must be informed.
- Run GDPR DPIA (para. 9): where required under GDPR Art. 35.
- Fundamental rights impact assessment (Art. 27): for certain deployer constellations, mainly public bodies and banks / insurers running risk-scoring.
- Inform the provider (para. 5): for serious incidents.
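The log-retention duty in para. 6 is the most mechanical of these and easy to get wrong in practice, because purge jobs are usually configured once and forgotten. A minimal sketch of the retention check, assuming the six-month AI Act floor and a hypothetical `other_law_days` override for stricter sector rules:

```python
from datetime import date, timedelta

# Six-month floor for high-risk AI system logs per the Art. 26(6)
# duty described above; other law may require a longer period.
MIN_RETENTION = timedelta(days=183)  # ~six months

def earliest_deletion_date(log_created: date, other_law_days: int = 0) -> date:
    """Date before which a log entry must not be purged."""
    retention = max(MIN_RETENTION, timedelta(days=other_law_days))
    return log_created + retention

def purge_allowed(log_created: date, today: date, other_law_days: int = 0) -> bool:
    """True once the applicable retention window has elapsed."""
    return today >= earliest_deletion_date(log_created, other_law_days)

# A log written on 2 August 2026 may not be purged before February 2027.
print(earliest_deletion_date(date(2026, 8, 2)))
```

The point of the sketch: retention is a property of each log entry, not of the system, so a blanket "delete after 90 days" policy on the logging platform is itself a Tier-2 finding waiting to happen.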
Violation = Tier 2. For a Mittelstand with 80M EUR in turnover, Art. 99(6) translates that to up to 2.4M. Realistic outcome: fines in the first wave will grow out of the concrete case. The national authority (in Germany: Bundesnetzagentur as central body plus sector-specific supervisors) will rarely swing the hammer at a 200-employee Mittelstand on a first violation with cooperative behaviour. But the threat scenario matters, because it feeds into the compliance contract with customers, banks and insurers.
§ 43 GmbHG: why CFOs and managing directors must stay alert
This is the underestimated risk axis. § 43 GmbHG requires managing directors to apply "the diligence of a prudent businessman". For a technology that changes the company across all departments, AI governance is part of prudent management. Anyone who fails to establish a governance structure risks organisational fault. Three typical triggers:
Shadow AI without inventory. Employees use ChatGPT, Claude, Copilot for sensitive tasks. Nobody knows what data lands where. When a violation surfaces (GDPR breach via prompt, trade-secret leak), the question is not whether but at what level the MD is personally liable.
No risk classification of deployed systems. Which tool is high-risk under Annex III? Which is not? If the MD cannot answer that, or has never had it answered, this is, in case of dispute, organisational fault.
No training in the sense of Art. 4. AI literacy has been mandatory since February 2025. Anyone ignoring that and standing in a 2027 dispute has a weak defence position.
D&O policies typically do not cover EU AI Act fines. As of 2026 most policies explicitly exclude "penalties, fines, sanctions". What they may cover: defence costs, internal liability claims by the GmbH against the MD, reputation damage. But the actual fine remains payable, and the recourse under § 43 GmbHG is not automatically insured. Anyone who skips the insurance review in 2026 is flying blind.
What you do this month
Three steps. They will not deliver full compliance before 02 August 2026, but they create a defendable line against § 43 GmbHG.
1. AI inventory with risk classification. List of all deployed AI tools (official and shadow AI). Per tool: provider, use case, risk class (prohibited / high-risk / limited / minimal), deployer obligations. A template and 90-day plan are in the audit readiness post.
2. Training record for Art. 4. At minimum management, compliance, IT lead, data protection documented as trained. Details and obligations are in the AI literacy post.
3. Governance resolution by management. Written MD resolution on responsibilities, approval process for new AI tools, incident reporting process. That is the central building block against organisational fault. The GPAI obligations context for August 2026 is in the GPAI deployer post, the GDPR angle in the agentic AI GDPR post, the liability question for hallucinations in the liability post, the shadow AI context in the Bitkom post, and the 90-day compliance plan for engineering in the engineering plan.
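The inventory in step 1 does not need a tool; a structured record per deployed system is enough to start. A minimal sketch, with hypothetical entries and field names chosen for illustration only:

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited"  # Art. 5: must not be deployed at all
    HIGH_RISK = "high-risk"    # Annex III: full Art. 26 deployer duties
    LIMITED = "limited"        # transparency duties (Art. 50)
    MINIMAL = "minimal"        # no specific AI Act duties

@dataclass
class AIToolRecord:
    name: str
    provider: str
    use_case: str
    risk_class: RiskClass
    shadow_ai: bool  # discovered in use without official approval

# Hypothetical inventory entries for illustration.
inventory = [
    AIToolRecord("ChatGPT Enterprise", "OpenAI", "drafting", RiskClass.LIMITED, False),
    AIToolRecord("CV screening tool", "ExampleVendor", "recruiting", RiskClass.HIGH_RISK, True),
]

# Items needing immediate attention: prohibited, high-risk, or unapproved use.
flagged = [t.name for t in inventory
           if t.risk_class in (RiskClass.PROHIBITED, RiskClass.HIGH_RISK) or t.shadow_ai]
print(flagged)  # ['CV screening tool']
```

Even this toy structure already answers the two questions a § 43 GmbHG dispute will ask first: did management know what was deployed, and did it classify the risk.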
Bottom line
Mittelstand does not pay like Daimler. Art. 99(6) says lower, not higher. But anyone who does not know the norm, has no governance, and lets shadow AI run, turns a Tier-2 risk into a § 43 GmbHG case. The real number that belongs in CFO meetings is not 35 million from Brussels. It is your personal assets from Berlin.
About the author
Sebastian Lang
Co-Founder · Business & Content Lead
Co-Founder of Sentient Dynamics. 15+ years of business strategy (including SAP), MBA. Writes about AI Act compliance, ROI measurement, and how Mittelstand CTOs actually introduce agentic AI.