AI Literacy Mandate from 2 August 2026: The Executive Checklist for DACH Mid-Market

AI Act Article 4: AI literacy mandate since 2.2.2025, enforcement from 2.8.2026. 3 training tiers, QCG funding up to 100% — almost nobody knows. What executives must do NOW.

Sebastian Lang · May 4, 2026 · 11 min read

Key numbers at a glance

  • Deadline 2 August 2026: authority enforcement of the AI literacy mandate per AI Act Article 4. The mandate itself has been in effect since 2.2.2025, but sanctions only start from August 2026.
  • 3 training tiers: basic (2-4 hours) for employees with occasional AI contact (e.g. ChatGPT use), in-depth (8-16 hours) for regular AI users, specialist (40-plus hours) for AI responsibles and high-risk applications.
  • QCG funding: up to 100 percent coverage of training costs plus 75 percent salary coverage during training for SMEs. Expanded in 2026. Almost nobody knows.
  • 76 percent of DACH SME employees are untrained or insufficiently trained for AI, according to 2026 industry surveys. Most executives do not know that enforcement starts in August 2026.
  • Sanction spectrum: Article 4 itself carries no direct fine schedule in the AI Act — Art. 99(7) delegates sanctions to member states. In Germany, no specific Art. 4 fine catalogue has been published as of May 2026. The likely 2026/27 trajectory: administrative proceedings with warnings and remediation deadlines first, then fines, probably anchored to the general Art. 99(3) ceiling (up to 15 million EUR or 3 percent of group turnover; for SMEs, the lower of the two applies).

If you are a managing director, CEO or HR lead at a DACH mid-market company in 2026 and would answer "do we have the AI literacy mandate under control?" with "I'm not exactly sure," this post is for you. It delivers the compliance checklist that must be ticked off by 2 August 2026, the funding levers almost nobody uses, and the three typical mistakes in the DACH mid-market.

The central thesis: in 2026, the AI literacy mandate under AI Act Article 4 is no longer a bureaucracy exercise but a compliance deadline that touches managing-director responsibility. Authority enforcement begins on 2 August 2026. Whoever ignores it risks fines and personal liability. Whoever approaches it cleverly combines compliance with talent buildup and uses QCG funding for up to 100 percent cost coverage.

This post delivers the 5-step checklist that you can work through in 4 to 8 weeks.

Who this post is for and who it is not

This post is for managing directors, CEOs, HR leads and compliance responsibles in the DACH mid-market (30 to 500 FTE) who have so far either ignored the AI literacy mandate or filed it under "we'll do it next year." The deadline of 2 August 2026 is close enough that action is needed now, yet far enough away that structured implementation is possible.

Not a fit for companies that already have a dedicated compliance setup for the AI Act. For those, our EU AI Act 90-day plan is the better entry point; it goes beyond Article 4 and covers Annex III high-risk classification.

What the AI literacy mandate concretely requires

Legal basis: Article 4 of the EU AI Regulation (AI Act), in effect since 2 February 2025. Authority enforcement begins on 2 August 2026 (the deadline for general sanctions; high-risk systems follow a separate timeline).

Who is affected: all "providers and deployers" of AI systems — that includes mid-market companies using ChatGPT, Microsoft Copilot, industry-specific AI tools or self-developed AI agents. Companies that deploy no AI system at all are not affected, but that is the exception in 2026.

What must be done: employees with AI contact must have "sufficient AI literacy." The regulation defines no concrete hour count but requires "appropriate" training with regard to "the type of systems, the deployment context and the persons concerned." In practice, a three-tier model has become established.

Documentation obligation: trainings must be documented — which employees were trained, when, with which content, and with which test result. Without documentation, in case of doubt the training counts as not having taken place.

Sanction spectrum: Article 4 itself carries no direct fine schedule in the AI Act — Art. 99(7) delegates sanctions to member states. In Germany, no specific Art. 4 fine catalogue has been published as of May 2026; authorities will first run administrative proceedings with warnings and remediation deadlines. Realistic worst case for 2027/28 (mirroring the GDPR enforcement curve): fines in the low six-figure range, with repeat offenses anchored to the general Art. 99(3) ceiling (up to 15 million EUR or 3 percent of group turnover; for SMEs, the lower of the two applies). Personal managing-director liability is possible depending on severity.

The 3 training tiers from practice

Drawn from Sentient's 2026 engagement practice and aligned with Mittelstand Digital Centres, Bitkom Academy and legal advisories (Kupka & Stillfried, Kliemt). Three tiers, staggered by intensity of AI contact.

Tier 1: basic training (2-4 hours) for occasional AI users

Who: employees who occasionally use AI tools like ChatGPT, Microsoft Copilot, Google Workspace AI for work tasks but are not dedicated AI responsibles. Typically 60-80 percent of staff in DACH mid-market 2026.

Contents:

  • What is AI and how do LLMs work (1 hour)
  • Data protection and confidentiality with AI use (which data must NOT be input) (30 minutes)
  • Hallucination detection and critical questioning of AI output (1 hour)
  • Company policies on AI use (which tools allowed, which forbidden) (30 minutes)
  • Optional: hands-on exercise with allowed tools (1 hour)

Format: online modules plus one live workshop. Recognized providers include Bitkom Academy, Mittelstand Digital Centres, IHK trainings, and external providers with documented curriculum validation.

Costs: 50 to 200 EUR per employee for external training; roughly 30 to 80 EUR per employee for internal training with external speakers. With QCG funding, practically zero net.

Tier 2: in-depth training (8-16 hours) for regular AI users

Who: employees who deploy AI tools daily for core work tasks — sales staff with AI-CRM integration, marketing managers with AI content tools, engineers with coding agents, customer service agents with AI support. Typically 15-25 percent of staff.

Contents (in addition to tier 1):

  • Prompt engineering basics and patterns (3 hours)
  • Tool-specific deepening (e.g. Microsoft Copilot power user, ChatGPT Enterprise skills) (3 hours)
  • Output quality assessment and feedback loops (2 hours)
  • Workflow integration and KPI measurement (2 hours)
  • Hands-on practice with use case work (4 hours)

Format: several workshop days plus self-learning modules plus practice tasks. Recognized: Bitkom Academy certificates, specialised providers, internal programmes with documented curriculum.

Costs: 500 to 2,000 EUR per employee. With QCG funding roughly 0 to 500 EUR net.

Tier 3: specialist training (40-plus hours) for AI responsibles

Who: AI champions, AI engineers, data engineers with AI components, AI governance responsibles, IT security with AI responsibility, compliance officers with AI responsibility. Typically 2-5 percent of staff.

Contents (in addition to tier 2):

  • Skill library architecture and CLAUDE.md conventions (8 hours)
  • Permissions setup and audit trail (4 hours)
  • KPI framework with DORA metrics (8 hours)
  • AI Act compliance in detail (Annex III, high-risk classification, data protection impact assessment) (8 hours)
  • Drift detection and output sampling (4 hours)
  • Multi-provider setup and vendor diversity (4 hours)
  • Hands-on project with productive use case (4-plus hours)

Format: multi-week specialist programmes, often with external implementation practice. Recognized: university programmes (TU Munich, Karlsruhe Institute of Technology), specialised boutique providers (Sentient, other AI engineering specialists), internal programmes with engineering mentoring.

Costs: 2,000 to 8,000 EUR per employee. With QCG funding roughly 0 to 4,000 EUR net.

QCG funding: the underestimated lever

What it is: the Qualifizierungschancengesetz (QCG) is a funding instrument of the German Federal Employment Agency. It covers training costs and, in part, salary costs during training. Explicitly applicable to AI training since 2024, expanded in 2026.

Funding level:

  • SMEs with under 10 employees: up to 100 percent of course costs plus up to 75 percent salary coverage during training time
  • SMEs 10-249 employees: up to 50 percent course costs plus up to 50 percent salary coverage
  • Larger companies 250-2,499 employees: up to 25 percent course costs plus 25 percent salary coverage
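The funding tiers above reduce to a simple lookup. A minimal sketch, assuming the staffing thresholds listed above; the function names and the flat maximum rates are illustrative, and actual approval is decided case by case by the Federal Employment Agency:

```python
def qcg_rates(employees: int) -> tuple[float, float]:
    """Return (max course-cost coverage, max salary coverage)
    per the funding tiers listed above. Illustrative only."""
    if employees < 10:
        return 1.00, 0.75
    if employees < 250:
        return 0.50, 0.50
    if employees < 2500:
        return 0.25, 0.25
    return 0.0, 0.0  # companies of 2,500+ are outside the tiers listed above

def net_course_cost(gross_eur: float, employees: int) -> float:
    """Best-case net course cost after maximum QCG course-cost coverage."""
    course_rate, _ = qcg_rates(employees)
    return gross_eur * (1 - course_rate)

# Example: a 100-employee company training at 10,000 EUR gross
print(net_course_cost(10_000, 100))  # best case: 5000.0 EUR net
```

These are the maximum rates; the approved share can be lower, so treat the result as a best-case floor, not a budget guarantee.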

Prerequisites: a fundable training (list of certified providers via the Federal Employment Agency), application BEFORE the training starts, and at least 4 hours of training scope. The application can be made by management or via an external funding advisor.

Why almost nobody uses this: the instrument has existed since 2019, but most mid-market executives do not know it. AI-specific funding has been explicitly documented only since 2024. In Sentient advisory engagements we see that roughly 70 percent of mid-market companies neither know nor use the funding.

Immediate step: Mittelstand Digital Centres advise free of charge on funding options. A call to your regionally responsible centre (list via www.mittelstand-digital.de) is the simplest entry, costs nothing, takes 30 minutes.

60-minute sparring on your AI literacy strategy →

The 5-step checklist for executives

Step 1 (1 week): shadow-IT inventory and employee classification. Send a survey to all employees: "which AI tools do you currently use for work tasks?" Classify employees into the three tiers: occasional users (tier 1), regular users (tier 2), AI responsibles (tier 3). Output: a list with the employee count per tier.
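The classification in step 1 can be sketched as a simple decision rule over the survey answers. The survey field names and the thresholds are assumptions for illustration, not part of Article 4:

```python
# Illustrative tally of survey answers into the three training tiers.
from collections import Counter

def classify(uses_ai: bool, daily_use: bool, ai_responsible: bool) -> str:
    """Map one employee's survey answers to a training tier."""
    if ai_responsible:
        return "tier 3 (specialist)"
    if daily_use:
        return "tier 2 (in-depth)"
    if uses_ai:
        return "tier 1 (basic)"
    return "no AI contact"

# Hypothetical survey responses, one dict per employee
survey = [
    {"uses_ai": True,  "daily_use": False, "ai_responsible": False},
    {"uses_ai": True,  "daily_use": True,  "ai_responsible": False},
    {"uses_ai": True,  "daily_use": True,  "ai_responsible": True},
    {"uses_ai": False, "daily_use": False, "ai_responsible": False},
]
counts = Counter(classify(**row) for row in survey)
print(counts)  # headcount per tier: the output of step 1
```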

Step 2 (1 week): funding advisory at your Mittelstand Digital Centre. A free phone call to your regional centre (list via mittelstand-digital.de). Clarify: which trainings are fundable, how the QCG application works, and which regional providers are recommended. Output: a funding strategy with expected cost coverage.

Step 3 (2 weeks): select training providers and submit the application. Select training providers per tier (Bitkom Academy, IHK, specialised providers) and submit the QCG application BEFORE the training starts. Output: a training plan with dates, providers, and funding approval.

Step 4 (4-8 weeks): conduct trainings and document. Run the trainings for all three tiers. Document which employee completed which training, when, and with which test result. Test results need not be 100 percent, but a documented attempt and a minimum level (typically 70 percent) are required. Output: training documentation for the compliance file.

Step 5 (ongoing): maintain the compliance file and refresh regularly. Keep the training documentation in a compliance file (digital or physical), accessible to authorities on request. Refresh annually for tier 1, semi-annually for tier 2, continuously for tier 3. Output: an audit-proof compliance file.

The typical three mistakes in mid-market 2026

Mistake 1: ignoring the training obligation because "we only use ChatGPT." That is exactly the use case that falls under Article 4. When employees use ChatGPT for work tasks, the company is a "deployer of an AI system" in the sense of the AI Act. Ignoring this works until 2 August 2026; after that, the enforcement risk begins.

Mistake 2: counting a 1-hour generic AI awareness training as sufficient. "We sent a 1-hour online slide deck to everyone" does not satisfy Article 4. The regulation requires "appropriate" training, and 1 hour for regular AI users is not appropriate. The minimum is tier 1 (2-4 hours) for occasional users and tier 2 (8-16 hours) for regular users.

Mistake 3: not using QCG funding. Whoever trains without QCG pays 100 percent; whoever trains with QCG pays 0-50 percent, depending on company size. For a 100-employee company with tier 1 training for 80 employees and tier 2 for 15, that is roughly 30,000 to 60,000 EUR in funding volume left unclaimed. More in the talent crisis post.

What concretely happens from 2 August 2026?

Phase 1 (August-December 2026): authority attention and first inquiries. The responsible authorities in Germany (Bundesnetzagentur as coordination body, plus sector-specific authorities) will make first sample inquiries. The goal is sensitisation, not a wave of fines. Whoever can present training documentation on inquiry is in the green zone.

Phase 2 (2027): first warnings and remediation deadlines. Companies without documented training receive warnings with remediation deadlines (typically 6 months). Whoever does not remediate within the deadline risks the first fines.

Phase 3 (2027-2028): first significant fines. For repeated violations or deliberate ignoring, fines in the low six-figure range are expected. The AI Act sets no direct upper limit for Art. 4 (see Art. 99(7), delegation to member states); concrete ceilings will likely anchor on the general Art. 99(3) frame — up to 15 million EUR or 3 percent of group turnover, with the lower of the two applying to SMEs.

Strategic recommendation: whoever has worked through the 5-step checklist by Q3 2026 is on the safe side. Whoever has not made it by Q1 2027 risks phase 2 warnings.

EU AI Act 90-day plan: complete compliance roadmap →

Frequently asked questions

We have under 10 employees, do we need this? Yes, as soon as one of your employees uses ChatGPT, Copilot or another AI system for work tasks. The obligation is not coupled to company size but to use. The upside: you can get up to 100 percent QCG funding, so the training becomes practically free.

What if our employees use AI privately but not professionally? Then they are not affected. In practice, however, the separation is difficult: whoever uses ChatGPT on a private account to draft an answer to a customer is acting professionally. It is safer to give all employees with potential AI contact a tier 1 basic training — compliance safety plus QCG funding.

What does a complete compliance setup cost for a 100-employee company? Tier 1 for 80 employees: 4,000 to 16,000 EUR gross, with QCG funding 2,000 to 8,000 EUR net. Tier 2 for 15 employees: 7,500 to 30,000 EUR gross, with QCG 3,750 to 15,000 EUR net. Tier 3 for 5 employees: 10,000 to 40,000 EUR gross, with QCG 5,000 to 20,000 EUR net. Total gross: 21,500 to 86,000 EUR. With QCG: 10,750 to 43,000 EUR. Plus 5,000 to 15,000 EUR internal setup (HR, documentation, application).
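The FAQ arithmetic above can be recomputed directly. A short check, assuming the per-employee cost ranges from the tier sections and the 50 percent QCG rate for 10-249 employees from the funding table (note that the per-employee net figures quoted in the tier sections assume higher coverage rates for smaller companies):

```python
# Recompute the compliance-setup cost for a 100-employee company.
tiers = {  # tier: (headcount, min EUR/head, max EUR/head)
    1: (80, 50, 200),
    2: (15, 500, 2_000),
    3: (5, 2_000, 8_000),
}
gross_min = sum(n * lo for n, lo, hi in tiers.values())
gross_max = sum(n * hi for n, lo, hi in tiers.values())
print(gross_min, gross_max)              # 21500 86000 EUR gross
print(gross_min * 0.5, gross_max * 0.5)  # 10750.0 43000.0 EUR net after 50% QCG
```

The internal setup effort (HR coordination, documentation, application) comes on top and is not fundable via QCG.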

What if we have not officially rolled out ANY AI tools so far? Then your compliance risk is lower but your shadow-IT risk is high (employees use tools via private accounts, untrained, with data protection risks). Immediate step: a shadow-IT inventory, then targeted tool selection with simultaneous training. More in the AI maturity check.

Who in management is personally liable? For compliance violations, primarily the managing director or board member who holds the responsibility (in Germany typically the CEO or CTO, depending on the allocation of duties). In cases of deliberate ignoring, or documented knowledge without action, personal liability can apply depending on severity and damage. Realistic in 2026/27: fines go primarily to companies; personal liability applies only to gross violations.

How fast can we implement this? 4-8 weeks is realistic if a management mandate is in place and the HR lead or an external funding advisor handles the QCG application. Steps 1-2 (inventory and funding advisory) take 2 weeks, step 3 (provider selection and application) 2 weeks, and step 4 (trainings) 4 weeks if parallelized. Total: 4-8 weeks net.

AI talent crisis: how 200-FTE beat Tesla, Google and Berlin startups →

About the author

Sebastian Lang is co-founder of Sentient Dynamics and leads the Agentic University programme. Before Sentient he was responsible for AI workforce programmes at SAP's Strategy Practice with 15+ years of engineering leadership experience. Sentient Dynamics works on a success-based compensation model and is deployed across the SHD and Bregal portfolios.

Subscribe to the newsletter | Sebastian on LinkedIn

