
The Workforce Pyramid: 8%, 21%, 43%, where your Mittelstand sits in the AI training pyramid (2026)

Bitkom 2026: 8% train everyone, 21% train a majority, 43% train nobody. The 5-step pyramid and why steps 1+2 are not defensible after August 2026.

Sebastian Lang · May 10, 2026 · 11 min read

8 percent of German companies with 20 or more employees train their entire workforce on AI. 21 percent train a majority. 43 percent offer no AI training at all. Those numbers come from the Bitkom AI Study 2026 (data collected end of 2025, published February 2026, n=604 companies with 20+ employees). Where does your Mittelstand sit in this pyramid, and which step is still acceptable in 2026?

We see the same reflex almost every week in our DACH workshops: "We are already doing something with AI, a few people have Copilot." That is step 2 in the Bitkom logic, and it is no longer enough. The EU AI Act has, since 02 February 2026, required every company that deploys AI to ensure demonstrable AI competence among its employees (AI Act Article 4). The sanctions regime kicks in on 02 August 2026. If you are sitting at step 1 or step 2 and AI is already in production use, you have an open compliance exposure that cannot be defended with "training is planned".

The 5 steps of the workforce pyramid

The table gives you the Bitkom answer categories 1:1, plus the operational reading we have derived from workshops with DACH Mittelstand companies.

| Step | Bitkom answer category | Share | What this means operationally |
|---|---|---|---|
| 5 | Training for all employees | 8% | Full workforce, structured curriculum |
| 4 | Training for a majority | 21% | Majority, typically 50-80% of workforce |
| 3 | Training for selected employees | 25% | Pilot teams, champions, specific departments |
| 2 | No training yet, but planned | 4% | Decided, no rollout date |
| 1 | No training, no offering | 43% | No strategy, no bookings |

The remaining 3 percent are "don't know / no answer" in the Bitkom survey; rounding is why the table sums to 101 percent. Important for an honest read: "company enables AI training" (the press headline "every second company enables AI training") aggregates steps 5+4+3, i.e. 54 percent. "Actually trains the majority of the workforce" covers only steps 5+4, i.e. 29 percent. That is a meaningful difference that often gets compressed in secondary coverage.
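The aggregation logic is simple enough to write down explicitly. A minimal sketch (shares per the Bitkom table; the two groupings are our reading of the headline figures, not Bitkom category names):

```python
# Bitkom 2026 shares per pyramid step (percent of companies, 20+ employees).
# A further 3% answered "don't know / no answer".
shares = {5: 8, 4: 21, 3: 25, 2: 4, 1: 43}

# "Enables AI training" per the press headline: any offering at all (steps 5+4+3)
enables_training = shares[5] + shares[4] + shares[3]

# "Actually trains the majority of the workforce": steps 5+4 only
trains_majority = shares[5] + shares[4]

print(enables_training)      # 54
print(trains_majority)       # 29
print(sum(shares.values()))  # 101 -- past 100 due to rounding
```

The gap between 54 and 29 is exactly the difference between "offers something" and "trains most people", which is the distinction secondary coverage tends to flatten.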

[Figure: Workforce AI training pyramid per Bitkom 2026 — 8 percent train all, 21 percent train a majority, 25 percent train selected employees, 4 percent have training planned, 43 percent offer no training]

Why steps 1 and 2 are not defensible after August 2026

The AI Act has been in force since 01 August 2024, but its provisions phase in over time. Article 4 (AI competence of employees) became effective on 02 February 2026, and the national sanctions mechanism activates on 02 August 2026. If your company deploys or operates AI systems (and that includes Microsoft Copilot, ChatGPT Enterprise, Salesforce Einstein, any internal RAG tool), you are required to ensure "a sufficient level of AI literacy" across the employees who interact with those systems.

What does that mean operationally? There is no hard certification requirement, but you must be able to demonstrate that your workforce understands the AI systems they use, can assess the risks and can handle them responsibly. That is an evidence question, not a slideware question. If you are at step 1 (43 percent) while simultaneously rolling out Copilot, you have an open exposure. Step 2 (4 percent, planned) is almost equally weak in legal terms: "we intend to" does not constitute proof of competence if the usage is already live.

We have unpacked the sanctions regime in a separate piece, because several consultancies are currently quoting the wrong fines (35M EUR / 7 percent applies only to Article 5 prohibited practices; HR applications sit at 15M / 3 percent, and false information to authorities at 7.5M / 1 percent). If you need the compliance mechanics in detail, read the AI literacy mandate August 2026 executive checklist. For this piece, the takeaway is: steps 1 and 2 are not defensible after 02 August 2026.

What step 5 does differently: the 8-week AI champions logic

The 8 percent at step 5 have a structural advantage that most Mittelstand companies underestimate: they run a coherent curriculum, not just a pilot team. From our workshops we see three recurring patterns in these companies.

First, they separate role-specific modules: one module for executives and top management (what you do not need to do yourself, what you do need to be able to evaluate), one module for business departments (sales, marketing, finance, HR, each with its own use cases), and one module for IT and engineering (prompting, RAG fundamentals, tool selection, security). Generic "AI for everyone" webinars fail because the CEO does not need the same content as the SAP administrator.

Second, they anchor on champions. A 50-person company typically needs 4-6 internal AI champions who act as the first point of contact for their area. These are not full-time roles, they are 10-20 percent time budgets carved out of existing staff. The champions go through an intensive 8-week track (we call it the champions wave at SHD and Tesy), and they distribute the knowledge into their teams. This sidesteps the classic training-scalability problem.

Third, they measure. Step-5 companies track tool adoption, prompt quality and hours saved per use case. A company without metrics has not produced a training effect, it has produced a training list.

The most important takeaway for Mittelstand leaders: step 5 is not more expensive than step 3, it is more structured. We have seen 8-to-12-week champions programs land between EUR 25,000 and EUR 60,000 for Mittelstand companies with up to 250 employees. In the AI Act compliance context, that is a very cheap insurance policy.

Mittelstand reality: why 72% are still below step 4

8 percent at step 5 plus 21 percent at step 4 add up to 29 percent. Inverted: 72 percent of German companies with 20 or more employees either train only selected staff (25 percent), have vague plans (4 percent) or do nothing at all (43 percent); a further 3 percent do not know. Among Mittelstand companies under 250 employees, the distribution skews even harder, because most step-5 cases sit in larger organisations.

From roughly 40 workshop days we ran in 2025 and 2026 with DACH Mittelstand clients, we see three recurring reasons why companies get stuck at step 2 or step 3. First, no owner: HR says "this is really IT", IT says "this is really HR", and both wait for the other. Second, no curriculum logic: trainings get booked ad hoc when a vendor calls, instead of planned structurally. Third, no budget anchor: without a number in the annual budget, every training booking turns into an approval battle.

None of these factors is expensive to fix, but they all require a CEO-level decision. As long as "we will get to it soon" remains the answer, nothing moves. We also see a fourth reason that gets said out loud less often: leadership teams systematically underestimate how far their workforce has already moved with privately used AI tools. If you do not know where your employees stand, you cannot build a curriculum that fits. That is the direct bridge to the McKinsey gap, which we cover further down.

On size structure: within the Bitkom sample, the share of step-4-and-5 cases rises with company size. Below 100 employees, the share of "trains all or a majority" is lower than the overall average of 29 percent; at 500+ employees it sits clearly above it. Practically: if you are an 80-employee Mittelständler at step 3, you are already above average for your peer group, but above average against a weak average is not a compliance argument.

What Sentient recommends: the 30-60-90 training plan

For Mittelstand clients sitting at step 1, 2 or 3 today, we use a pragmatic 30-60-90-day plan that shifts the pyramid by one or two steps without blocking operations.

Day 0 to 30, diagnosis and owner. In the first 30 days you decide three things: who owns this (ideally HR lead with IT alignment, not the reverse), which AI tools are already in use (officially and unofficially, that is the shadow-AI question), and which two role clusters you train first. We recommend executives plus one business department with a clear use case, typically sales or finance.

Day 31 to 60, champions and pilot curriculum. In the next 30 days you identify 4-6 champions (depending on workforce size), book an external vendor for the champions track and define the curriculum for the broad rollout. Important: the champions deliver, they are not just participants. By the end of the track they produce the training content for their own teams (with your vendor coaching them).

Day 61 to 90, breadth rollout in the first wave. In the final 30 days you roll out the training to the first business department broadly. Realistic batch size is 20-50 employees per wave, with clearly defined module length (90-180 minutes per session, no more) and a per-participant success metric (typically: three use cases automated and documented in their own daily work).

After 90 days you have step 3 stable and you are on the path to step 4. Step 5 is a 9-to-12-month journey, not a 90-day journey. Anyone promising step 5 in 90 days is selling you a webinar.

Bridge: why this plan fails without an employee reality check

Before you start the 30-60-90 plan, you need an honest answer to a second question: what are your employees actually doing with AI today, beyond what you officially know? McKinsey measured in 2025 that C-suite leaders underestimate the AI usage of their employees by a factor of 3, which is the McKinsey gap. We have unpacked this with Bitkom data in a separate piece.

Why is this critical for your training plan? Because a training that introduces tools your employees have been using privately for 18 months loses them. And because a training that addresses risks already real in your company (data protection violations, IP leaks, prompt injection attempts) creates traction that generic "what is ChatGPT" slideware never produces. Read the McKinsey gap before you lock in your curriculum. The two pieces are designed as a pair.

The 3 typical training mistakes in the DACH Mittelstand

Mistake one, the one-shot webinar. A 2-hour webinar for every employee, from CFO to apprentice, delivered by a vendor, with the same slides. Nothing happens afterwards, because the content was too broad to be concrete and too thin to change behaviour. This is the most common training format at step 3, and in the Bitkom survey it still counts as "trained". In an AI Act review it does not hold up.

Mistake two, the tool without a curriculum. Copilot gets rolled out, a license for everyone, an internal wiki with links to Microsoft docs, done. The usage rate is below 15 percent after six months, and the champions are the people who were already playing with ChatGPT on their own. You have tool adoption without training effect.

Mistake three, champion self-education. "We have three highly motivated people who are figuring this out themselves." It works for a quarter and then stalls, because internal champions without external coaching plateau on what they already know. They do not know what they do not know, and they do not have time to chase new tool releases on top of their day job.

FAQ

Are the 5/21/49 numbers in the Bitkom press release the same as the 8/21/25/4/43 here?

No. The 5 percent figure in the Bitkom press release refers to a different question (companies actively recruiting AI-skilled specialists), not to the training question. The 49 percent figure in some secondary reports is a rounded aggregate of "any form of training offered" (steps 5+4+3 = 54 percent, we use the exact 54 percent from the original study). The accurate pyramid is 8/21/25/4/43, all numbers from Bitkom Study 2026, Figure 27.

Which step is the legal minimum after August 2026?

The AI Act Article 4 does not specify a quantitative minimum, it sets the qualitative bar at "sufficient level of AI literacy". In practical interpretation that means: every person who uses AI in their work must be competent. A company that rolls out ChatGPT Enterprise to 200 employees and only trains 20 of them has a legal problem.

Does the AI Act apply to small Mittelstand companies under 50 employees?

Yes. The AI Act applies to all companies that deploy or operate AI systems, regardless of headcount. There are relaxed obligations for SMEs in some areas (sandbox access, documentation effort), but Article 4 AI literacy applies to everyone.

How do we measure whether our training reaches the Bitkom "trains a majority" standard?

In our audits we use three indicators: reach (share of workforce that has completed a modular training with a test, target above 50% for step 4), recency (refresh cycle under 12 months) and application (documented use cases per participant). Bitkom does not measure at that granularity, but the operational definition aligns.
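The three indicators above can be turned into a concrete check. A minimal sketch, assuming the thresholds stated in the answer (the class, field names and sample values are our illustration, not a Bitkom or statutory definition):

```python
from dataclasses import dataclass


@dataclass
class TrainingAudit:
    workforce: int             # total headcount
    trained_with_test: int     # completed a modular training incl. test
    months_since_refresh: int  # time since last refresh cycle
    use_cases_per_head: float  # documented use cases per participant

    def meets_step4(self) -> bool:
        """Illustrative step-4 check: reach above 50 percent, refresh cycle
        under 12 months, at least one documented use case per participant."""
        reach_ok = self.trained_with_test / self.workforce > 0.5
        recency_ok = self.months_since_refresh < 12
        application_ok = self.use_cases_per_head >= 1.0
        return reach_ok and recency_ok and application_ok


# Example: 80-person company, 48 trained, refreshed 9 months ago
audit = TrainingAudit(workforce=80, trained_with_test=48,
                      months_since_refresh=9, use_cases_per_head=1.4)
print(audit.meets_step4())  # True (48/80 = 60 percent reach)
```

The point of the sketch is the AND-condition: a company that scores high on reach but has no documented application evidence still fails the check, which mirrors the evidence logic of Article 4.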

Sources and next step

We run a workforce pyramid audit with your HR lead and executive team. One day in which we lock down where you sit on the 5-step scale, against the Bitkom benchmark, with a concrete 90-day upgrade plan as the deliverable. If you are at step 1, 2 or 3 today and AI tools are already in production, that is the risk conversation that matters after August 2026. Book a slot.

Sources:

  • Bitkom Study "Artificial Intelligence in Germany 2026" (data collected end of 2025, published February 2026, n=604 companies with 20+ employees). Figure 27, p. 39, question "Do you train your employees in the use of AI?".
  • Bitkom press release "Every second company enables AI training", February 2026.
  • Regulation (EU) 2024/1689, Article 4 (AI literacy), effective 02.02.2026, sanctions from 02.08.2026.


About the author

Sebastian Lang

Co-Founder · Business & Content Lead

Co-Founder of Sentient Dynamics. 15+ years of business strategy (incl. SAP), MBA. Writes about AI Act compliance, ROI measurement, and how Mittelstand CTOs actually adopt agentic AI.

