Where Do I Stand? The 15-Minute AI Maturity Check for DACH Mid-Market

Only 28% of enterprises have mature agent capabilities; in the DACH mid-market it is roughly half that. A five-stage framework with concrete questions per stage and a traffic-light evaluation, honestly self-assessed in 15 minutes.

Sebastian Lang · May 4, 2026 · 10 min read

Key numbers at a glance

  • 5 stages of AI maturity according to Janea Systems / Marketresearch 2026: exploring, embedded, optimized, differentiating, transforming. 25 percent of enterprises sit in stage 1 (exploring).
  • 28 percent of enterprises have mature agent capabilities according to Deloitte State of AI 2026. In DACH mid-market it is roughly 14 percent — half.
  • 47 percent operating margin spread between stages 4-5 and stages 1-2. Companies that stay in stage 1-2 lose roughly 15 percentage points of competitive position per year. More in the AI lead post.
  • 3 dimensions of maturity evaluation: data, technology, organisation. Plus two cross-cutting dimensions: talent and governance. An honest evaluation takes 15 minutes as a self-test, or 3 hours at workshop depth.
  • 76 percent of SMEs struggle with insufficient data quality according to 2026 industry surveys. The data dimension is usually an organisation's lowest-scoring stage.

If you are a managing director, CIO or Head of Operations in the DACH mid-market in 2026 and want an honest answer to "where do we stand on AI?", you need a framework. "We already do something with ChatGPT" is not an answer. This post delivers the five-stage framework we use in 2026 Sentient engagements as a self-test and workshop foundation, with concrete questions per stage and a traffic-light evaluation.

The central thesis: AI maturity can be measured, and it correlates directly with operating margin performance. Stage 4-5 companies achieve 47 percent higher margins than stage 1-2 companies (see the AI lead post). If you do not know which stage you are in, you cannot invest with focus in 2026.

In 15 minutes of honest self-evaluation you reach a robust result. This post is the guide.

Who this post is for and who it is not

This post is for managing directors, CIOs, CTOs and Heads of Operations in the DACH mid-market (30 to 500 FTE) who want to define an AI strategy for 2026 or 2027 or evaluate an existing programme. Concretely: either you have not yet started an AI initiative and want to know where you stand, or you have initiatives running and want to know whether you are above or below the median.

Not a fit for companies with a dedicated AI engineering team and productive use cases at large scale. For those, a specialised maturity evaluation (e.g. the Sentient Quarterly Maturity Review) is more useful than a self-test.

The five-stage framework: where do you stand?

Derived from 2026 Sentient engagements and aligned with the established frameworks of Mittelstand Digital (KIRC), q.beyond, Fraunhofer IIS, Janea Systems and Sema4.ai. Below: the five stages, each with a definition and typical recognition markers.

Stage 1: exploring

Definition: first pilots, ChatGPT use in browser by individual employees, no strategy, no governance. Roughly 25 percent of DACH mid-market.

Recognition markers:

  • Employees use ChatGPT/Copilot in the browser without IT or management systematically knowing about it
  • No documented AI strategy, no AI champion with mandate
  • No KPI measurement of AI impact, no pre-post comparison
  • No permissions architecture, no audit trail for AI actions
  • No AI literacy training for employees (critical from August 2026, see the AI literacy mandate post)

Operating margin position: median industry level or below. No measurable AI impact.

Action recommendation: shadow IT inventory, identify AI champion, prioritize first use case (see our 90-day use case matrix).

Stage 2: embedded

Definition: 1 to 3 productive AI tools (typically Microsoft Copilot, ChatGPT Enterprise, a first specialised agent), first trainings completed, basic governance documented. Roughly 30 percent of DACH mid-market.

Recognition markers:

  • 1 to 3 AI tools run productively under licence; employee training has been completed
  • First AI strategy documentation exists (5-to-15-page charter)
  • AI champion is named but has no dedicated budget
  • First KPIs are measured but not systematically (typically inline acceptance rate, lines of code — vanity metrics)
  • Permissions setup exists for the deployed tools but no central overview

Operating margin position: plus 5 to 15 percent above stage 1 because first productivity effects become visible.

Action recommendation: start skill library buildup, establish KPI framework (see our KPI post), plan second and third use case.

Stage 3: optimized

Definition: 5 to 15 productive AI workflows, skill library with versioning and ownership structure, KPI dashboard with DORA metrics plus size class normalisation. Roughly 25 percent of DACH mid-market.

Recognition markers:

  • 5 to 15 productive workflows with measurable impact (cycle time per size unit pre vs post at least 1.5x; a worked example follows after this list)
  • Skill library with CLAUDE.md plus custom commands, versioned via git
  • DORA-based KPI dashboard, three to five metrics, pre-post comparison
  • Permissions architecture with least privilege and audit trail
  • AI champion plus 1-2 engineering FTEs dedicated for AI setup
  • AI Act AI literacy obligation fulfilled (trainings documented, compliance setup in place)
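
To make the 1.5x marker concrete, here is a small worked example of the pre/post cycle-time check. The numbers and the function name are illustrative, not a prescribed measurement tool; the "size unit" is whatever comparable work-item measure you already track.

```python
# Worked example of the "pre vs post at least 1.5x" marker. The numbers
# are made up; the size unit is whatever comparable measure you track.

def speedup(pre_days_per_unit: float, post_days_per_unit: float) -> float:
    """Ratio above 1 means cycle time improved after AI adoption."""
    return pre_days_per_unit / post_days_per_unit

ratio = speedup(pre_days_per_unit=10.0, post_days_per_unit=6.0)
print(f"{ratio:.2f}x speedup")             # 1.67x speedup
print("meets stage 3 bar:", ratio >= 1.5)  # True
```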

Operating margin position: plus 15 to 25 percent above stage 1. The point at which AI impact becomes visible in the EBIT report.

Action recommendation: scale to differentiating stage — integrate AI into core business, not just support functions.

Stage 4: differentiating

Definition: AI is integrated into at least one core business workflow that represents differentiation against competition. Roughly 15 percent of DACH mid-market.

Recognition markers:

  • At least 1 AI workflow in a core business area (customer service as USP, own product recommendation engine, own engineering pipeline optimisation with output advantage)
  • 20 to 50 productive workflows total, distributed across multiple areas
  • Multi-provider setup with fallback and adapter layer (see the sketch after this list)
  • Drift detection pipeline with output sampling and anomaly escalation
  • Quarterly review of skill library with refactoring backlog
  • 2-5 dedicated AI engineering FTEs
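
For the multi-provider marker, a minimal sketch of what a fallback behind an adapter layer can look like. The Provider class, the complete() interface and the vendor stand-ins are illustrative assumptions; a real setup would wrap the actual vendor SDKs behind the same adapter.

```python
# Minimal multi-provider fallback sketch. The Provider adapter and the
# two stand-in vendors are illustrative; real code would wrap the
# actual vendor SDKs behind the same complete() interface.

class ProviderError(Exception):
    """Raised when a provider cannot serve the request."""

class Provider:
    def __init__(self, name, call):
        self.name = name
        self._call = call  # function(prompt) -> str, wrapping a vendor SDK

    def complete(self, prompt: str) -> str:
        return self._call(prompt)

def complete_with_fallback(providers, prompt):
    """Try providers in priority order; raise only if all of them fail."""
    errors = []
    for provider in providers:
        try:
            return provider.complete(prompt)
        except ProviderError as exc:
            errors.append(f"{provider.name}: {exc}")
    raise ProviderError("all providers failed: " + "; ".join(errors))

def flaky_primary(prompt):    # stand-in for the preferred vendor
    raise ProviderError("quota exceeded")

def stable_fallback(prompt):  # stand-in for the second vendor
    return f"answer to: {prompt}"

providers = [Provider("vendor-a", flaky_primary),
             Provider("vendor-b", stable_fallback)]
print(complete_with_fallback(providers, "summarise ticket"))  # served by vendor-b
```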

Operating margin position: plus 30 to 40 percent above stage 1. Structural differentiation against competition becomes measurable.

Action recommendation: scale to transforming stage — integrate AI into strategy and business model innovation, not just workflow optimisation.

Stage 5: transforming

Definition: AI is part of strategic business model architecture, not just a tool. Roughly 5 percent of DACH mid-market — predominantly Hidden Champions in special niches that started early.

Recognition markers:

  • Business model components that would not work without AI (e.g. AI-driven product personalisation as USP, AI-based premium service tier)
  • AI strategy is part of business strategy, not IT strategy
  • 50-plus productive workflows, own skill library as IP asset
  • Continuous learning loops with customer feedback integration
  • Own AI research or co-creation with research institutes
  • AI-driven differentiation in recruiting (attracts top talent)

Operating margin position: plus 47 percent above stage 1 (Marketresearch 2026 median data). Structurally leading market position.

Action recommendation: maintain scaling, protect skill library as IP asset, cement knowledge lead via internal reskilling programmes.

60-minute sparring on your maturity evaluation →

The 15-minute self-test: 12 questions, three answer levels

Answer the 12 questions honestly. Each question scores 0, 1 or 2 points. Result: 0-8 = stage 1, 9-12 = stage 2, 13-17 = stage 3, 18-21 = stage 4, 22-24 = stage 5. (A scoring sketch in code follows after the evaluation list below.)

Block A: data and tech (4 questions)

  1. Which AI tools are officially rolled out at our company? 0 = none, 1 = one to three (e.g. Copilot), 2 = five-plus with skill library
  2. We measure AI impact via DORA metrics or cycle time per size unit. 0 = no, 1 = only inline acceptance / lines of code, 2 = yes with pre-post comparison
  3. We have a versioned skill library with ownership structure. 0 = no, 1 = informally in markdown files, 2 = yes, in git with PR review
  4. We have a multi-provider setup or a clear fallback plan. 0 = no, 1 = exists but not documented, 2 = yes, documented

Block B: organisation and talent (4 questions)

  1. We have a named AI champion with mandate and budget. 0 = no, 1 = named but no budget, 2 = yes with both
  2. Employees with AI contact are trained per AI Act (basic 2-4h, in-depth for responsible roles). 0 = no, 1 = partially, 2 = yes, documented
  3. We have at least 1 dedicated AI engineer with 12-plus-month availability. 0 = no, 1 = informally someone "who does AI too," 2 = yes
  4. Management makes strategic mandate decisions for AI (not just IT). 0 = no, 1 = ad-hoc, 2 = yes with quarterly review

Block C: impact and strategy (4 questions)

  1. We have at least 1 productive workflow with measurable 90-day impact (pre-post comparison). 0 = no, 1 = in pilot, 2 = yes, multiple
  2. AI is integrated into at least 1 core business workflow (not just support function). 0 = no, 1 = in plan, 2 = yes productive
  3. We have a documented AI strategy with 12-month roadmap. 0 = no, 1 = informal, 2 = yes, management-approved
  4. We use external implementation partners systematically (not ad-hoc). 0 = no, 1 = on demand, 2 = strategic partner with handover clause

Evaluation:

  • 0-8 points: stage 1 exploring — immediately: shadow IT inventory, name AI champion
  • 9-12 points: stage 2 embedded — immediately: establish KPI framework, skill library buildup
  • 13-17 points: stage 3 optimized — immediately: scale to core business use case
  • 18-21 points: stage 4 differentiating — immediately: strategy integration, business model innovation
  • 22-24 points: stage 5 transforming — immediately: IP protection and talent magnet strategy
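
For teams that want to run the test in a script or spreadsheet, a minimal scoring sketch in Python. The thresholds are exactly the ones above; the function name and the example answer profile are illustrative.

```python
# Minimal scoring sketch for the 12-question self-test.
# Thresholds match the evaluation above; names are illustrative.

STAGES = [
    (8, "stage 1: exploring"),
    (12, "stage 2: embedded"),
    (17, "stage 3: optimized"),
    (21, "stage 4: differentiating"),
    (24, "stage 5: transforming"),
]

def stage_for(answers: list[int]) -> str:
    """Map 12 answers (0, 1 or 2 points each) to a maturity stage."""
    assert len(answers) == 12 and all(a in (0, 1, 2) for a in answers)
    total = sum(answers)
    for upper, stage in STAGES:
        if total <= upper:
            return f"{stage} ({total} points)"
    raise AssertionError("unreachable: maximum total is 24")

# Example: a typical just-embedded profile (11 points)
print(stage_for([1, 1, 0, 0, 1, 1, 1, 1, 2, 1, 1, 1]))  # stage 2: embedded (11 points)
```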

The three typical self-deceptions in the maturity check

Deception 1: "we use ChatGPT, so stage 2." Reality: ChatGPT use without tool licences, without training, without KPI is stage 1 (exploring). Stage 2 requires official tool licences, documented training, basic governance. If you cannot answer "who has which permissions" in 5 minutes, you are in stage 1.

Deception 2: "we measure inline acceptance, so stage 3." Reality: inline acceptance, lines of code, story points are vanity metrics that do not correlate with cycle time (see METR study 2025/26). Stage 3 requires DORA metrics (Lead Time, Deployment Frequency, MTTR, Change Failure Rate) or cycle time per size unit, with pre-post comparison. More in the KPI framework post.

Deception 3: "we have one workflow productive, so stage 4." Reality: stage 4 requires core business integration, not support function optimisation. If your AI workflow is invoice intake or mail triage, you are in stage 3 (optimized). Stage 4 is customer service as USP, your own product recommendation engine, or your own engineering pipeline optimisation with a competitive advantage.

What comes after the self-test: three stage-specific recommendations

If you are in stage 1-2: focus on a first productive use case within 90 days. External support is very likely needed (only 1 in 10 succeeds without an external partner, per 2026 Sentient engagement data). Budget 30,000 to 80,000 EUR for the pilot, plus 90,000 to 200,000 EUR for scaling. More in the 90-day use case matrix.

If you are in stage 3: transition from optimisation to differentiation. Focus on 1 to 2 core business use cases with USP potential. Treat the skill library as an IP asset, not a tool collection. Bring in external support as sparring (1-2 days per quarter), not as an implementer. More in our make/buy/partner post.

If you are in stage 4-5: keep scaling, protect the skill library as an IP asset (versioning, backup, ownership structure), and build a talent magnet strategy (internal AI champion programme, a reskilling academy for employees, public speaking by the CEO/CTO on your own AI successes). Talent attraction becomes a strategic advantage from stage 4.

AI literacy mandate from 2 August 2026: what executives must complete NOW →

Frequently asked questions

How often should we do the maturity check? Quarterly as a self-check (15 minutes), annually at workshop depth (3 hours with the AI champion, management and the IT lead). Higher frequency does not pay off because stage shifts typically take at least 6 months.

What if our stages differ across areas? Very common. Engineering is often 1-2 stages ahead of operations or sales. Evaluate each functional area separately, then report the median stage as the org stage. Prioritise investment in the areas with the lowest stage and the highest business contribution.
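
A minimal sketch of that roll-up, assuming made-up area names and business-contribution weights; the median rule and the priority rule follow the description above.

```python
# Per-area stages rolled up to an org stage (median), plus investment
# priority: lowest stage first, highest business contribution as the
# tie-breaker. Areas and weights are made up.
from statistics import median_low

areas = {                 # area: (stage, business contribution 0..1)
    "engineering": (3, 0.3),
    "operations":  (1, 0.4),
    "sales":       (2, 0.3),
}

org_stage = median_low(stage for stage, _ in areas.values())
priority = sorted(areas, key=lambda a: (areas[a][0], -areas[a][1]))

print(f"org stage: {org_stage}")          # org stage: 2
print(f"invest first in: {priority[0]}")  # invest first in: operations
```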

What does an external maturity evaluation cost? Mittelstand Digital Centres offer the AI Readiness Check (KIRC) free of charge; that is the entry point. Deeper evaluations with strategy recommendations typically cost 8,000 to 25,000 EUR for a 3-4 week engagement. Sentient offers a 60-minute sparring session free of charge; that does not replace a deep evaluation but is enough for first hypotheses.

We are in stage 1 but want a fast jump to stage 4. Is that possible? From stage 1 to stage 4 in 12 months: in our 2026 engagements we have seen this in 1 of 12 cases, and only with a full management mandate, a 200,000+ EUR budget and an external senior partner. Realistic is stage 1 → stage 2 in 6 months, stage 2 → stage 3 in another 12 months, stage 3 → stage 4 in another 18 months.

What if we disagree on the stage in the management circle? Then the stage is typically 1 level lower than the optimistic view. The most common misjudgement is "we believe stage 3 but are stage 2." Honest test: write down three concrete proofs per stage level — if you cannot name three documented workflows with pre-post KPI for stage 3, you are in stage 2.

AI leaders vs laggards: the 47% margin gap →

About the author

Sebastian Lang is co-founder of Sentient Dynamics and leads the Agentic University programme. Before Sentient he was responsible for AI workforce programmes in SAP's Strategy Practice and brings 15+ years of engineering leadership experience. Sentient Dynamics works on a success-based compensation model and is deployed across the SHD and Bregal portfolios.

Subscribe to the newsletter | Sebastian on LinkedIn

