
EU AI Act August 2026: The 90-Day Compliance Plan That Holds Even If the Omnibus Slips

The trilogue on 28 April ended without agreement. Without the Omnibus, the full AI Act high-risk obligations apply on 2 August 2026. A 5-phase, 90-day plan for CTOs and engineering leads.

Sebastian Lang · May 1, 2026 · 10 min read

Key numbers at a glance

  • 2 August 2026 is the original deadline for AI Act high-risk obligations. Binding until proven otherwise.
  • Digital Omnibus on AI: EU Commission proposal from 19 November 2025 to defer the high-risk deadline to 2 December 2027.
  • Trilogue on 28 April 2026 between Parliament, Council and Commission ended without agreement. As of 1 May 2026 no second round scheduled.
  • 35m EUR or 7 percent of global group turnover: maximum fine for prohibited AI practices (Art. 99 AI Act).
  • 15m EUR or 3 percent maximum fine for high-risk violations.
  • According to Bitkom 2026, 53 percent of AI-using DACH companies fail because of missing team skills, not technology.

If you are a CTO or Head of Engineering at a DACH mid-market company in 2026 with coding agents in production, you are sitting on a compliance bet right now. One camp is betting that the Digital Omnibus will pass and that it has until 2 December 2027 to comply. The other camp is planning for 2 August 2026 as the binding deadline. If the Omnibus does not pass after all, only the second camp avoids finding itself in front of an external auditor in Q3 2026.

At Sentient Dynamics we see a clear pattern in our DACH mid-market engagements in 2026: engineering teams that start the 90-day plan now sail through August 2026 without compliance pressure. Teams that wait for the Omnibus run the risk of flipping into emergency-sprint mode in late July 2026, with neither audit trail nor high-risk classification cleanly documented. This post delivers the 90-day plan that holds regardless of the Omnibus outcome: in 5 phases, with clear weekly outputs, executable from day 1.

Where the Digital Omnibus stands today

The European Commission published the Digital Omnibus on AI on 19 November 2025 with the proposal to defer the high-risk deadline from 2 August 2026 to 2 December 2027. The trilogue between Parliament, Council and Commission ended on 28 April 2026 without agreement. As of 1 May 2026 no second round is scheduled.

Practically that means: if the Omnibus is not formally adopted before 2 August 2026, the original AI Act high-risk obligations apply on that date as written. The national market surveillance authorities in the 27 member states and the European AI Office are then enforcement-ready from day 1.

Three scenarios are on the table in 2026:

  1. Omnibus passes: deadline shifts to 2 December 2027. Engineering teams gain 16 months. Probability currently unclear after the failed trilogue.
  2. Omnibus does not pass: original deadline 2 August 2026 holds. High-risk obligations enforceable from day 1, fine exposure real.
  3. Omnibus passes partially: specific high-risk areas get exemptions, the rest takes effect on the deadline. A complexity trap for compliance teams because interpretation becomes industry- and domain-specific.

Our recommendation: plan for scenario 2. If scenario 1 or 3 materialises, you have 16 months of head start. If scenario 2 hits and you have not planned, then in July 2026 you run a compliance emergency sprint that produces flawed audit trails and panicked architecture decisions.

Who is affected: high-risk classification for coding agents

The most important question for engineering teams is: does our coding-agent setup fall under high-risk? The answer in 2026 is more nuanced than most teams assume.

Clearly high-risk under Annex III: coding agents in the Annex III domains employment, credit scoring, education, law enforcement, critical infrastructure. If your coding agent writes code for an HR selection system, a credit scoring model or a critical BSI-Grundschutz pipeline, then the agent itself is not high-risk, but the end product is, and that pulls audit-trail and documentation obligations back onto the development process.

Ambiguous zone: general-purpose coding agents without a clear industry binding. The European Commission published the classification guidelines in February 2026 that narrowed the interpretation room. Practical rule of thumb from our engagements: if the agent runs in a repo that later becomes an Annex III product, the entire development process counts as a high-risk precursor.

Clearly not high-risk: greenfield hobby projects, internal tooling scripts without external impact, demo code for training. Here the general transparency obligations apply (e.g. AI labelling of output to teammates), but no full high-risk stack.

Important to understand: agents intended for multiple purposes are high-risk by default unless the provider takes sufficient precautions. For DACH engineering teams that means: better one high-risk classification too many than one too few, because the burden of proof for multi-purpose agents lies with the user.

The 90-day plan in 5 phases

This is the plan we deploy in DACH engagements in 2026 when engineering teams come to us without a formal compliance setup. The phases are cut to run independently in case stakeholder or budget approvals slip. The output per phase is always a document plus a runnable component, never just a concept.

Phase 1, day 1 to 14: inventory. Which coding agents are in use today? Cursor, Copilot, Claude Code, Codex, in-house? Who has access, with which permissions? In which repos? Output: asset inventory with user, repo, tool and permission matrix. Tools for capture: centralised IDP audit log (Okta, Entra), GitHub org audit API, Anthropic admin console.
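The aggregation step of phase 1 can be sketched in a few lines. This is a minimal, hypothetical example: it assumes you have already normalised the raw exports from your IdP audit log or the GitHub org audit-log API into flat event dicts with `actor`, `repo`, `tool` and `permission` keys (those field names are ours, not any vendor's schema).

```python
from collections import defaultdict


def build_inventory(audit_events):
    """Aggregate normalised audit events into a user/repo/tool permission matrix.

    Each event is assumed to look like:
    {"actor": ..., "repo": ..., "tool": ..., "permission": ...}
    (a hypothetical shape, normalised from whatever your IdP or
    GitHub audit-log export actually emits).
    """
    matrix = defaultdict(set)
    for ev in audit_events:
        key = (ev["actor"], ev["repo"], ev["tool"])
        matrix[key].add(ev["permission"])
    # Flatten into sorted rows for the asset-inventory document.
    return [
        {"user": u, "repo": r, "tool": t, "permissions": sorted(perms)}
        for (u, r, t), perms in sorted(matrix.items())
    ]
```

The output rows map one-to-one onto the inventory document phase 1 calls for, so the "runnable component" and the deliverable stay in sync.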

Phase 2, day 15 to 30: classification by high-risk criteria. Per agent and per repo: does the agent run in an Annex III context or not? Who is provider, who is deployer in the AI Act sense? Output: high-risk classification matrix with reasoning per entry. For multi-purpose agents conservative high-risk default.
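The classification rule from phase 2, including the conservative multi-purpose default, can be encoded so every matrix entry carries its reasoning. A minimal sketch, assuming entries are dicts with optional `domains` and `multi_purpose` keys (our naming, not a formal schema):

```python
# The Annex III domains named in the article.
ANNEX_III_DOMAINS = {
    "employment",
    "credit-scoring",
    "education",
    "law-enforcement",
    "critical-infrastructure",
}


def classify(entry):
    """Return (classification, reasoning) for one agent/repo pair.

    Conservative default: multi-purpose agents without a documented
    single purpose are classified high-risk, per the rule above.
    """
    domains = set(entry.get("domains", []))
    hit = domains & ANNEX_III_DOMAINS
    if hit:
        return "high-risk", f"Annex III domain(s): {', '.join(sorted(hit))}"
    if entry.get("multi_purpose", False):
        return "high-risk", "multi-purpose agent, conservative default"
    return "not-high-risk", "no Annex III context; general transparency obligations only"
```

Because the reasoning string is produced alongside the label, the classification matrix deliverable gets its "reasoning per entry" column for free.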

Phase 3, day 31 to 60: audit-logging stack. Implementation of the end-to-end audit trail: user ID, tool call, input hash, output diff, pull-request link, timestamp. Output: production audit trail with retention policy, exportable as JSON or CSV. With Claude Code Enterprise and Copilot Business out of the box, with Cursor a custom setup with wrapper layer. Anyone running Cursor in production without a wrapper has the heaviest lift here.
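For teams building the wrapper layer themselves (the Cursor case), the record shape is the hard contract, not the transport. A self-contained sketch of one audit-trail record with exactly the six fields named above, hashing the prompt so the trail is exportable without leaking source or prompt contents (field names and the JSONL export are our assumptions, not a vendor format):

```python
import datetime
import hashlib
import json


def audit_record(user_id, tool_call, prompt, output_diff, pr_url):
    """Build one end-to-end audit-trail record.

    Fields: user ID, tool call, input hash, output diff,
    pull-request link, timestamp (the six fields from phase 3).
    """
    return {
        "user_id": user_id,
        "tool_call": tool_call,
        # Store a SHA-256 hash, not the raw prompt.
        "input_hash": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_diff": output_diff,
        "pull_request": pr_url,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }


def export_jsonl(records, path):
    """One JSON object per line: trivially re-exportable as JSON or CSV."""
    with open(path, "w", encoding="utf-8") as fh:
        for rec in records:
            fh.write(json.dumps(rec, ensure_ascii=False) + "\n")
```

The retention policy then reduces to rotating and archiving these JSONL files under role-based access, which is an infrastructure decision rather than a schema one.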

Phase 4, day 61 to 75: human-in-the-loop points. Definition of the points in the workflow where a human must be able to intervene, correct or stop. Code reviews with mandatory approval per PR are the minimum; granular review gates for high-risk components (auth, payment, data migration) are best practice. Output: documented review architecture plus kill-switch test that fires under 5 minutes.
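The kill-switch test from phase 4 is easy to automate as a repeatable drill. A minimal in-process sketch, assuming the production switch is a central flag every agent call checks (in reality that flag would be a feature flag or a revoked IdP token; the class and function names here are ours):

```python
import time


class KillSwitch:
    """Minimal kill-switch sketch: a central flag every agent call checks."""

    def __init__(self):
        self._enabled = True

    def trip(self):
        """Disable all coding-agent access."""
        self._enabled = False

    def check(self):
        """Gate called before every agent action; raises once tripped."""
        if not self._enabled:
            raise PermissionError("coding-agent access disabled by kill switch")
        return True


def drill(switch, max_seconds=300):
    """Fire the switch and verify agent calls are blocked within the budget.

    300 seconds is the 5-minute target named in phase 4.
    """
    start = time.monotonic()
    switch.trip()
    try:
        switch.check()
        blocked = False
    except PermissionError:
        blocked = True
    elapsed = time.monotonic() - start
    return blocked and elapsed < max_seconds
```

Running the drill on a schedule, and logging its result into the phase-3 audit trail, turns "kill switch tested under 5 minutes" from a one-off claim into a standing piece of evidence.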

Phase 5, day 76 to 90: vendor questionnaire plus audit readiness. Formal questionnaire to all AI vendors (Anthropic, GitHub, Cursor, OpenAI), based on the five security questions. Plus an internal compliance dry run: a dummy audit in which an external auditor or a Sentient coach tries to walk through all high-risk obligations in 90 minutes. Output: audit readiness protocol with gap list and fix prioritisation.

Anyone who has cleanly walked through all five phases in 90 days is audit-ready on 2 August 2026. Anyone who starts phase 1 on 1 August 2026 has 14 days of emergency sprint and documents in panic. That exact situation is the one that drives six-figure remediation costs and external compliance consultants billing in calendar-week rates in Q3 2026.

Three common mistakes we see in DACH engagements 2026

Mistake 1: We are waiting for the Omnibus. Classic mistake in late-adopter teams. Reasoning: "the deferral is certain, we save the effort." Reality: the trilogue failed on 28 April 2026, national market surveillance authorities are preparing for the original deadline, and compliance sprints in late July 2026 are no longer feasible because engineering capacity also has to deliver on Q3 roadmap promises.

Mistake 2: We let the compliance department build the audit trail. When the compliance department builds the audit trail independently of engineering, a parallel system emerges that has nothing to do with actual tool usage. The result: two truths, one compliance theatre. Audit trail must be built out of the engineering stack, with engineering ownership and compliance sign-off, not the other way around.

Mistake 3: We classify conservatively and end up blocked. The other extreme: everything becomes high-risk, every internal script needs an audit trail, the pull-request flow congests. Then compliance becomes a brake instead of an enabler. The middle line: classify in a differentiated way, but decide conservatively in the multi-purpose zone. Annex-III-distant cases cleanly documented as not high-risk, with reasoning in the classification matrix.

In an engagement with an industrial supplier in Q1 2026, an engineering team ran phases 1 to 5 to completion in a 90-day sprint and presented an audit-readiness protocol to the supervisory board on day 91. The external compliance consultant who reviewed it added one fix in phase 4 (granular review gates), nothing more. Total effort: an engineering lead half-time over 90 days, plus a Sentient coach for 14 workshop days. Investment under 80,000 EUR for a compliance position the supervisory board could verifiably stand behind.

Pre-2-August checklist

Before 2 August 2026 these five points should be documented in writing in the engineering team. They are our minimum criterion for audit readiness, derived from 12 months of engagement practice and the European Commission classification guideline of February 2026:

  1. Asset inventory with all coding agents in production use, user permissions, repo mapping.
  2. High-risk classification matrix with reasoning per entry, conservative in the multi-purpose zone.
  3. Audit trail end to end, exportable, with retention policy and role-based access rights.
  4. Human-in-the-loop points documented, kill switch tested under 5 minutes.
  5. Vendor questionnaire answers collected from all AI providers, with DPA and data residency confirmation.

If you have more than two of these points sitting at "in progress" on 2 August 2026, your setup is not audit-ready. Mature tools and templates exist for each of the five points in 2026. The question is not whether the gaps can be closed, but whether you start early enough.

Request a 60-minute AI Act readiness workshop for your engineering team →

Frequently asked questions

Should we really plan for 2 August 2026 when the Omnibus might pass? Yes. The probability of a deferral after the failed trilogue is unclear as of May 2026, and the effort delta between "planned for August 2026" and "planned for December 2027" is smaller than the effort delta between "planned" and "emergency sprint." Plan for the earlier date, benefit from any deferral.

Which tools deliver an audit trail out of the box, AI Act compliant? Claude Code Enterprise and GitHub Copilot Business both ship JSON export with user ID, tool call and input hash. Cursor Business needs a custom setup or wrapper layer. In-house developments need their own logging layer covering the mandatory fields from phase 3 (user ID, tool call, input hash, output diff, pull-request link, timestamp).

Do we have to classify all coding agents as high-risk? No. Only where the output lands in an Annex III context (employment, credit scoring, education, law enforcement, critical infrastructure). General-purpose agents in Annex-III-distant contexts need the general transparency obligations but no full high-risk stack.

How big is the fine exposure for DACH mid-market companies in concrete terms? For high-risk violations up to 15m EUR or 3 percent of global group turnover, whichever is higher. For prohibited practices up to 35m EUR or 7 percent. Art. 99(6) caps SMEs and start-ups at the lower of the two figures, so a 50m EUR company classified as an SME would face up to 1.5m EUR for a high-risk violation; a non-SME faces up to 15m EUR. Plus reputational damage and audit follow-up costs.
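The cap logic described above (higher-of for non-SMEs, lower-of for SMEs under Art. 99(6)) is worth writing down once, since it is easy to get backwards. A sketch encoding exactly the rule stated in this FAQ answer:

```python
def fine_cap(turnover_eur, fixed_cap_eur, pct, is_sme=False):
    """Maximum fine exposure under the Art. 99 rule described above.

    Non-SMEs face the HIGHER of the fixed cap and the turnover
    percentage; Art. 99(6) gives SMEs and start-ups the LOWER of
    the two. High-risk violations: 15m EUR / 3 percent; prohibited
    practices: 35m EUR / 7 percent.
    """
    pct_amount = turnover_eur * pct
    if is_sme:
        return min(fixed_cap_eur, pct_amount)
    return max(fixed_cap_eur, pct_amount)
```

For the 50m EUR example from the answer: `fine_cap(50_000_000, 15_000_000, 0.03, is_sme=True)` gives the 1.5m EUR SME cap, while the non-SME variant returns the full 15m EUR.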

Is one compliance consultant enough to cover the 90 days? No, because the audit trail has to come out of engineering. Setup pattern we recommend: compliance lead and engineering lead work in parallel, with Sentient coach support in the phase-3 and phase-4 weeks. Compliance alone produces compliance theatre, engineering alone produces unreviewed audit logs.

What happens after 2 August 2026? Market surveillance authorities can immediately schedule compliance audits: sample-based at first, then increasingly industry-specific. BaFin, BSI and BfDI built up their audit capacity in 2026. Anyone without an audit trail on day 1 does not directly risk the maximum fine, but does risk the start of an audit correspondence that ties up resources and can escalate into fine demands.

What role does coding-agent security play for AI Act compliance? The five security questions we documented in our coding-agent security post cover the data residency, permissions and kill-switch topics that are simultaneously AI Act obligations. Anyone who has done the security audit has 60 percent of AI Act compliance already in the bag.

Does your tool choice tie into compliance? Read our Cursor vs Copilot vs Claude Code comparison →

About the author

Sebastian Lang is co-founder of Sentient Dynamics and leads the Agentic University programme. Before Sentient he was responsible for AI workforce programmes at SAP's Strategy Practice, with 15+ years of engineering leadership experience. Sentient Dynamics works on a success-based compensation model and is deployed across the SHD and Bregal portfolios.

Subscribe to the newsletter | Sebastian on LinkedIn

