
Who Is Liable When the AI Agent Hallucinates? The 2026 Liability Framework for DACH Mittelstand

AILD withdrawn, PLD applies from 09.12.2026, AI Act Art. 26 from 02.08.2026. Three liability layers, five contract clauses, five insurance levers.

Sebastian Lang · May 6, 2026 · 8 min read

You run an AI agent in production. It answers customer queries, drafts quotes, issues discounts. One day it invents a return policy that never existed. The customer pulls out of the contract, citing the agent's statement, and your general counsel asks: who pays for this?

The honest answer in 2026: you do.

Key numbers at a glance

  • AILD withdrawn. The European Commission listed the AI Liability Directive proposal for withdrawal in its 2025 Work Programme on 11.02.2025. The withdrawal was formally published in the Official Journal on 06.10.2025, following the Commission meeting of 16.07.2025. Reason: no consensus among Member States (IAPP, Bird & Bird).
  • PLD from 09.12.2026. The Product Liability Directive (EU) 2024/2853 has been in force since 08.12.2024, transposition deadline 09.12.2026, applies to products placed on the market from that date. Software and AI systems are explicitly covered (European Commission).
  • Moffatt v. Air Canada. Civil Resolution Tribunal British Columbia, 14.02.2024: Air Canada liable for misrepresentation by its chatbot. Damages CAN$ 812.02, but precedent value is the real story (American Bar Association).
  • Allianz Risk Barometer 2026. Cyber risks plus AI risks rank as the largest business risks globally (Allianz).

Why this post matters now

Until February 2025 the German Mittelstand had a comfortable expectation: the AI Liability Directive would clarify, EU-wide, who was on the hook for damage caused by AI systems. Burden-of-proof relief, harmonised definitions, a framework you could grow into. That expectation is dead.

What remains is a gap. As a deployer of an AI agent in Germany you are working in 2026 with three layers in parallel: classical civil law (BGB), the reformed product liability regime (PLD from late 2026), and the deployer obligations under the AI Act (from 02.08.2026, see AI Act Art. 26 Deployer Obligations). None of these layers was written for AI agents. All three apply to you anyway.

Three liability layers for DACH Mittelstand

[Figure: three-layer liability pyramid for AI agents in DACH]

Layer 1: Civil law (applies now)

Section 823 paragraph 1 BGB covers tort liability: violation of life, body, health, freedom, property or another absolute right. Section 280 paragraph 1 BGB covers contractual breach. Both are standard tools of German civil courts and fully applicable to damage caused by an AI agent acting toward a customer, supplier or employee.

What 2026 lacks: high-court guidance. There is currently no BGH or OLG decision that addresses whether an AI agent's hallucination is attributable as fault to the operator, whether a duty-of-care standard analogous to product liability should apply, or how the burden of proof is allocated. You are working with general clauses whose application to AI agents is still being shaped in court.

Practical consequence: document what your agent does, how it decides, and what safety measures you as operator put in place. When a court later asks "did the operator apply the diligence required in commerce", you want a file you can present.

Layer 2: Product liability (PLD from 09.12.2026)

The Product Liability Directive (EU) 2024/2853 is the largest change. Software and AI systems are explicitly defined as "products". Manufacturers (and under specific conditions deployers) are strictly liable for defects at the moment the product is placed on the market and for defects that emerge later through updates, upgrades or machine-learning behaviour.

Important boundary: the PLD applies to products placed on the market from 09.12.2026. Germany has a draft national implementation bill (Reed Smith analyses the German bill in this article). If you roll out a new AI agent in 2026, it falls under the new PLD logic once the deadline kicks in.

What that means in practice: the question "who is the manufacturer" gets complex. Are you, a Mittelstand company configuring a GPT-4o-based agent and shipping it to customers, the manufacturer in the PLD sense? Are you the importer of the foundation model? Are you only the deployer? The answer determines whether you face strict liability or whether you can fall back on classical BGB.

Layer 3: AI Act deployer obligations (from 02.08.2026)

AI Act Art. 26 imposes concrete obligations on you as deployer of a high-risk system: appropriate technical and organisational measures, input-data quality assurance, logging of system activity, human oversight, information of affected persons. A breach of Art. 26 is not a direct damages claim, but it is a strong indicator of fault under Section 823 BGB. If you breach the logging obligation under Art. 26 paragraph 6 and damage occurs, the injured party has a substantial argumentative advantage in civil court.
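The logging obligation is the one Art. 26 duty you can operationalise today. Below is a minimal sketch of structured per-interaction logging, assuming a JSON-lines file as the sink; the field names and the schema are illustrative choices of this post, not anything the AI Act prescribes — the regulation only requires that logs let you trace what the system did.

```python
import json
import uuid
from datetime import datetime, timezone

def log_agent_interaction(log_file, user_input, agent_output,
                          model_version, human_reviewed):
    """Append one structured record per agent interaction.

    Schema is illustrative: the point is that every output your agent
    sends to a customer maps to one auditable, timestamped record.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,    # which model/config produced the output
        "input": user_input,               # what the agent was asked
        "output": agent_output,            # what the agent answered
        "human_reviewed": human_reviewed,  # human-in-the-loop flag for later audits
    }
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record["event_id"]
```

An append-only JSON-lines file is deliberately boring: it exports cleanly to a SIEM and survives the "show me what the agent said on that day" question in court.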

Cross-link: GPAI obligations and how they interact with deployer responsibility, covered here: AI Act GPAI Obligations August 2026.

Teaching case: Moffatt v. Air Canada

In 2022 Jake Moffatt booked bereavement fares with Air Canada based on information given to him by the airline's website chatbot. The information was wrong. Air Canada refused the refund and argued before the British Columbia Civil Resolution Tribunal that the chatbot was "a separate legal entity that is responsible for its own actions". The tribunal rejected the argument and held that Air Canada owed Moffatt a duty of care and was liable for the chatbot's misrepresentation under the negligent-misrepresentation doctrine. Damages: CAN$ 812.02 (McCarthy Tetrault).

Transferable to Germany? With caution. Common law (Canada, UK, US) and civil law (Germany) work differently. The German equivalent would not be a negligent-misrepresentation doctrine but Section 311 paragraph 2 BGB (pre-contractual obligation) plus Section 280 paragraph 1 BGB. The argument "the chatbot is a separate entity" will fare just as poorly in a German civil court as it did in Vancouver. When your agent makes a statement to a customer, that statement is yours. You operate the system, you deployed it, you are accountable for the outputs.

What Moffatt teaches: courts have no patience for architecture arguments. The question is not "who hallucinated", but "who placed the system in commerce and put it in front of customers".

Insurance renewal 2026: five clauses to negotiate now

Insurers are in an interesting position in 2026. Allianz Risk Barometer ranks cyber plus AI as the largest business risk, and Munich Re writes in its Cyber Insurance Trends 2026 that agentic AI affects claim frequency more than claim severity. What 2026 still lacks are established market-standard clauses for agentic-AI loss events. The major cyber insurers are still building their risk profiles.

That is your opening. At your next renewal conversation (cyber, professional indemnity, IT liability), actively bring these five points to the table:

  1. Clear definition of "AI system". Do not let the insurer define it. Reference Art. 3 No. 1 AI Act in the policy. Otherwise you sit in a definition trap when a claim arises.
  2. Coverage for autonomous agent actions. Ask explicitly whether damage caused by an agent without a human in the loop is covered. If the answer is "we examine case by case", that is a coverage hole.
  3. Logging requirements as policy condition, not as exclusion. Insurers will increasingly prescribe logging standards (which you need anyway under AI Act Art. 26). Make sure your existing logs satisfy the requirements.
  4. Cross-coverage between AI Act fines and damages. Fines are typically not insurable. But the civil follow-on costs of an AI Act breach (damages to injured parties) should be covered. Separate the two cleanly.
  5. Foundation-model-provider-defect clause. If your OpenAI or Anthropic provider has a model defect that causes damage at your end, you are in a subtle recourse situation. Ask how your policy handles this scenario.

Five contract clauses for your provider agreement

Just as important as insurance: your contract with the AI service provider (or with your implementation partner). Here are five clauses that belong in every 2026 contract.

  1. Output warranty disclaimer
     • Obligation: provider clarifies that outputs are probabilistic and not legally binding.
     • Risk without the clause: you get pulled into warranty liability based on provider marketing claims ("99% accurate").
     • Recommendation: disclaimer in the contract, but the provider commits to best-practice mitigation (guardrails, eval pipeline).
  2. Logging obligation
     • Obligation: provider delivers full request/response logs, retained at least 6 months.
     • Risk without the clause: you cannot satisfy AI Act Art. 26 paragraph 6, and you cannot prove anything when a claim arises.
     • Recommendation: minimum 12 months, exportable to your SIEM, retention clause aligned with GDPR (see GDPR and Agentic AI).
  3. Indemnification
     • Obligation: provider indemnifies you for third-party claims from IP infringement in model training or outputs.
     • Risk without the clause: you are liable for training-data issues you have never seen.
     • Recommendation: indemnification cap at minimum 12 months of contract value, no cap for personal injury and wilful breach.
  4. SLA hallucination rate
     • Obligation: provider measures and reports the hallucination rate on a defined eval set monthly.
     • Risk without the clause: you fly blind, you cannot detect deterioration, and you have no argument when damage occurs.
     • Recommendation: eval set defined by the customer, threshold contractually fixed, right to terminate on repeated breach.
  5. Audit right
     • Obligation: you get access to security tests, red-team reports, and prompt-injection defences (see Prompt Injection Defence).
     • Risk without the clause: you cannot fulfil your own duty of care, and you cannot exonerate yourself.
     • Recommendation: at least annually, on-site or remote, with reports flowing to your InfoSec team.
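The hallucination-rate SLA only works if the measurement is mechanical and both sides agree on it up front. Below is a minimal sketch of how such a check could be computed, assuming the customer-defined eval set is a list of prompt/check pairs; the 2% threshold is an illustrative placeholder, not a market standard.

```python
def hallucination_rate(agent, eval_set):
    """Run a fixed eval set through the agent and return the share of
    answers that fail their acceptance check.

    agent:    callable taking a prompt string, returning an answer string
    eval_set: list of (prompt, check_fn) pairs; check_fn returns True
              when the answer is acceptable
    """
    failures = sum(1 for prompt, check in eval_set if not check(agent(prompt)))
    return failures / len(eval_set)

def sla_breached(rate, threshold=0.02):
    # Threshold is illustrative; fix the real value contractually.
    return rate > threshold
```

Keeping the checks as plain predicates (substring match, regex, numeric tolerance) makes the monthly report reproducible by either party — which is exactly what you need when invoking a right to terminate.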

Bottom line: what you do this month

  1. Inventory. List every AI agent in your company that produces outputs going to customers, suppliers or employees. Per agent: use case, data flow, human-in-the-loop yes/no, provider, contract status. If you cannot do this in 5 working days, you have an inventory problem, not a liability problem.
  2. Contract review. Take the five clauses above and run a gap check against your existing AI service contracts. Renegotiate at every renewal. Mandatory for new contracts from today.
  3. Insurance meeting. Call your cyber insurer before they call you. Bring the five clauses from the insurance section. Insurer risk engineers are still in learning mode in 2026, less so in 12 months.
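The inventory step lends itself to a fixed schema from day one, so the contract gap check in step 2 becomes a query rather than a spreadsheet hunt. A minimal sketch, assuming one Python dataclass per agent; the field names mirror the checklist above and are suggestions, not regulatory requirements.

```python
from dataclasses import dataclass

@dataclass
class AgentInventoryEntry:
    """One record per production AI agent, per the inventory checklist."""
    name: str
    use_case: str
    data_flow: str            # e.g. "CRM -> agent -> customer email"
    human_in_the_loop: bool
    provider: str
    contract_reviewed: bool   # have the five clauses been gap-checked?

def review_backlog(inventory):
    """Agents whose provider contract still needs the five-clause gap check."""
    return [a.name for a in inventory if not a.contract_reviewed]
```

If this structure cannot be filled in within five working days, that is the inventory problem the checklist warns about.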

If the PLD deadline catches you off guard in November 2026, it will not be the EU's fault. It will be May 2026's.

Companion post: AI Audit Readiness in 90 Days, the operational counterpart to the legal analysis in this post.

About the author

Sebastian Lang

Co-Founder · Business & Content Lead

Co-founder of Sentient Dynamics. 15+ years of business strategy (including SAP), MBA. Writes about AI Act compliance, ROI measurement, and how Mittelstand CTOs actually adopt agentic AI.
