GDPR and Agentic AI in Production: What German DPOs Audit in 2026 — and What They Reject

AI Act and GDPR are complementary — for Agentic AI in production this means two duty catalogues at the same time. The critical pitfalls, with article references instead of gut feeling.

Sebastian Lang · May 5, 2026 · 7 min read

Key points at a glance

  • The AI Act and GDPR are complementary, not substitutes (see AI Act Recital 10 and Art. 2(7)) — anyone running Agentic AI in production must satisfy both duty catalogues at the same time, not pick one.
  • GDPR Art. 35 requires a Data Protection Impact Assessment (DPIA) for "likely high risk" — for generative AI processing personal data this threshold is almost always met, especially when automated individual decisions (Art. 22) are involved.
  • AI Act Art. 27 additionally requires a Fundamental Rights Impact Assessment (FRIA) for certain deployers of high-risk AI systems (public bodies, private bodies providing public services, plus banks and insurers for credit-scoring and risk-assessment use cases — Annex III(5)(b)/(c)). DPIA and FRIA overlap but do not replace each other — where both apply, both are required.
  • AI Act Art. 26 sets out deployer obligations for high-risk AI systems. The deployer is not the provider (e.g. OpenAI), but you as a German Mittelstand company running the agent in production.

Why this post is relevant now

Across the German Mittelstand we see a recurring 2026 pattern: engineering teams build an Agentic AI use case, the pilot runs, the first production rollout goes live — and then the external Data Protection Officer (DPO) calls. Three typical reactions:

  1. "We just use the OpenAI API, that's their problem" — wrong. You are the controller under GDPR (Art. 4 No. 7) when you decide on purposes and means of processing. Engaging a processor (Art. 28) does not change that.

  2. "We have a DPA with OpenAI/Anthropic/Microsoft, that's enough" — the data processing agreement is mandatory but only one component. Lawful basis (Art. 6), data minimisation (Art. 5(1)(c)), and the data subject rights of access, rectification and erasure (Art. 15-17) remain your obligations.

  3. "The AI Act resolves the GDPR questions" — also wrong. AI Act Art. 2(7) explicitly states: GDPR remains unaffected. Anyone running Agentic AI in production has two duty catalogues to fulfil, not one.

This post translates that into a concrete checklist with article references — what a German DPO will actually ask for in 2026.

The seven critical GDPR articles for Agentic AI in production

1. Art. 6 — Lawful basis

What it requires: Every processing of personal data needs a lawful basis. For Agentic AI typically Art. 6(1)(b) (contract performance), (c) (legal obligation), (f) (legitimate interest). The latter requires a documented balancing test.

Where it goes wrong: "We train the model with customer data because that's necessary" — no balancing test documented, no opt-out mechanism, no notice in the privacy policy. That fails the first audit.

What to do: One written lawful basis per use case. For Art. 6(1)(f), additionally the balancing test with criteria (purpose, necessity, protected interests of data subjects).

2. Art. 22 — Automated individual decisions

What it requires: When the agent makes a decision with legal effect or significant impact on the person (e.g. credit check, recruitment, tariff classification), Art. 22 applies — prohibition with exceptions. Human review of the final decision is mandatory.

Where it goes wrong: The agent automates the first 95 percent of a recruitment process, the human just rubber-stamps. That is not "human review" within the meaning of Art. 22(3).

What to do: Define where a substantive human decision happens. Build the workflow so the human actually has the ability to overturn the decision — and document that in the audit log.
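One way to make that "substantive human decision" auditable is to record, for every case, what the agent recommended, what the human actually decided, and a mandatory rationale whenever the two diverge. The sketch below is a hypothetical audit-log entry — names like `DecisionRecord` and `record_decision` are illustrative, not from any specific framework:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """Hypothetical audit-log entry for an Art. 22 human review step."""
    case_id: str
    agent_recommendation: str   # what the agent proposed
    human_decision: str         # what the reviewer actually decided
    reviewer_id: str
    overturned: bool            # True if the human deviated from the agent
    rationale: str              # free-text reasoning, required on overturn
    decided_at: str

def record_decision(case_id, recommendation, decision, reviewer_id, rationale=""):
    overturned = decision != recommendation
    if overturned and not rationale:
        # Forces the reviewer to document why — evidence that review is substantive
        raise ValueError("Overturning the agent requires a documented rationale")
    entry = DecisionRecord(
        case_id=case_id,
        agent_recommendation=recommendation,
        human_decision=decision,
        reviewer_id=reviewer_id,
        overturned=overturned,
        rationale=rationale,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(entry))  # append this line to the audit log

entry = record_decision("case-4711", "reject", "accept", "reviewer-7",
                        rationale="Candidate meets criteria the agent weighted too low")
```

The point of the `overturned` flag is defensive: if a DPO later asks whether humans ever deviate from the agent, a log where the flag is always `False` is itself evidence of rubber-stamping.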

3. Art. 25 — Privacy by Design and by Default

What it requires: Data protection must be built into the system from the design phase, not bolted on afterwards. Default settings must be privacy-friendly.

Where it goes wrong: The agent logs the full prompt text including personal data into CloudWatch by default, because "we might need it for debugging later". Privacy by Default violated.

What to do: Structure logs — prompt hashes instead of plain text, or PII redaction in the logging pipeline. Define and enforce retention durations automatically.
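A minimal sketch of what "prompt hashes instead of plain text" can look like: hash the full prompt for debugging correlation and log only a redacted preview. The regexes here are deliberately simplistic placeholders — a production pipeline would use a proper PII-detection library:

```python
import hashlib
import re

# Illustrative redaction patterns — NOT exhaustive PII detection.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d /-]{7,}\d")

def loggable(prompt: str, preview_len: int = 80) -> dict:
    """Return a log-safe record: hash for correlation, redacted preview for debugging."""
    digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    redacted = EMAIL_RE.sub("[EMAIL]", prompt)
    redacted = PHONE_RE.sub("[PHONE]", redacted)
    return {
        "prompt_sha256": digest,               # stable ID to correlate with traces
        "prompt_preview": redacted[:preview_len],
        "prompt_len": len(prompt),
    }

record = loggable("Draft an offer for max.mustermann@example.com, tel. +49 151 23456789")
```

The hash lets engineers confirm "this is the same prompt the user complained about" without the plaintext ever reaching CloudWatch — which is exactly the Privacy-by-Default posture Art. 25 asks for.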

4. Art. 28 — Processor agreements

What it requires: A DPA is mandatory for every external LLM provider (OpenAI, Anthropic, Microsoft, Google). For sub-processors (e.g. Anthropic via AWS Bedrock) a sub-processor list must be available.

Where it goes wrong: DPA with OpenAI is signed, but no one has reviewed the sub-processor list. If OpenAI runs on Microsoft Azure infrastructure, that is a sub-processor — you need to know which ones.

What to do: Per LLM provider document: DPA in place (date), sub-processor list reviewed (date), processing purpose documented. Track in the records of processing activities (Art. 30).
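The per-provider tracking can live in a spreadsheet, but keeping it as structured data makes the "date of last review" machine-checkable. A hypothetical sketch (field names and the 365-day review window are assumptions, not regulatory requirements):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ProcessorEntry:
    """Hypothetical Art. 28/30 tracking entry for one LLM provider."""
    provider: str
    dpa_signed: date
    subprocessors_reviewed: date   # date of last sub-processor list review
    processing_purpose: str

    def review_overdue(self, today: date, max_days: int = 365) -> bool:
        # Flag entries whose sub-processor review is older than the policy window
        return (today - self.subprocessors_reviewed).days > max_days

entry = ProcessorEntry("OpenAI", date(2025, 3, 1), date(2025, 3, 1),
                       "Sales agent — lead qualification")
overdue = entry.review_overdue(date(2026, 5, 5))
```

A nightly job that prints every overdue entry turns the Art. 30 record from a dead document into the quarterly review mentioned at the end of this post.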

5. Art. 32 — Security of processing

What it requires: Appropriate technical and organisational measures to mitigate risks. For Agentic AI specifically: authentication, encryption, access controls, backup, recovery, regular testing.

Where it goes wrong: The agent's API key sits in a .env file in the repo, has not been rotated for 18 months, and has admin rights to all endpoints. One security incident — and the exposure to GDPR fines rises sharply.

What to do: Secrets management (e.g. Vault, AWS Secrets Manager), key rotation policy, least-privilege permissions per use case, documented incident response.
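The rotation policy is easy to automate as a check. A minimal sketch, assuming key creation dates come from your secrets manager (AWS Secrets Manager and Vault both expose them via their APIs — the hard-coded inventory and the 90-day window below are placeholders):

```python
from datetime import datetime, timedelta, timezone

# Assumed policy window — set this to whatever your rotation policy states.
ROTATION_MAX_AGE = timedelta(days=90)

def keys_due_for_rotation(keys: dict, now=None) -> list:
    """Return the names of all keys older than the rotation policy window."""
    now = now or datetime.now(timezone.utc)
    return sorted(name for name, created in keys.items()
                  if now - created > ROTATION_MAX_AGE)

# Placeholder inventory — in production, fetch creation dates from the
# secrets manager instead of hard-coding them.
inventory = {
    "agent-openai-key": datetime(2026, 1, 2, tzinfo=timezone.utc),
    "agent-bedrock-key": datetime(2026, 4, 20, tzinfo=timezone.utc),
}
due = keys_due_for_rotation(inventory, now=datetime(2026, 5, 5, tzinfo=timezone.utc))
```

Wiring this into CI or a scheduled alert gives you exactly the kind of documented, regularly tested measure Art. 32 expects — and a paper trail for the audit.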

6. Art. 35 — Data Protection Impact Assessment (DPIA)

What it requires: A DPIA is mandatory when processing is likely to result in a high risk. For generative AI with personal data the threshold is practically always met — the list of processing activities requiring a DPIA published by the DSK (the conference of German data protection authorities) explicitly includes "use of artificial intelligence".

Where it goes wrong: "We don't need one, it's just sales" — DPIA is not use-case-dependent but risk-dependent. Sales AI with personal lead data = DPIA.

What to do: Run a DPIA per AI use case — typically 1-3 days of effort with DPO support. Output: documented risks, measures, residual risk assessment.

7. Art. 44 ff. — Third-country transfers

What it requires: Transfers to third countries (US, UK, etc.) need a transfer mechanism. For the US since July 2023, the EU-US Data Privacy Framework (DPF) — provided the provider is certified.

Where it goes wrong: OpenAI has been DPF-certified since February 2024, Anthropic too — but older open-source providers or specialist LLMs not necessarily. Failing to check means: no valid transfer mechanism, breach of Art. 44.

What to do: Per LLM provider check the DPF status on the Data Privacy Framework List (as of May 2026). For non-certified providers: Standard Contractual Clauses (SCC) plus Transfer Impact Assessment.
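The decision tree for the transfer mechanism is small enough to encode directly. A hypothetical helper — the adequacy set below is illustrative and incomplete, and every provider's country and DPF status must be verified against the official Data Privacy Framework List, not hard-coded facts:

```python
# Illustrative, incomplete adequacy set — check the EU Commission's current
# adequacy decisions before relying on any entry here.
ADEQUATE_COUNTRIES = {"EU", "EEA", "UK", "CH", "JP"}

def transfer_mechanism(provider_country: str, dpf_certified: bool) -> str:
    """Return the required Art. 44 ff. transfer mechanism for one provider."""
    if provider_country in ADEQUATE_COUNTRIES:
        return "adequacy decision — DPA sufficient"
    if provider_country == "US" and dpf_certified:
        return "DPF — DPA sufficient"
    return "SCC + Transfer Impact Assessment required"

status = transfer_mechanism("US", dpf_certified=False)
```

Running this over the provider inventory from the Art. 28 section above immediately surfaces the non-certified providers that still need the SCC-plus-TIA route.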

The AI Act layer on top

Art. 26 — Deployer obligations for high-risk AI

If your agent falls into one of the Annex III areas (employment/HR, education, critical infrastructure, law enforcement etc.), the additional deployer obligations under Art. 26 apply: follow provider instructions, ensure human oversight, monitor input data, retain logs (at least six months, Art. 26(6)).

Art. 27 — Fundamental Rights Impact Assessment (FRIA)

For certain deployers of high-risk AI (public bodies and private bodies providing public services, plus banks and insurance for credit-scoring and risk-assessment), Art. 27 requires a FRIA — analogous to the DPIA, but focused on fundamental rights (not just data protection).

Important: FRIA does not replace the DPIA. Anyone meeting both conditions does both. In practice parts can be assessed together — but the output documents are kept separate.

A concrete audit-trail proposal

What a DPO in 2026 typically wants to see in an audit:

  1. Records of processing activities (Art. 30) with the AI use case as its own entry — purposes, data categories, recipients, third-country transfers, retention periods, technical and organisational measures.
  2. DPA folder per LLM provider — DPA contract, sub-processor list, date of last review.
  3. DPIA document for the use case — risk analysis, measures, residual risk assessment. If applicable, additionally a FRIA.
  4. Logging concept with retention durations — what is logged (hash vs. plaintext), where (which system), how long, who has access.
  5. Incident-response plan for AI-specific incidents — hallucination with personal reference, data leak via prompt injection, faulty automated decision.
  6. Training records — AI literacy under AI Act Art. 4 (see our AI literacy mandate post) plus GDPR training of the engineers involved.

Anyone with these six building blocks in place and reviewing them quarterly covers about 80 percent of typical audit questions.

Two common disputes

"Do we need to anonymise personal data before it goes to the LLM?"

Not necessarily, but often sensible. Anonymisation (irreversible) removes the data from GDPR scope entirely. Pseudonymisation (Art. 4 No. 5) stays under GDPR but reduces risk significantly. Which path fits depends on the use case — for sales often pseudonymisation with re-identification at the end, for analytics often full anonymisation.

"Is a DPA enough, or do we need Standard Contractual Clauses?"

A DPA is always mandatory. SCCs additionally become relevant when the provider is not DPF-certified — then SCC plus Transfer Impact Assessment is the standard route. DPF-certified US providers only need the DPA, no additional SCC construct.

Bottom line

GDPR and AI Act are not an either-or but a both-and. Anyone running Agentic AI in production should build the audit trail before the DPO calls — not after. The six building blocks above are achievable in 4-6 weeks per use case if engineering and data protection collaborate from day one.

Which of the seven GDPR articles is your biggest building site in 2026 — and have you already separated DPIA and FRIA, or are they still mixed in one document?

About the author

Sebastian Lang

Co-Founder · Business & Content Lead

Co-Founder of Sentient Dynamics. 15+ years of business strategy (incl. SAP), MBA. Writes about AI Act compliance, ROI measurement, and how Mittelstand CTOs actually adopt agentic AI.
