A situational report from project practice. Written for decision-makers in German-speaking mid-sized companies who want to deploy AI without finding the next Schrems debate on their desk two years from now – and relevant for anyone trying to understand how European AI compliance is actually being done in regulated industries.
🚀 The starting situation
In many mid-sized companies – especially in banking, insurance, healthcare and public administration across the German-speaking region – AI is still not being deployed, or only very hesitantly. Not because the technology isn't available. Not because the business case is unclear. But because nobody in-house can give a clear answer on what is actually allowed under data protection law.
The short answer: there is a lot more that works today than most people think. You just have to know where to look – and in what order to ask the questions. This article is a situational report. Which models and providers can actually be used in a GDPR-compliant way in 2026? Where are the pitfalls? And how do companies handle this in practice? No marketing claims, no panic tone, no oversimplified blanket statements.
A brief note on terminology for readers outside Europe: GDPR (General Data Protection Regulation, known in Germany as DSGVO) is the EU's omnibus data protection law, in force since 2018, with fines of up to €20 million or 4% of global turnover. It applies to any processing of personal data of EU residents, regardless of where the processor is located. The newer EU AI Act adds a parallel regime specifically for AI systems, with fines up to €35 million or 7% of global turnover. Both regimes apply at the same time – GDPR governs the data, the AI Act governs the system.
❓ What has changed legally in 2025 and 2026
If your last deep dive into this topic was in 2023 or 2024, you're working from an outdated picture. Three things have moved in the last 18 months that are relevant to every architectural decision.
First: the EU AI Act is in force, phase by phase. Since 2 February 2025, the prohibitions under Art. 5 and the AI literacy obligation under Art. 4 apply – this affects every company, not just high-risk users. Since 2 August 2025, the obligations for general-purpose AI models (Art. 53) apply, flanked by the GPAI Code of Practice published on 10 July 2025. Enforcement for the large model providers begins on 2 August 2026.
The Digital Omnibus on AI, proposed by the Commission on 19 November 2025, then pushed the high-risk deadlines back: Annex III to 2 December 2027, Annex I to 2 August 2028. Important caveat: the Omnibus is not yet final at the time of writing. Formally, 2 August 2026 remains the high-risk deadline. Betting on the delay is speculation – and speculation is not a sound strategy in regulated industries. In any case, the Omnibus changes only the timelines, not the substantive requirements.
Second: German supervisory authorities have caught up. The Datenschutzkonferenz (DSK) – the joint body of all federal and state data protection authorities in Germany – has issued guidance in three waves: in May 2024 on the general selection and use of AI, in June 2025 on the technical and organisational measures along the seven protection goals, and in October 2025 for the first time specifically on retrieval-augmented generation (RAG) architectures. These three documents have become the working basis for every data protection officer in the German-speaking region. If you don't know them, you're negotiating with your own DPO without a shared vocabulary.
The most important takeaways: even pseudonymised inputs to an LLM are, as a rule, considered personal data. A Data Protection Impact Assessment (DPIA) is almost always required for LLM deployment. RAG architectures offer structural advantages for data minimisation and data subject rights – but they don't solve the problem of an unlawfully trained base model if one is used.
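What that structural advantage looks like in code: in a RAG setup, the model only ever sees the snippets retrieved for the current request, and an erasure request under Art. 17 GDPR translates into deleting documents from the index – no retraining involved. A minimal sketch to illustrate the principle; the naive keyword retrieval and the placeholder model call are illustrative assumptions, not the DSK's reference architecture:

```python
# Minimal RAG sketch: the index is the single place client data lives,
# and the model receives only the snippets retrieved per request.
# Keyword-overlap retrieval is a toy stand-in for a real vector store.

index = {
    "doc-17": "Contract review notes for client Meier, March 2026 ...",
    "doc-23": "Travel expense policy, updated January 2025 ...",
}

def call_llm(prompt: str) -> str:
    # Placeholder: swap in any EU-hosted or locally served model endpoint.
    return f"[model answer based on {len(prompt)} prompt characters]"

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap and return the top k."""
    words = set(question.lower().split())
    ranked = sorted(
        index.values(),
        key=lambda text: len(words & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    return call_llm(f"Answer only from this context:\n{context}\n\nQ: {question}")

def erase_data_subject(doc_ids: list[str]) -> None:
    """Right to erasure in a RAG world: delete from the index, not the model."""
    for doc_id in doc_ids:
        index.pop(doc_id, None)
```

The caveat from the DSK guidance still applies, of course: this controls what reaches the model per request, but it does not heal a base model that was itself trained unlawfully.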
Third: BaFin has taken a position. BaFin is Germany's Federal Financial Supervisory Authority, roughly comparable to the SEC combined with elements of the CFTC, FDIC and OCC. Its guidance on ICT risks in AI deployment from December 2025 amounts to a de facto reversal of the burden of proof for banks and insurers – formally not binding, but very much binding in supervisory practice. Anyone deviating must demonstrate an equivalent level of protection. That is something different from a classic circular, but hardly milder in effect. In parallel, the BSI published the 2026 version of its C5 catalogue – BSI being Germany's Federal Office for Information Security, and C5 the country's most important cloud security criteria catalogue. The new version adds post-quantum cryptography, confidential computing and expanded supply chain risk management. For healthcare providers and financial institutions, a C5 Type 2 attestation has effectively become a market entry prerequisite.
The bottom line from this legal update: some deadlines are shifting. But the architectural decisions being made today will determine the next five years. And they need to hold up against supervisory practices that are tightening, not loosening.
📌 The CLOUD Act debate: from rhetorical weapon to nuanced picture
In almost every discussion about AI providers, at some point someone says: "But the servers are in Frankfurt." Or the reverse: "That's impossible because of the CLOUD Act." Both statements fall short. The legal situation is more complicated – and the arguments go both ways.
The critical position, which dominates in practice, runs like this: the US CLOUD Act of 2018 obliges US companies to hand over data to US authorities on request, regardless of server location. Microsoft, Amazon and Google are US companies, and their German subsidiaries are subject to this obligation as well. The European Court of Justice flagged exactly this type of constellation in the Schrems II ruling – technically on the basis of FISA 702, but the underlying concern is the same. The European Data Protection Board established back in 2019 that a pure CLOUD Act request does not, as a rule, constitute a legal basis under Art. 48 GDPR, because the CLOUD Act is not a mutual legal assistance treaty. And the EU Data Act, in Chapter VII, actively requires European cloud providers to defend against unlawful third-country access – creating a direct collision scenario with the CLOUD Act.
The more nuanced counter-position runs: the CLOUD Act was not addressed in the Schrems II ruling at all. The Wiesbaden administrative court – later overturned – and the government of North Rhine-Westphalia have argued that as long as data remains in the European Economic Area, there is no third-country transfer in the first place. The CLOUD Act gives the US the right to demand disclosure, but a foreign statute does not automatically override domestic law – in a conflict, conflict-of-laws principles and, where fundamental rights are concerned, the ordre public apply.
What this means for project practice: the CLOUD Act is not an automatic disqualifier, but it is a risk factor that must be documented. For companies outside heavily regulated industries, with a clean Transfer Impact Assessment and Standard Contractual Clauses in place, US cloud use is not ruled out. For banks, insurers, health insurance funds and public authorities, it is often a knockout criterion in practice – because risk tolerance is extremely low and supervisory practice is correspondingly strict. Anyone walking into a supervisory review with the argument "no third-country transfer under the NRW interpretation" will get a friendly nod followed by a pointed question about the concrete risk assessment.
That is the heart of the matter. The question isn't "CLOUD Act yes or no." The question is what level of protection is required and what documentation your own legal department and the relevant supervisor expect.
❓ What supervisory practice showed in 2024 and 2025
Legal departments take their cues not primarily from statute text but from what has held up at appellate level or in supervisory practice. Three developments are particularly relevant for the architectural decisions being made right now.
In May 2023, the Irish Data Protection Commission imposed a GDPR fine of €1.2 billion on Meta – for illegal data transfers to the US. The message to legal departments was unambiguous: transfer violations are not a theoretical threat, they cost billions.
In December 2024, the Italian data protection authority Garante imposed €15 million on OpenAI. The findings ranged from an unreported data breach through missing legal basis for training on personal data to violations of transparency obligations and missing age verification. On top of that, the authority ordered a six-month information campaign in Italian media – a legal novelty. Particularly relevant is the parallel EDPB Opinion 28/2024: an AI model trained on unlawfully collected data but effectively anonymised before deployment does not violate the GDPR. That opens a legal path forward – but with high requirements for demonstrating anonymisation.
Then there's the Microsoft 365 story, which is instructive because it shows just how much positions can move. The DSK had declared Microsoft 365 unfit for GDPR-compliant use back in 2022 – with a razor-thin 9:8 majority. After three years of Microsoft adjustments (EU Data Boundary, revised data processing agreements, Advanced Data Residency, Customer Lockbox) and intensive negotiations, the Hessian Commissioner for Data Protection and Freedom of Information (HBDI – one of the state-level data protection authorities) published a report in November 2025 with the headline conclusion: Microsoft 365 is in principle deployable in a GDPR-compliant way if certain conditions are met. This was the first authority-level course correction on the question in three years. For Copilot the situation remains more complex – integration with Bing Search creates a separate Microsoft responsibility outside of data processing, and a DPIA is practically always required.
Another signal: the Baden-Württemberg procurement chamber ruled in July 2022 that the use of US cloud providers in a specific procurement procedure was unlawful under Art. 44 ff. GDPR. The decision was later overturned by the Karlsruhe higher regional court on procedural grounds. But the substantive reasoning has continued to carry weight in discussions with public sector clients.
And as a contrast – the negative example: DeepSeek has been assessed by the State Commissioner for Data Protection of Lower Saxony and several other supervisory authorities as not GDPR-compliant. Storage of IP addresses, keystrokes and documents without transparency, potential Chinese authority access, missing data processing agreement, no adequacy decision for China. For regulated industries, this is currently unusable. A useful point of contrast when explaining internally why not all providers are created equal.
❓ The three classes of providers – and why the distinction matters
When searching the market for "GDPR-compliant AI providers," you find three classes that often get lumped together. They differ fundamentally.
🔹 Class 1 – Genuine EU providers
Headquartered in the EU, servers in the EU, company subject exclusively to EU law. No US third-country problem. No CLOUD Act. No Schrems debate. For regulated industries, this is the category that gets through the data protection officer and audit committee most easily.
Mistral AI from Paris is the European all-rounder in 2026. The company offers a complete family of models – Mistral Large 3, Mistral Small 4 (a sparse mixture-of-experts architecture with 119 billion parameters, of which only about 6 billion are active), Codestral, Voxtral, Magistral and Ministral – covering frontier chat, code, audio and edge use cases. Mistral Small 4 is available under Apache 2.0, so it can also be self-hosted. Training opt-out is the default on La Plateforme and on the Le Chat Pro/Teams products. Mistral is a signatory to the GPAI Code of Practice. The company is subject to French and EU law; the fact that Azure GPU infrastructure is used for some model training is irrelevant for the GDPR assessment of inference – what runs there is the training of model weights, not the processing of customer data.
Aleph Alpha from Heidelberg fundamentally restructured itself at the end of 2024. The company discontinued development of its own frontier language models and pivoted entirely to PhariaAI – an "operating system for generative AI" for enterprise and public sector. PhariaAI runs on-premises, in private cloud, or as a SaaS via STACKIT, the cloud arm of Germany's Schwarz Group (the owners of Lidl and Kaufland, in case that name doesn't ring a bell internationally). The T-Free architecture introduced in January 2025 shows measurable advantages on German-language administrative texts. Reference customers include public authorities, global chip manufacturers and automotive suppliers. The announced merger with Canadian provider Cohere, supported by both governments, is intended to close the technological gap to US frontier models. Aleph Alpha is an early signatory of the EU Code of Practice for general-purpose AI and positions itself explicitly on compliance quality rather than benchmark leadership. For public authorities, critical infrastructure operators, banks and insurers with their own data centres and full-stack integration budgets, this is currently the most coherent offering. For the typical 50-person mid-sized company, it's often too heavyweight.
Black Forest Labs from Freiburg is Europe's most valuable AI startup, with a valuation of USD 3.25 billion – and world-class in image generation. The FLUX model family is integrated in ComfyUI, Hugging Face, Replicate and, more recently, Mistral's Le Chat as an image generator. FLUX.2 [klein] 4B runs locally with about 13 GB of VRAM. One point to check carefully for regulated scenarios: Black Forest Labs also operates a San Francisco location. The pure EU jurisdiction argument doesn't work as cleanly here as it does for Mistral or Aleph Alpha.
DeepL from Cologne is the de facto European standard for translation and text editing, ISO 27001 and SOC 2 certified. Parts of the infrastructure have historically been run on Azure – worth clarifying contractually for particularly sensitive content.
Beyond these, there is LightOn from Paris (energy-efficient enterprise models), Neuroflash for content marketing, Mindverse as a general platform, and the deepset/Haystack framework from Berlin for RAG architectures. For specific use cases these are worth a look; for the foundation, the first four names usually suffice.
For self-hosting and operations, European cloud infrastructure rounds out the picture: STACKIT (Schwarz Group) with C5 and ISAE 3000/3402 attestations, data centres in Germany and Austria; IONOS Cloud; OVHcloud with the French state SecNumCloud standard and 43 data centres in 9 countries; Open Telekom Cloud from T-Systems on OpenStack; Scaleway from France with managed AI offerings (with one caveat: the management console uses some US-based services); and Hetzner as a cost-effective, solid base for self-hosting without enterprise services.
🔹 Class 2 – US models in EU data centres
This is the class most current discussions are actually about when someone says "of course we use it in a GDPR-compliant way." OpenAI via Azure Frankfurt, Anthropic Claude via AWS Bedrock Frankfurt, Gemini via Google Cloud Vertex AI in the EU region.
The widespread misconception: many treat this as equivalent to Class 1. It isn't. Data is physically processed in the EU, but the company providing the service contractually is a US company. The CLOUD Act problem persists. Microsoft's EU Data Boundary is technically clean, but leaves residual US staff access in support cases. Azure OpenAI does not use customer inputs for training by default. Anthropic via Bedrock runs a 7-day abuse monitoring window not used for training – subject to documentation, but usually acceptable.
For companies outside heavily regulated industries, Class 2 is workable with a clean Transfer Impact Assessment, Standard Contractual Clauses and documented risk trade-off. For banks, insurers and public authorities, it gets difficult – not because it's legally impossible per se, but because Section 25a of the German Banking Act (KWG), DORA and the BaFin guidance in effect define a burden of proof that pure US offerings struggle to meet.
For readers less familiar with the European acronym soup: DORA is the EU Digital Operational Resilience Act, which took effect for financial entities in January 2025. It sets a uniform framework for ICT risk management, third-party oversight and incident reporting – and applies directly, without national implementation. Combined with Section 25a KWG's governance requirements, it creates a stack of obligations that any AI architecture in financial services needs to satisfy.
🔹 Class 3 – Self-hosted open-source models
The gold standard for companies that have their own infrastructure, know-how and sufficient volume. As of spring 2026, the following are practically usable: Mistral Small 4 (MoE, 119 billion parameters, Apache 2.0), GPT-OSS-120b, Gemma 4 in various sizes, Qwen 3.5 from 4B to 27B, Llama 3.3.
In practice, this enables setups that come as a noticeable surprise to the "but that only runs in the cloud" camp. A concrete example from practice that stands for the reality of many smaller offices: a 15-person tax advisory firm runs Qwen 3.5 9B on a Mac mini M4 Pro with 48 GB of unified memory for client document classification. Two days of setup with Ollama and n8n. No data leaves the building. No API costs. No GDPR discussion with the data protection officer.
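For anyone who wants to reproduce that pattern, the core is small. A minimal sketch against Ollama's local REST API – the model tag and the category list are illustrative assumptions, not the firm's actual configuration:

```python
# Minimal sketch: document classification against a local Ollama
# instance. Nothing leaves the machine. The model tag is illustrative --
# use whatever `ollama list` shows on the box.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"
CATEGORIES = ["annual accounts", "payroll", "VAT", "correspondence"]

def classify(document_text: str) -> str:
    prompt = (
        f"Classify this client document into exactly one category: "
        f"{', '.join(CATEGORIES)}. Reply with the category only.\n\n"
        f"{document_text[:4000]}"
    )
    response = requests.post(
        OLLAMA_URL,
        json={
            "model": "qwen2.5:7b",  # illustrative; the office in the example runs a newer Qwen
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["message"]["content"].strip()

print(classify("Umsatzsteuervoranmeldung für das dritte Quartal ..."))
```

The model call is the easy part; the orchestration around it – folder watching, filing, the n8n workflow – is typically where most of the setup time goes.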
One thing that gets overlooked: "open source" is not synonymous with "freely usable commercially for everything." FLUX.2 [klein] 9B is only available for commercial use via API; the 4B variant, by contrast, is under Apache 2.0. Llama licences have usage restrictions above a certain company size. Before any production deployment, a licence review belongs in the runbook.
📌 The five layers that need to line up
The question "is this AI provider GDPR-compliant" cannot be answered with a single sentence. It decomposes into five layers that need to be checked individually. That's why phrases like "Azure in Frankfurt is safe" don't hold up in serious reviews.
First layer: server location. Where is data physically processed during inference? The simplest question, and often the only one asked in superficial discussions. Necessary but not sufficient.
Second layer: provider headquarters. Which legal regime does the contracting company fall under? This is the critical point for CLOUD Act, FISA 702 and Executive Order 12333 analyses. A US company with a server in Frankfurt is not the same thing as an EU company with a server in Frankfurt. Jurisdiction determines who can enforce disclosure claims on what legal basis in the event of conflict.
Third layer: training policy. Are inputs used for model training? ChatGPT consumer: yes by default. API: no by default – but that holds only with a correctly configured account and a documented data processing agreement. Mistral La Plateforme and Le Chat Pro: no by default. OpenRouter: configurable per request. This layer often gets forgotten because it's invisible until a data protection incident occurs.
Fourth layer: retention policy. How long is data retained? OpenAI API: 30 days for abuse monitoring. Anthropic: 7 days. OpenRouter in ZDR mode: none. Normally acceptable, but subject to documentation and, in certain scenarios (e.g. law firms with client data), a conscious decision.
Fifth layer: sub-processor chain. Many supposed "EU providers" run OpenAI or other US services under the hood. The transfer happens anyway – just one layer deeper, and often without the customer seeing it. This is the layer experienced data protection officers check first: not the provider, but the sub-processors.
Only when all five layers are cleanly answered is the question actually answered. And this is exactly the kind of structured review that DPOs and auditors want to see. A one-dimensional "we have servers in Frankfurt" answer reliably triggers follow-up requests in regulated industries.
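In projects it helps to force this structure into the assessment artefact itself, so every candidate provider answers the same questions in the same order. A sketch of such a record – the field names, logic and example values are our own working convention, not a regulatory template:

```python
# Sketch: the five-layer review as a structured record. One instance
# per candidate provider; open_questions() lists what still needs
# documentation before the DPO conversation.
from dataclasses import dataclass, field

@dataclass
class ProviderAssessment:
    provider: str
    inference_region: str                 # layer 1: where is data processed?
    contracting_jurisdiction: str         # layer 2: whose law binds the contract partner?
    trains_on_inputs: bool                # layer 3: training policy
    retention_days: int | None            # layer 4: None means zero data retention
    sub_processors: list[str] = field(default_factory=list)  # layer 5

    def open_questions(self) -> list[str]:
        issues = []
        if self.contracting_jurisdiction != "EU":
            issues.append("third-country analysis: TIA plus SCCs required")
        if self.trains_on_inputs:
            issues.append("verify and document the training opt-out")
        if self.retention_days:
            issues.append(f"document the {self.retention_days}-day retention window")
        for sub in self.sub_processors:
            issues.append(f"check sub-processor for hidden transfers: {sub}")
        return issues

# A typical class-2 constellation: US provider, EU region, 30-day retention.
candidate = ProviderAssessment(
    provider="US frontier model via EU data centre",
    inference_region="eu-central-1",
    contracting_jurisdiction="US",
    trains_on_inputs=False,
    retention_days=30,
    sub_processors=["hyperscaler hosting entity"],
)
print("\n".join(candidate.open_questions()))
```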
📌 Gateways and orchestration: access to many models without losing control
In practice, you usually need access to multiple models. One for German-language dialogue, another for code, a third for long contexts or specific reasoning tasks. A pure single-provider strategy is rarely practical – and it also creates vendor lock-in that can become problematic under DORA and Section 25a KWG.
This is where gateway solutions come in – abstraction layers that sit in front of multiple providers.
OpenRouter, based in Delaware, offers access to roughly 300 models through a single endpoint. Its compliance features are less well-known than they should be. Zero Data Retention can be activated per request via a zdr parameter or at the account level – in which case requests are routed only to endpoints that guarantee no storage. The training opt-out explicitly excludes all providers that train on inputs, with separate settings for paid and free models. The EU in-region routing via the eu.openrouter.ai base URL processes prompts and completions exclusively within the EU – this feature must be requested for enterprise accounts. A provider whitelist allows explicit selection of which providers are eligible at all – e.g. only Mistral, DeepInfra EU and Fireworks.
This allows a setup that uses multiple models flexibly but is guaranteed to hit only EU providers with a no-training policy. It's the practical middle path between "we only use Mistral" and "we use whatever runs." Important to know: OpenRouter itself is a US company. Data transfers run on Standard Contractual Clauses under Art. 46 GDPR. For banks, insurers and public authorities with low risk tolerance, either the EU in-region routing or a different solution is required.
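What such a guard-railed request can look like in practice – a sketch based on the features described above; the exact parameter names and the enterprise-only EU endpoint should be verified against OpenRouter's current documentation before anything goes into production:

```python
# Sketch of a guard-railed OpenRouter call: EU in-region endpoint,
# zero data retention, no providers that train on inputs, and an
# explicit provider whitelist. Parameter names follow the features
# described in the text -- verify against current OpenRouter docs.
import requests

response = requests.post(
    "https://eu.openrouter.ai/api/v1/chat/completions",  # EU routing, enterprise accounts
    headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
    json={
        "model": "mistralai/mistral-large",  # illustrative model slug
        "messages": [{"role": "user", "content": "Summarise this clause ..."}],
        "provider": {
            "zdr": True,                       # zero-data-retention endpoints only
            "data_collection": "deny",         # exclude providers that store or train on inputs
            "only": ["mistral", "deepinfra"],  # whitelist of eligible providers
        },
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```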
Requesty from Frankfurt positions itself precisely as that alternative. Full EU hosting on AWS in eu-central-1, SOC 2 Type II, data processing agreement on request, support for Anthropic, Azure OpenAI, Google Vertex AI, AWS Bedrock, Mistral and Nebius – all exclusively via EU endpoints. Zero data retention, no training use, no caching. Integration is a drop-in replacement via the OpenAI SDK. For companies that, for good reasons, don't want a US gateway provider, this is currently the cleanest option.
Portkey is HIPAA and ISO 27001 certified and relevant for specific compliance scenarios, though less common in the German-speaking market.
A fourth variant that increasingly makes sense in larger organisations: building the internal AI gateway yourself. Keycloak for authentication, LiteLLM or Open WebUI as the gateway layer, the approved providers behind that. More effort, but full control over logging, routing, budget limits and compliance policies. For companies with their own platform team and multiple LLM use cases, this usually becomes the more economical path after 12 to 18 months.
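A sketch of what that gateway layer can look like with LiteLLM's router – one internal model alias in front of an EU API provider, with a self-hosted fallback, so client code never changes when the provider does. Model names, keys and hosts are placeholders; Keycloak authentication, logging and budgets would sit around this:

```python
# Sketch: internal gateway with LiteLLM. All client code calls the
# alias "eu-chat"; which EU backend serves it is a routing decision,
# which keeps the model layer replaceable by design.
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "eu-chat",  # internal alias used by all teams
            "litellm_params": {
                "model": "mistral/mistral-large-latest",
                "api_key": "<MISTRAL_API_KEY>",
            },
        },
        {
            "model_name": "eu-chat-local",
            "litellm_params": {
                "model": "ollama/qwen2.5:7b",             # self-hosted fallback
                "api_base": "http://llm.internal:11434",  # placeholder host
            },
        },
    ],
    fallbacks=[{"eu-chat": ["eu-chat-local"]}],
)

reply = router.completion(
    model="eu-chat",
    messages=[{"role": "user", "content": "Draft a short privacy notice."}],
)
print(reply.choices[0].message.content)
```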
❓ Industry perspectives – what's right for a bank would be overkill for a marketing SMB
The required compliance level depends heavily on the industry. Blanket statements like "the mid-market needs setup X" lead in practice to misinvestment in both directions – either unnecessary compliance overhead is built, or the regulatory bar is not met.
Banks operate under a tight regulatory environment of Section 25a KWG, DORA and the BaFin guidance. Typical high-risk applications are creditworthiness assessment, scoring, fraud detection and AML. BaFin expects Explainable AI (SHAP, LIME), complete documentation, segmentation of training and production environments, confidential computing and protection against model extraction and inversion attacks. Practical consequence: external US-cloud-based LLMs without deep integration are hard to justify under supervisory review. The best combination today is an EU setup (Mistral, Aleph Alpha/PhariaAI via STACKIT) plus self-hosted open-source models for sensitive areas. Credit decisions using opaque generative models are an absolute no-go.
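To make "Explainable AI" concrete: what supervisors want to see at the level of an individual decision is a per-feature breakdown that can go into the audit file alongside the score. A minimal sketch with SHAP on a toy tree model – data, features and the model itself are synthetic placeholders, not a scoring methodology:

```python
# Minimal sketch of per-decision explainability with SHAP. Synthetic
# data and a toy model; a production setup adds fairness checks,
# monitoring and human escalation paths on top.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
X = pd.DataFrame({
    "income": rng.normal(3500, 900, 500),
    "debt_ratio": rng.uniform(0.0, 1.0, 500),
    "years_employed": rng.integers(0, 30, 500).astype(float),
})
y = (X["debt_ratio"] + rng.normal(0, 0.1, 500) < 0.5).astype(int)  # toy target

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)

applicant = X.iloc[[0]]                          # one concrete decision
contributions = explainer.shap_values(applicant)[0]

# the per-feature breakdown that belongs in the audit file
for feature, value in zip(X.columns, contributions):
    print(f"{feature:>15}: {value:+.3f}")
```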
Insurers fall under Section 23 VAG (Insurance Supervision Act), DORA, IDD, the draft MaGo and the EIOPA guidelines. Typical use cases include input management (document classification), claims automation, pricing and underwriting support. Fairness and anti-discrimination create particular challenges – Art. 9 GDPR for sensitive attributes combined with the General Equal Treatment Act produces a complex review situation. BaFin has repeatedly emphasised that high-risk AI is being actively supervised here. Opaque generative models are particularly problematic from a supervisory perspective, and automated claims handling must be logged, traceable and backed by human escalation paths.
Healthcare. A C5 Type 2 attestation is effectively a market entry criterion. Special protection under Art. 9 GDPR for health data, professional secrecy under Section 203 of the Criminal Code and the Social Code Book V create the highest regulatory bar in the German market. Consequence: cloud-based LLMs only with a cleanly executed data processing agreement, DPIA and end-to-end encryption – or more commonly, self-hosted in the hospital's own data centre. Microsoft Dragon Copilot, available in Germany since 2025, is an interesting borderline case for clinical deployment but still falls under the CLOUD Act.
Public administration. The F13 project of the state government of Baden-Württemberg has been rolling out Aleph Alpha-based AI systems since mid-2024. STACKIT plus PhariaAI is the de facto German public sector standard. The competent oversight bodies are the BfDI (Federal Commissioner) for federal authorities and the respective state data protection commissioner (LfDI) for state-level authorities. Article 3(3) of the German Basic Law (prohibition of discrimination) and the Fundamental Rights Impact Assessment under Art. 27 AI Act apply particularly strongly here.
Mid-sized companies outside regulated industries. This is where the pragmatic middle path is realistic and economically sound. Mistral Le Chat Teams at around €24.99 per user per month as the standard tool. OpenRouter with EU in-region and ZDR for model variety when multiple models are needed. Local Qwen 3.5 or Gemma 4 for sensitive document work. Data processing agreement under Art. 28 GDPR for every tool deployed, training opt-out verified. And: the training obligation under Art. 4 AI Act since February 2025 is relevant for every company, not just high-risk users.
📌 Three concrete setups – with a sample calculation
Three setups have emerged in practice that work for different maturity levels and industries.
Setup A – maximum sovereignty. Direct contract with Mistral or Aleph Alpha/PhariaAI via STACKIT. Images via FLUX (EU API or locally in the 4B variant). Document work as on-premises RAG with Qwen 3.5 or Gemma 4. Translation with DeepL Pro on EU hosting. Gateway built internally or via Requesty. Additional controls via BYOK, confidential computing and end-to-end encryption. The right choice for banks, insurers, healthcare and public authorities. Limited model access, but straightforward approval by legal and the audit committee.
Setup B – flexibility with guardrails. OpenRouter with a strict filter: EU in-region, zero data retention, training opt-out, provider whitelist on EU houses. Alternative: Requesty for strict EU-only requirements. Contractual situation documented with data processing agreement, Standard Contractual Clauses and Transfer Impact Assessment. Coverage includes Mistral, Aleph Alpha, FLUX, Claude (via Bedrock Frankfurt with ZDR) and Llama via DeepInfra EU. Access to many models, but only those that meet requirements. For most mid-sized companies outside heavily regulated industries, the sensible middle path.
Setup C – self-hosted open source. Own Kubernetes environment or hosted at an EU provider like STACKIT, OVHcloud or Hetzner. Models: Mistral Small 4, Gemma 4, Qwen 3.5, Llama 3.3, GPT-OSS-120b. Hardware depending on use case: from Mac mini M4 Pro in small offices to GPU clusters in large enterprises. Maximum effort, maximum control. Economically justified only above certain volumes, or for particularly sensitive data.
For most growing organisations, a hybrid variant is the best answer in practice. A sample calculation for a 15-person company: ten Claude Team licences at USD 25 per user per month (minimum five seats, via Bedrock Frankfurt with ZDR) for staff with demanding reasoning tasks. Five Mistral Le Chat Pro licences at €14.99 per user per month for standard office workflows. A small server with Gemma 4 running locally (one-time cost around €500) for sensitive document work. FLUX API for image generation. The total often comes in below the cost of 15 ChatGPT Plus seats – with a significantly better compliance position.
The exact composition varies. But the principle holds: not one provider for everything, but providers matched to the protection requirement and the task. Less elegant to present, but more defensible in review.
❓ What doesn't work and where the honest limits lie
First: EU models still have a gap to GPT-5 and Claude Opus on certain frontier benchmarks. For complex reasoning tasks, very long contexts, or exotic programming languages, the difference is noticeable. For 80 to 90 percent of typical mid-market use cases – summarisation, document analysis, standard code, email drafts, translation – the gap is barely relevant in practice. But anyone needing state-of-the-art for research and engineering tasks buys themselves a performance cap with a pure EU setup today.
Second: the cost and effort trade-off is real. A fully sovereign setup with Aleph Alpha PhariaAI, a dedicated data centre and an internal gateway costs many times more in initial effort and operations than "OpenAI API with Transfer Impact Assessment." For a 15-person tax firm, this can be the wrong investment – even if it would be the most regulatorily pristine. The art is to hit the protection level the industry requires, not the theoretical maximum.
Third: self-hosting isn't "free" just because there are no API fees. Staff, hardware depreciation, maintenance, security patches, model updates and the know-how needed to run a productive LLM stack all have a price. In projects, we repeatedly see self-hosting business cases calculated too optimistically, because ongoing operational effort is underestimated. In smaller organisations, there's often no second person who can step in when the one responsible is sick or on holiday.
Fourth: the Data Privacy Framework is legally valid but politically and structurally fragile. The Data Privacy Framework (DPF) is the current EU-US adequacy decision allowing US companies to receive personal data of EU data subjects without Standard Contractual Clauses – provided they self-certify compliance. The EU General Court confirmed it in September 2025 in the Latombe case, but the appeal to the European Court of Justice is pending. The Trump administration dismissed the three Democratic members of the five-member Privacy and Civil Liberties Oversight Board in January 2025 – the very body with central responsibilities in the DPF redress mechanism. The European Data Protection Board had already called for a reassessment within three years back in November 2024. Max Schrems has announced a Schrems III challenge. Anyone relying exclusively on the DPF is building on demonstrably unstable foundations. Standard Contractual Clauses plus Transfer Impact Assessment remain mandatory as a second safety net.
Fifth, and most importantly: case law and supervisory practice keep evolving. What is "in principle feasible" today may be assessed differently in 18 months – see Microsoft 365. Anyone building an AI architecture that depends on a single provider is making themselves dependent on political decisions in Brussels, Washington and Luxembourg that nobody can predict precisely. Replaceability of the model layer should be a design principle from day one.
❓ What really matters in 2026
The good news from the current picture: anyone wanting to deploy AI in 2026 is no longer facing a binary choice between "accept compliance risk" and "abstain." The EU providers have grown up. Mistral is catching up with the US giants in enterprise deployment – not on frontier benchmarks, but in rolled-out production use. Aleph Alpha has reinvented itself and is now in the field nationwide through the STACKIT partnership. The EU AI Act, despite all criticism of details, provides a framework that creates legal certainty. The DSK has issued guidance three times, BaFin has positioned itself. You no longer have to guess what applies.
The key message for companies that have so far held off on AI for data protection reasons: the path is there. You just have to walk it cleanly.
And the strategically most important insight for anyone making architectural decisions right now: compliance must not be an afterthought. The model layer can be swapped. Legal foundations cannot. Anyone who today has a cleanly answered five-layer analysis, treats model replaceability as a design principle and knows the protection level of their industry has an architecture that will carry them through the next five years. Anyone relying on a single provider and a single legal regime has made an architectural promise nobody can keep.
The question is no longer whether mid-sized companies in the German-speaking region should deploy AI. The question is which architecture. And here, in 2026, we have significantly more options than the market discussion would suggest.
---
If you're currently planning AI projects in your company, or putting an existing setup through a compliance review, the five layers are a good starting point for the internal discussion. If concrete questions come up along the way – particularly around industry-specific positioning – feel free to reach out.
---
📌 A note on the character of this article
This post is a technical and strategic situational report from project practice – not legal advice. The positioning presented here on GDPR, EU AI Act, CLOUD Act, BaFin guidance and supervisory practice reflects research as of April 2026. The legal situation continues to develop quickly – both through new rulings (the Schrems III proceedings, the DPF appeal to the ECJ, further DSK guidance) and through shifting supervisory practice. Individual cases may be assessed differently than the general patterns described here.
Before making concrete architectural, contractual or procurement decisions based on this article, please coordinate with your data protection officer and legal department – and in regulated industries, additionally with compliance and, where appropriate, external counsel specialising in IT law. For banks, insurers and healthcare providers, additional frameworks (Section 25a KWG, Section 23 VAG, DORA, Section 203 of the Criminal Code, BSI IT baseline protection, C5) apply alongside GDPR, and their interaction has to be assessed case by case. This article cannot replace that review and is not intended to. It's intended to prepare it better.