
Five working days. That’s all the European Commission gave stakeholders to review a 180-page draft that could fundamentally reshape how every AI company in the world handles personal data. The EU Digital Omnibus, quietly proposed in November 2025, doesn’t just tweak GDPR — it rewrites the rules for AI training, narrows what counts as personal data, and potentially delays enforcement of the AI Act’s toughest provisions. If your company touches AI and European data, the GDPR AI amendments 2026 package demands your immediate attention.

After months of behind-the-scenes lobbying and a growing tension between Europe’s privacy-first reputation and its AI competitiveness ambitions, the Commission has laid its cards on the table. The result? A package that privacy advocates call a “sellout” and industry players cautiously welcome. Let’s break down the five changes that will define how AI companies operate in Europe for the next decade.
1. Article 88c: Legitimate Interest for AI Training — GDPR AI Amendments 2026’s Biggest Shift
The single most consequential change in the Digital Omnibus is the proposed Article 88c, which would create an explicit legitimate interest legal basis for processing personal data to develop and train AI models. Until now, companies have relied on a patchwork of legal arguments — consent, legitimate interest balancing tests, or the research exemption — to justify scraping and processing massive datasets for model training.
Article 88c changes the game by codifying what Big Tech has been arguing for years: that AI model development is itself a legitimate interest that can override individual data protection rights under certain conditions. The article introduces specific safeguards — anonymization requirements post-training, data minimization obligations, and mandatory transparency disclosures — but critics argue these guardrails have significant gaps.
The precedent was set in May 2025 when the Irish Data Protection Commission (DPC) approved Meta’s plan to train its large language models using European user data, with an opt-out mechanism as the primary safeguard. Article 88c essentially codifies and broadens this approach across the entire EU.
2. Narrowed Personal Data Definition: Pseudonymous Data Loses Protection
Perhaps the most technically subtle but far-reaching change is the proposed narrowing of what constitutes “personal data” under GDPR. Currently, pseudonymous identifiers — hashed email addresses, device IDs, cookie identifiers — are considered personal data because they can be linked back to an individual with additional information. The Digital Omnibus proposes that data pseudonymized to the point where re-identification requires “disproportionate effort” would fall outside GDPR’s scope entirely.
This is not a minor definitional adjustment. As TechPolicy.Press reported, it would effectively remove protections from vast swathes of data currently governed by GDPR. For AI companies, this means training datasets containing pseudonymous identifiers could potentially be processed without any GDPR obligations — no consent required, no data subject rights, no breach notification requirements.
The EDPB and EDPS have explicitly flagged this as a critical concern, warning that the “disproportionate effort” standard is vague and could be exploited to reclassify large volumes of clearly personal data as non-personal.
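To see why the linkability question matters, here is a minimal sketch of pseudonymization as it’s typically done today — hashing an email address into an opaque token. The function name and the sample address are hypothetical; the point is that the hash hides the address on its face, yet anyone holding the original email (or a candidate list of emails) can re-link it trivially, which is exactly why GDPR currently treats such tokens as personal data:

```python
import hashlib

def pseudonymize(email: str, salt: str = "") -> str:
    """Turn an email address into a pseudonymous identifier via SHA-256."""
    return hashlib.sha256((salt + email.lower()).encode("utf-8")).hexdigest()

# The token looks anonymous...
token = pseudonymize("alice@example.com")

# ...but hashing is deterministic: anyone who can guess or enumerate
# candidate emails can recompute the hash and re-identify the person.
assert pseudonymize("alice@example.com") == token
```

The “disproportionate effort” test would move this line: the same deterministic token could be reclassified as non-personal if the holder claims re-identification is too costly, even though, as shown above, re-linking only requires recomputing one hash.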
3. AI Act Enforcement Delays: High-Risk Deadline May Slide to December 2027
The AI Act’s high-risk system requirements were supposed to bite on August 2, 2026. The Digital Omnibus signals that this deadline may be pushed back to December 2027 — a 16-month extension that has both practical and strategic implications. Practically, many companies are nowhere near ready. A recent industry analysis found that the dual compliance burden of GDPR and the AI Act has left many organizations struggling to reconcile overlapping but inconsistent requirements around risk assessment, transparency, and human oversight.
Strategically, the delay reveals a deeper tension. Europe rushed to be first with comprehensive AI regulation. Now, with the US taking a deregulatory approach under a December 2025 executive order that challenged state-level AI laws, and with China quietly building its own AI governance framework, the Commission appears to be recalibrating. The concern among privacy advocates is that “delay” is code for “water down.”
4. New Opt-Out Rights That May Not Actually Work
The Digital Omnibus introduces a new opt-out mechanism for individuals whose data is used in AI training. On paper, this sounds like a reasonable compromise — you can’t stop companies from using data for AI, but you can opt out. In practice, as the EDPB’s Joint Opinion of February 11, 2026 pointed out, opt-out mechanisms fundamentally don’t work for data that has already been scraped from the internet.
Consider the mechanics: if a company has already ingested your blog posts, social media comments, or forum contributions into a training dataset, how do you opt out after the fact? The data has already been processed. The model weights have already been influenced. Even if the raw data is deleted, the patterns extracted from it persist within the model. Max Schrems, the privacy activist whose legal challenges have shaped European data protection law, called this approach “Trump’ian lawmaking” — giving the appearance of rights while making them practically unenforceable.
The proposed new Article 9(2)(k) adds another layer of complexity by creating an exemption for processing special-category data (health, biometrics, political opinions) for AI training purposes, subject to “appropriate safeguards.” Critics argue this essentially creates a backdoor through one of GDPR’s strongest protections.

5. EDPB and EDPS Push Back: Regulators Sound the Alarm
The most significant institutional response came on February 11, 2026, when the EDPB and EDPS issued a Joint Opinion that amounted to a detailed rejection of several key Digital Omnibus proposals. The regulators didn’t mince words: the narrowing of the personal data definition was “premature and potentially harmful,” the opt-out mechanism was “insufficient for data subjects whose information has already been collected,” and the legitimate interest basis for AI training lacked adequate safeguards.
What makes this pushback significant is that the EDPB and EDPS are not outsiders. These are the EU’s own data protection authorities. Their opposition creates a genuine constitutional tension within the EU governance framework. The Commission is essentially asking the legislative bodies to override the expert judgment of the institutions specifically created to protect data rights.
The Joint Opinion also raised procedural concerns about the pace of legislative action. The Commission provided only five working days for stakeholder review of a 180+ page draft — a timeline that the regulators called “incompatible with meaningful democratic scrutiny.”
What This Means for Your Organization
The Digital Omnibus is still a proposal, not law. But the direction is clear. European policymakers are actively seeking to make GDPR more AI-friendly, even at the cost of weakening some privacy protections. Companies should prepare for a regulatory environment where AI training gets broader legal cover, but enforcement penalties remain severe — up to 35 million euros or 7% of global turnover, whichever is higher.
Meanwhile, the global picture is fracturing further. California’s SB 243 AI chatbot regulations took effect in January 2026, and multiple US states are pursuing their own frameworks even as the federal government pushes for preemption. Companies operating across jurisdictions face a compliance patchwork that will only grow more complex.
The smartest approach right now is to build your AI governance framework around the strictest applicable standard, monitor the Digital Omnibus legislative process closely, and ensure your data processing documentation can adapt quickly when the final rules drop. Whether the August 2026 deadline holds or slips to 2027, the companies that have their compliance architecture in place will have a significant competitive advantage over those scrambling at the last minute.
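The “strictest applicable standard” approach can be made concrete: for each compliance requirement, survey what every jurisdiction you operate in demands, then adopt the toughest level as your baseline. A toy sketch — the requirement names, jurisdictions, and strictness scores are entirely hypothetical and illustrative, not legal advice:

```python
from typing import Dict

# Hypothetical strictness scores (higher = stricter) per requirement,
# per jurisdiction. Real assessments need counsel, not a lookup table.
REQUIREMENTS: Dict[str, Dict[str, int]] = {
    "opt_out_for_training":  {"EU": 3, "US_CA": 2, "US_federal": 1},
    "breach_notification":   {"EU": 3, "US_CA": 3, "US_federal": 2},
    "special_category_data": {"EU": 3, "US_CA": 1, "US_federal": 1},
}

def strictest_baseline(reqs: Dict[str, Dict[str, int]]) -> Dict[str, int]:
    """For each requirement, adopt the strictest level found in any
    jurisdiction where the organization operates."""
    return {name: max(levels.values()) for name, levels in reqs.items()}
```

The design choice here is defensive: a single baseline built from per-requirement maxima means a deadline slipping in one jurisdiction (say, the AI Act moving to December 2027) changes your timeline, not your architecture.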
Navigating AI compliance across EU and global frameworks is complex — and the rules are changing fast. If you need help building a data governance strategy that stays ahead of the regulatory curve, let’s talk.



