The EU’s Tentative Deal to Simplify AI Rules for Big Tech

After nine hours of negotiations, the EU reached a provisional deal to reshape the world's most ambitious AI law. Supporters call it practical. Critics call it a capitulation. The truth sits somewhere in between.

Z Patel

The EU AI Act simplification deal is the most significant revision to European AI regulation since the Act entered into force in August 2024. EU Council member states and European Parliament lawmakers concluded nine hours of negotiations to reach a provisional agreement on the Digital Omnibus on AI, a package of targeted amendments designed to simplify compliance, reduce administrative duplication, and delay certain enforcement deadlines. The deal delays high-risk AI system obligations by up to 16 months, extends regulatory exemptions to a group of smaller companies, prohibits non-consensual AI-generated sexual imagery, and clarifies how the EU AI Office supervises general-purpose AI models. The agreement still requires formal endorsement from EU governments and the Parliament before it becomes law.

What’s Happening & Why It Matters

What the Digital Omnibus on AI Actually Changes

The EU AI Act simplification deal began as a proposal from the European Commission in November 2025. The Commission positioned it as one of ten “Omnibus” packages aimed at reducing regulatory complexity across sustainability, investment, agriculture, digitalisation, defence, and AI. The AI-specific package responded to a concrete industry complaint.

Companies operating in sectors like medical devices, machinery, toys, and watercraft faced an overlap problem. The EU AI Act had created new AI-specific compliance obligations — but those companies already operated under detailed sectoral legislation. Dual compliance was generating a significant administrative burden without producing clear additional safety benefits.

The provisional deal addresses that problem through a formal mechanism for resolving conflicts between the AI Act and sectoral laws — allowing companies to comply with whichever framework is more specific rather than both simultaneously.

EU Council Presidency holder Marilena Raouna — Cyprus’s Deputy Minister for European Affairs — confirmed the intention. “Today’s agreement on the AI Act significantly supports our companies by reducing recurring administrative costs,” she said.

The Core Change: High-Risk AI Deadline Delayed to December 2027

The most commercially significant element of the EU AI Act simplification deal is the delay to high-risk AI system obligations. Previously, key provisions governing high-risk AI systems — those involving biometrics, critical infrastructure, law enforcement, and employment — were scheduled to apply from 2 August 2026. The provisional deal pushes that deadline to 2 December 2027, an extension of 16 months.
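As a quick sanity check on the timeline, the gap between the original and the new deadline works out to exactly 16 months. A minimal Python sketch of that date arithmetic (the dates are taken from the deal as reported; nothing else is assumed):

```python
from datetime import date

# Original high-risk deadline vs. the one in the provisional deal.
old_deadline = date(2026, 8, 2)
new_deadline = date(2027, 12, 2)

# Whole-month difference: both deadlines fall on the 2nd of the month,
# so a year/month delta captures the gap exactly.
months = (new_deadline.year - old_deadline.year) * 12 \
    + (new_deadline.month - old_deadline.month)
print(months)  # 16
```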

European Parliament rapporteur Arba Kokalari, representing the Internal Market committee, directly justified the delay. “We are not weakening any safety rules; we are clarifying the rules for companies in Europe,” she said. “The current state is that companies are confused about whether they should follow the AI Act or sectoral legislation. There is legal and commercial uncertainty, which is what we are trying to prevent.”

The delay applies specifically to systems where the European Commission has not yet confirmed that the required harmonised standards and testing tools are available. The Commission can adjust the timeline by up to 16 months once it confirms those standards exist, meaning the delay is conditional rather than automatic.

Who Benefits: Extending SME Exemptions to Larger Companies

The EU AI Act simplification deal expands the scope of regulatory exemptions. The original AI Act granted reduced compliance requirements to small and medium enterprises (SMEs). The provisional deal extends most of those same exemptions to small mid-cap companies (SMCs) — businesses with between 250 and 1,500 employees. The deal represents a meaningful expansion.

The EU’s SME definition covers companies with fewer than 250 employees. Adding SMC coverage brings several thousand additional European companies into the lighter-touch compliance tier. The measure responds directly to the competitiveness concerns noted by European leaders in the Draghi Report on EU competitiveness, published in September 2024. That report identified regulatory complexity as a structural disadvantage for European companies competing against US and Asian rivals.

New Prohibition: Non-Consensual AI Sexual Imagery

The provisional deal adds a new provision not present in the original AI Act. Co-legislators agreed to prohibit AI practices involving the generation of non-consensual sexual and intimate content. This is the first time the AI Act explicitly addresses deepfake pornography as a prohibited AI practice rather than leaving it to national criminal law or the Digital Services Act.

The addition is directly relevant to high-profile cases, including the AI-generated images of Italian Prime Minister Giorgia Meloni that circulated in late April 2026 — a case TF covered as part of the AI accountability wave of that week. The prohibition responds to the political pressure generated by similar incidents involving public figures and private individuals across multiple EU member states. The explicit ban gives the EU AI Office direct enforcement authority over the category, rather than leaving it to individual national regulators interpreting general harm provisions.

Sandbox Deadline Extended and Deepfake Transparency Accelerated

Two further timeline changes appear in the provisional deal. The deadline for EU member states to establish AI regulatory sandboxes — controlled testing environments for new AI systems — moves from 2 August 2026 to 2 August 2027. Regulatory sandboxes are a critical infrastructure component of the AI Act: they allow companies to test AI systems under regulatory supervision before full market deployment. The one-year extension gives national regulators more time to build the institutional capacity required to run those sandboxes.

In the opposite direction, the provisional deal shortens the transparency window for labelling AI-generated content.

Providers of AI systems that generate synthetic content — including text, images, audio, and video — must implement transparency solutions within three months, reduced from the previous six-month grace period. The new deadline is 2 December 2026. The change directly affects AI image generators, video synthesis tools, and large language models that produce content users may mistake for human-generated work.

The AI Office’s Powers Are Clarified — and Expanded

The provisional deal updates the division of supervisory authority between the EU AI Office and national regulators. The AI Office holds primary jurisdiction over general-purpose AI models — the foundational large language models and multimodal systems produced by Google DeepMind, Anthropic, OpenAI, Meta, and others.

The deal clarifies when the AI Office’s jurisdiction applies and when national authorities retain competence — specifically carving out law enforcement, border management, judicial authorities, and financial institutions as areas where national regulators are the primary supervisory body.

The AI Office’s powers are simultaneously reinforced. The deal gives the Office explicit authority to supervise AI systems where the same provider develops both the general-purpose model and the downstream application built on it — addressing a gap in the original framework where integrated systems fell between regulatory responsibilities.

Civil Society Perspective: A Capitulation to Big Tech?

The EU AI Act simplification deal has attracted sharp criticism from civil society and some academic researchers. Critics argue that the delay to high-risk AI obligations reduces pressure on the largest AI developers at precisely the moment enforcement would have begun to bite.

The extension of SMC exemptions, they argue, creates a middle tier of companies that operate under lighter-touch rules than the Act originally envisioned. The machinery exclusion — removing a category from the AI Act’s scope because it already has sectoral legislation — sets a precedent that other sectors may invoke.

The concern is structural. Once the principle of sectoral overlap as justification for AI Act exclusion is established, industries facing compliance costs have a clear template for seeking their own exclusions. That dynamic could progressively hollow out the Act’s scope without any single decision constituting a decisive departure from its original intent. European Parliament members from the Greens and left groups have described the deal as “caving to Big Tech pressure.” The Commission and Council maintain that it is a necessary clarification that preserves the Act’s safety architecture intact.

TF Summary: What’s Next

The provisional deal requires formal endorsement from EU governments and the European Parliament before it is binding law. That process involves a legal and linguistic revision followed by a formal vote, expected to conclude within the next few weeks. The AI Office will also need to publish updated guidance on the clarified supervisory framework, the implications of the sandbox extension, and the accelerated deepfake transparency deadline. Companies with high-risk AI systems currently preparing for the 2 August 2026 deadline should note the conditional nature of the delay: the extension applies only where the Commission has not yet confirmed that the required harmonised standards and testing tools are available.

MY FORECAST: The EU AI Act simplification deal will pass formal endorsement without significant revision. The nine-hour negotiation has already produced a text that reflects genuine compromise between the Parliament’s safety-preservation instincts and the Council’s competitiveness priorities. The more interesting development to watch is the machinery exclusion precedent. Within 12 months, at least two additional industrial sectors — most likely automotive and healthcare devices — will formally petition for equivalent exclusions on the grounds of sectoral overlap. If those petitions succeed, the AI Act’s scope will have narrowed meaningfully without any formal amendment to its core prohibitions or obligations. That outcome would validate critics’ concerns while remaining technically consistent with the Commission’s claim that safety rules are undiminished.


