AI Daily
Regulatory scales tilting under pressure from industry lobbying
Policy • May 8, 2026

Europe Blinks: The AI Act's Great Rollback

By AI Daily Editorial • May 8, 2026

After years of debate, careful drafting, and considerable diplomatic friction, the European Union's AI Act was supposed to be the world's most comprehensive framework for regulating artificial intelligence. On Thursday morning, before many of its key provisions ever took effect, EU legislators agreed to water it down. The rules on high-risk AI systems, originally due to apply in August 2026, have been postponed until December 2027. Industrial machinery has been largely exempted. The window for watermarking AI-generated content has been stretched. What began as a landmark text is becoming something more modest.

The deal, struck after late-night negotiations between the European Parliament and EU member states, is being framed officially as a "simplification" effort. The European Commission's "digital omnibus" package had already set the stage for rolling back rules across several technology domains, and AI was caught in that same current. The official language speaks of "legal certainty" and "smoother harmonised implementation." The less diplomatic reading is that industry lobbied hard, and industry won.

Germany's influence was particularly visible. Top officials, including Chancellor Friedrich Merz, had pushed explicitly to keep industrial giants like Siemens and Bosch outside the AI Act's scope. They succeeded. Under the new agreement, AI systems used in industrial machinery only need to comply with existing sectoral safety rules, not the additional layer of AI-specific requirements. This is a significant carve-out for one of Europe's largest industrial economies.

The pressure did not come only from European capitals. Reports indicate that the US government had also been pushing Brussels for further deregulation, and the political alignment between the Trump administration's approach to AI oversight and the EU's retreat is not coincidental. The pattern emerging across both sides of the Atlantic is one where governments announce ambitious AI governance frameworks, then quietly retreat from them when the economic stakes become clear.

Halfway around the world, Colorado is playing out the same story at state level. Colorado Senate Bill 24-205, a first-in-the-nation framework requiring AI developers and businesses to take reasonable steps to prevent discrimination in employment, housing, healthcare, and insurance decisions, is being replaced by something far narrower. Senator Robert Rodriguez, who sponsored the original legislation, described the retreat with unusual candour. "We could have probably built a wing here at the Capitol with the amount of money that's been spent on this topic," he told fellow lawmakers. The new bill, SB-189, strips out most of the substantive protections and replaces them with a disclosure requirement. Industry spending did the rest.

There is, to be fair, one concrete advance in the EU agreement: a ban on so-called "nudifier" applications, which use AI to generate non-consensual explicit imagery of real people. The prohibition covers placing such systems on the market, deploying them, and creating child sexual abuse material via AI. This is not a small thing; these tools have caused real harm to real people. The fact that it took a comprehensive regulatory framework to finally produce this specific, unambiguous rule says something about how difficult it is to get AI governance to bite on anything specific.

Liberal and centrist lawmakers in the European Parliament have already signalled they want to go further, framing the current deal as insufficient deregulation. Industry lobby groups are making the same argument from a different angle. The combined pressure is pulling the AI Act toward what critics describe as a framework with the form of regulation and the substance of self-governance.

What this week's events reveal is the gap between announcing AI rules and enforcing them. The EU AI Act was always going to face implementation challenges, but few observers expected it to be renegotiated before it was even tested. The same economic pressures that make AI adoption feel inevitable to companies make AI regulation feel costly to the governments competing for investment and industrial advantage. Neither Brussels nor Denver has found a way to hold that tension. The result, in both cases, is rules that are narrower than the problem they were designed to address.
