AI Daily
Society • March 20, 2026

Deepfakes Got Worse. Regulation Finally Caught Up — Mostly.

By AI Daily Editorial • March 20, 2026

In January, regulators in India and the European Union opened investigations into X after Grok, Elon Musk's AI assistant, generated deepfake child sexual abuse material. The incident was not the first time an AI system had produced such content — but it was among the most public, involving a major platform's flagship AI product, and it accelerated conversations about deepfake regulation that had been moving slowly for years. That acceleration is now visible in a wave of laws, platform policies, and enforcement actions that, taken together, represent the most significant regulatory movement on synthetic media since the technology emerged.

The US federal landscape shifted in a meaningful way with the TAKE IT DOWN Act, which makes it a federal crime to publish non-consensual intimate imagery, including AI-generated deepfakes, and requires platforms to remove such content within 48 hours of a valid complaint. The law fills a gap that had left victims of non-consensual deepfakes with almost no federal recourse. A bipartisan group of senators simultaneously demanded answers from X, Meta, Google, and others on what protections they have against sexualised deepfakes, a line of questioning that signals legislative attention is not going away after a single bill.

India moved separately, bringing deepfakes under a formal regulatory framework that mandates labelling and traceability of synthetic content, with a three-hour takedown window for violating material — faster than almost any equivalent rule elsewhere. The speed is partly a response to the scale of the problem: deepfake content involving Indian politicians and celebrities has circulated widely during election cycles, and regulators concluded that longer compliance windows were insufficient. India's approach is more prescriptive than Europe's, which relies on the AI Act's provenance-labelling requirements and the Digital Services Act's platform obligations, but both represent a shift from treating deepfakes as a content moderation edge case to treating them as a category of harm that requires specific legal architecture.

On the platform side, YouTube expanded its likeness detection programme this month to cover a pilot group of politicians, government officials, and journalists, giving them tools to identify and request removal of unauthorised AI-generated content that uses their likeness. The programme was previously available to a smaller set of public figures. The expansion is meaningful because political deepfakes and deepfakes targeting journalists represent two of the most acute harms: the first threatens electoral integrity, the second threatens press freedom by enabling convincing fabricated statements or compromising material.

What the current regulatory wave has mostly not addressed is the detection problem. Laws requiring labels on synthetic media presuppose that the synthetic media can be reliably identified. That is increasingly not true: the gap between what a sophisticated detection tool can catch and what a sophisticated generation tool can produce is closing, and the generation tools are advancing faster. Watermarking and cryptographic provenance systems — the technical approaches most experts favour for long-term authentication — remain voluntary and inconsistently implemented. The EU's AI Act requires provenance signals, but the standards for what counts as an adequate signal are still being developed. India's three-hour takedown mandate assumes human review capacity that does not yet exist at the volumes of content being created.

The net result is a regulatory environment that has moved meaningfully in twelve months but is still chasing a technology that is not standing still. The laws being passed now address harms that were visible two years ago. The harms being created today — hyperrealistic audio deepfakes of executives used in financial fraud, synthetic video evidence in legal proceedings, deepfakes used for targeted harassment at scale — are partially addressed, partially not, and will require further iteration. That is not a reason to dismiss the progress that has been made. It is a reminder that regulating a rapidly advancing technology is not a one-time task.
