The European Parliament's civil liberties committee approved a draft amendment this week that would prohibit the use of AI systems to generate realistic sexual images of identifiable people without their consent. The timing is notable: it comes just days after Anthropic announced it would sign the EU Code of Practice, the voluntary framework that sits alongside the more binding AI Act. Together, the two moves sketch what AI regulation in Europe actually looks like in practice: a hard statutory layer targeting the most harmful uses, and a softer voluntary layer bringing frontier AI companies into a structured accountability relationship with regulators.
The deepfake ban is narrower than it might sound. The proposed amendment does not ban all AI-generated imagery; it targets non-consensual synthetic sexual content specifically, closing a loophole that critics had identified in the original AI Act's treatment of manipulated media. Several EU member states have already addressed the issue through national legislation, but a common EU-level standard has been missing. An EU-wide prohibition would create a single enforceable baseline that applies to any platform with European users, including those based in the US.
Anthropic's Code of Practice commitment is a different kind of move. The Code is a voluntary instrument developed by the EU AI Office with input from AI companies, civil society, and researchers. Signing it does not create the same legal obligations as the AI Act, but it does lock in a set of transparency, safety, and incident-reporting commitments that will be publicly visible and open to outside scrutiny. For a company that has positioned itself as the safety-focused lab in a competitive field, signing was arguably the minimum expected move. The more interesting question is what happens when the commitments in the Code are tested against real deployment decisions.
What makes these two developments worth reading together is what they reveal about the EU's regulatory strategy. Rather than trying to write comprehensive rules before knowing what AI systems would actually do, the EU has built a layered structure: strict prohibitions at the extremes, risk-tiered requirements for high-stakes applications, and voluntary frameworks at the frontier, where the technology is still evolving too quickly for detailed rules. The deepfake ban amendment sits in the first layer. The Code of Practice sits in the third. The middle layer, with its deadlines for high-risk AI deployment, arrives in full by August 2026.
The contrast with the US is pointed. Washington has spent much of March debating a broad federal AI framework and whether it should preempt state legislation. The content of that framework remains largely unspecified. Europe, by contrast, is filling in specific provisions of a law that has already been enacted, adding targeted amendments, and signing frontier companies into accountability structures. The EU approach has attracted criticism, particularly from AI companies worried that the compliance burden will disadvantage European developers relative to American and Chinese competitors. But it does at least have the advantage of specificity: at any given point, you can look at the AI Act and tell exactly what it requires and when.
The deepfake question is also one where the gap between policy and enforcement is already visible. Non-consensual synthetic sexual imagery is widespread and has already caused serious harm to real people, disproportionately women. Passing a law is not the same as removing the content or identifying the perpetrators. The EU amendment would require platforms to build detection and removal systems, a harder technical and operational problem than the legislative text tends to acknowledge. Whether the law produces actual accountability or mainly symbolic compliance will depend on implementation choices that have not yet been made.