The Trump administration released its national AI policy framework on March 20, and the immediate reaction split cleanly along the lines you would predict. CNBC covered it as a significant federal move: a six-pronged legislative outline whose planks include safety guardrails, child-safety rules, data-centre permitting, energy standards, and a clear federal preemption of state AI laws. Bloomberg's opinion desk came to a very different conclusion, running a piece titled "The White House's AI Plan Is Anything But" three days later. Both accounts describe the same document, and the gap between them is the most informative part of the story.
What the framework does is establish intent. It proposes that federal law should be both the floor and the ceiling for AI regulation in the United States, preventing states from running their own programmes. This is the most practically significant element, because it directly targets laws like California's SB 53 and New York's RAISE Act, which impose transparency and impact-assessment requirements on AI developers that the industry has lobbied against. If the preemption holds up in Congress and in court, it effectively ends a multi-front regulatory battle and replaces it with a single negotiation at the federal level.
What the framework does not do is specify much about how any of its proposals would actually work. The Bloomberg critique is that the document reads more like a list of goals than a policy: broad commitments to safety and security without enforcement mechanisms, permitting reform without timelines, energy standards without numbers. That is a fair read of its current form. Legislative outlines are not laws, and the distance between a White House framework and an enacted bill is exactly where most AI governance proposals have died in the past two years.
The state-versus-federal tension is not new, but it has grown sharper. In December, Trump signed an executive order conditioning broadband infrastructure funding on states standing down from AI rulemaking. That was a harder-edged instrument than this framework. States have not stood down. California's attorney general has indicated the state will continue enforcing its existing AI transparency requirements regardless of federal signals. The EU AI Act, whose provisions are now largely in force, is meanwhile creating its own baseline for any company doing business in Europe, regardless of what Washington does.
The framework's silence on a few things is worth noting. There is nothing substantive on open-source AI, which is the category where federal policy could have the most direct effect on both innovation and risk. There is nothing on export controls, despite ongoing restrictions on chip sales to China being a live and contested issue. And there is very little on the labour implications of AI adoption, which several states were beginning to address in their own legislation.
The honest assessment is that the framework reflects where the political coalition behind it can agree, which is mostly on what it opposes: state regulatory fragmentation, and any regulatory posture that could be characterised as anti-innovation. What it is for, beyond industry-friendly federal preemption, is harder to read from the text. Whether that changes when the framework moves to actual legislation, and how much the lobbying battle over the details reshapes it, is the story to watch through the rest of 2026.