Connecticut's House of Representatives voted 131 to 17 on Friday to pass Senate Bill 5, one of the most comprehensive artificial intelligence regulatory frameworks yet enacted by any US state. The bill, which earlier passed the Senate by a 32-4 margin, now goes to Governor Ned Lamont, who has signalled he plans to sign it. The vote caps years of false starts, a near-miss last session, and a reversal by a governor who had spent most of that time threatening to veto exactly the kind of legislation he is now preparing to endorse.
The law, rebranded the Connecticut Artificial Intelligence Responsibility and Transparency Act in the final hours of debate, covers broad ground. It addresses how AI can be used in employment decisions, governs AI tools deployed within state agencies, places new requirements on AI chatbot interactions with minors, and establishes a regulatory sandbox that allows companies to test new AI products without full regulatory exposure. It also expands AI literacy programmes for teachers, promotes the Connecticut AI Academy among unemployed workers, and offers AI education support for small businesses.
The bill's bipartisan passage is itself the news. AI regulation has been a partisan flashpoint at the federal level and in many states, but Connecticut's House Republicans came out largely in support. Members cited the impact on children, the inadequacy of the free market as a protective mechanism, and impatience with waiting on Washington as reasons to vote yes. "I don't want to wait for the federal government. I don't want to wait for the state government. I don't want to wait for regulators, and I don't want to wait for the free market," said one Republican legislator from Bristol. "This is way too important for the youth of this country."
The deal that brought Lamont on board is instructive. Previous versions of the bill were derailed by the governor's concern that over-regulation would harm businesses and deter innovation. The final compromise added the regulatory sandbox (allowing companies to trial new products under state oversight rather than being immediately subject to full enforcement), incorporated governor-requested provisions on youth social media and AI chatbot use, and bundled in parts of two separate bills he had sponsored. In short, the governor got enough of his own agenda included to call the result his.
The context for Connecticut's action is a vacuum at the federal level. The Trump administration has been actively discouraging states from passing AI regulation, arguing that a single federal standard is preferable to a patchwork of state laws. Connecticut's passage of Senate Bill 5 is a direct counter to that argument. Several lawmakers explicitly invoked the federal inaction as a reason to act, and the state attorney general framed the vote as "an important first step" given that "neither state nor federal law has kept pace" with AI's development.
Some legislators raised the challenge that makes AI regulation structurally difficult anywhere: the technology is evolving faster than any rule-making process can track. One Republican member, who disclosed he had used AI to generate the questions he asked during the floor debate, characterised parts of the bill as "theater." That criticism has real force. Rules written about AI products that exist today will need substantial revision within years, if not months.
Still, the Connecticut law represents something real: a state legislature that concluded the case for waiting had run out. Whether other states follow, and whether the Trump administration moves to preempt state action with federal legislation, will determine whether this vote is the beginning of a serious regulatory trend or a lonely outlier. For now, Connecticut has chosen not to wait to find out.