The Safety Line in the Sand: Why the Pentagon Blacklisted Anthropic
The US military signed deals with seven AI companies this week, but not the one whose tools its own staff prefer. The sticking point is Anthropic's AI safety guardrails, and the dispute raises a question every AI company will eventually face: what happens when safety principles meet the world's largest buyer?
Read full story →

Connecticut Draws the Line: A State Stops Waiting for Washington on AI
After years of failed attempts and a governor's veto threat, Connecticut passed one of America's most comprehensive AI laws with overwhelming bipartisan support. The deal that made it possible reveals what it takes to get AI regulation across the finish line, and what the Trump administration's push for federal preemption is up against.
Read full story →

China's AI Models Are Nearly Matching America Token for Token
New data shows Chinese AI models processing close to the same traffic volume as US counterparts on major routing platforms. Meanwhile, the price gap between Chinese and US models has nearly closed. What looked like a capability race is beginning to look like a market, and institutional investors are taking notice.
Read full story →

The Military Wants AI Without Principles. That Should Worry Everyone.
The Pentagon's exclusion of Anthropic reveals something important about how large institutions relate to AI safety constraints: they treat them as bugs, not features. When the most safety-conscious major AI lab gets blacklisted precisely because of its safety standards, the incentives facing the entire industry shift in a direction that should concern anyone thinking about long-term outcomes.
Read opinion →