"Pollutant": The Pentagon's Case Against Anthropic Turns Ugly
The Pentagon's CTO called Claude a "pollutant" to the defence supply chain. Anthropic responded by seeking an emergency stay from an appeals court. With both sides hardening, this is no longer a contract dispute — it's a test of whether a US company can be designated a national security threat for refusing to remove ethical constraints from its own product.
Read full story →
Anthropic Ships Its Most Capable Model Yet — and Doubles Down on Agents
Claude Opus 4.6 introduces "agent teams" — parallel subagents tackling subtasks simultaneously — alongside a 1 million token context window and top scores on agentic coding benchmarks. The timing is pointed: Anthropic's most agent-capable model yet arrived in the same weeks the Pentagon was arguing that the company is dangerously over-cautious about autonomous AI.
Read full story →
The Agentic Workplace Is Here. The Numbers Are Starting to Show It.
Three independent reports from Google Cloud, Nvidia, and Microsoft landed this week with strikingly similar findings: agentic AI is doing real work in real companies, with real numbers attached. PepsiCo is identifying 90% of production issues before they happen. Danfoss cut order response time from 42 hours to near-real-time. The pilot era is ending — unevenly, but unmistakably.
Read full story →
China's Open-Source Gambit Is Working
A year after DeepSeek's surprise, Chinese labs have turned a moment into a strategy: flood the world with competitive open-weight models, free or near-free, and erode the commercial moat US closed-model vendors depend on. Alibaba's Qwen3.5, Moonshot's Kimi K2.5, Baidu's Ernie 4.5 — the releases keep coming. If an enterprise can self-host a capable Chinese model, why pay OpenAI's prices?
Read full story →
Who Pays for AI's Power Hunger?
Meta has signed 6 gigawatts of nuclear deals. California is reconsidering a 50-year moratorium on new reactors. And CNBC is asking the question the industry would rather not answer: when Big Tech's energy buildout strains the grid, who actually foots the bill — shareholders, or ordinary ratepayers? The answer is increasingly: both, and the second group didn't get a vote.
Read full story →
The AI Jobs Fear Is Real. So Is the Nuance.
Entry-level hiring in AI-exposed roles is down 13%. A major CEO is predicting 30%+ graduate unemployment. Jack Dorsey cut his workforce, and economists are still arguing about whether AI did it or he did. Anthropic quietly published its own labour market research. Multiple perspectives this week on the question nobody has a clean answer to.
Read full story →
AI Is Joining the Lab. Scientists Are Cautiously Impressed.
GPT-5 identified, in minutes and from a chart, the mechanism behind a puzzling immune cell change that had stumped researchers for months. Bloomberg reports AI is accelerating climate modelling. DeepMind partnered with the US Department of Energy on AI for scientific discovery. The hype is finally getting experiments attached to it.
Read full story →
AI's Copyright Crisis Has No Clear Ending
The Supreme Court won't rule on AI copyright ownership. Music publishers are suing Anthropic for $3 billion. YouTubers are suing Snap. Over 70 infringement cases are in the courts. The fair use question — whether training on copyrighted data is legal — remains unanswered at the highest level, and the uncertainty is compounding for everyone involved.
Read full story →
Spotify's Best Developers Haven't Written Code Since December. Now What?
Spotify's senior engineers now direct AI agents rather than write code directly — a productivity leap that raises a harder question: if expert developers stop writing code, how does the next generation learn? Anthropic's own research flags the skill-formation risk, open-source maintainers report declining quality in AI-assisted contributions, and "vibe coding" is democratising software creation in ways that blur the line between building and prompting.
Read full story →
Doctors Want AI in the Clinic. Just Not as a Chatbot.
Both OpenAI and Anthropic launched dedicated healthcare platforms this year, targeting clinical decision support, documentation, and trial enrolment — not patient-facing chat. Doctors surveyed in January agree: AI as a background clinical tool is useful; AI as a medical authority patients talk to directly is another matter. A 16% reduction in diagnostic errors in one study suggests the stakes are real either way.
Read full story →
The Robotaxi Is Here. Sort Of.
Motional joined Uber's app in Las Vegas this week — safety driver still aboard, a handful of geofenced zones, a long way from the autonomous commute that was promised. But alongside Uber's pivot to an AV platform, Pony AI and Toyota's commercial production ramp, and 103 cities globally with AV deployments, the shape of what self-driving actually becomes is finally coming into view.
Read full story →
Every Major AI Lab Has an Education Strategy. That's New.
Google is training all 6 million US educators on Gemini. OpenAI is cutting government deals to deploy AI across national school systems. Anthropic partnered with Teach For All to reach classrooms in 60 countries. Three labs, three different angles on the same strategic bet: shape how the next generation of workers and regulators thinks about AI before someone else does.
Read full story →
AI Is Creating a Memory Chip Shortage. Your Next Phone Will Pay for It.
HBM memory for AI accelerators is sold out, and the fabs making it are diverting capacity away from the standard chips in your phone and laptop. Bloomberg finds consumer electronics prices rising as a direct result. It's the same pattern as the energy story: the costs of the AI infrastructure buildout are showing up in places ordinary consumers didn't expect.
Read full story →
The AI Cyber Arms Race Is Already Here
Microsoft documented how threat actors now use AI across the full attack lifecycle. Anthropic disclosed disrupting what it called the first AI-orchestrated espionage campaign. Armadin's CEO told CNBC that virtually all cyberattacks will soon be AI-driven. The offensive capability is real and accelerating — and AI-powered defence is racing to keep up from a structural disadvantage.
Read full story →
Everyone Has an AGI Timeline. None of Them Agree.
Amodei says AGI-level AI could be here this year. Altman has OpenAI tracking toward a full AI researcher by 2028. Hassabis says five to ten years, and "no one really knows." Scientific American examines what current models can and can't actually do. The Washington Post argues the doomsday framing has quietly collapsed. All five pieces land differently — which is the point.
Read full story →
Stargate Is Building the Future. It Just Might Be Building Last Year's Future.
OpenAI's $500B Stargate project has nearly 7 gigawatts of planned capacity and is already operational in Texas. But CNBC asks the pointed question: is Oracle building yesterday's data centres with tomorrow's debt? OpenAI has already pulled back from its Abilene Oracle partnership because it wants facilities built for newer chip generations — an admission that the construction cycle and the chip cycle aren't aligned.
Read full story →
AI Is Democratising Creativity. Creatives Are Not Sure How to Feel About It.
Google shipped Lyria 3 music generation into Gemini and gave filmmakers access to Veo 3. The results were good. The reaction from working creatives was more mixed — faster and cheaper, yes, but also lonelier, and built on training data whose legal status remains unresolved. The split between who benefits and who gets displaced depends entirely on which part of the market you're in.
Read full story →
Nvidia Got Permission to Sell Chips to China. China Won't Buy Them.
The Trump administration approved H200 exports to China with a 25% revenue cut attached. Beijing responded by blocking the imports anyway. So Nvidia has permission to sell, and no buyers. Meanwhile, the US is now considering even tighter controls that would require approval to ship AI chips almost anywhere outside the country — a move that would affect allies too.
Read full story →
Twelve Percent of Teens Use AI for Emotional Support. The Research Is Worried.
An OpenAI-funded study found heavy ChatGPT use correlates with loneliness. A separate paper found AI companion use erodes real-life social skills. Instagram is now alerting parents to self-harm searches. The kids turning to chatbots most are often those with the fewest other options — which makes the "just don't use it" advice both correct and useless.
Read full story →