AI Daily
Illustration: Google and military officials sign a classified deal while tech workers protest outside, holding signs reading "Not In Our Name."
Security • April 28, 2026

Google Hands the Pentagon Its AI With No Enforceable Limits

By AI Daily Editorial • April 29, 2026

Since September 2025, the US military has conducted at least 47 strikes on vessels in the Caribbean under what it describes as lawful counter-narcotics operations. As of March 2026, at least 163 people have been killed. The Guardian reported that governments and families of the dead "said many of those killed were civilians, primarily fishers." In the first strike, two survivors were left clinging to wreckage. They were killed in a follow-up strike. The Trump administration maintains that all of it is lawful. NPR has asked directly whether it constituted a war crime. No charges have been brought.

Google has now signed a classified deal granting the US Department of Defense access to its AI for what Defense Secretary Pete Hegseth has consistently demanded: "any lawful purpose." The contract, reported by the Wall Street Journal, includes language stating that Google's AI should not be used for domestic mass surveillance or fully autonomous weapons without appropriate human oversight. The contract also explicitly states that it does not give Google "any right to control or veto lawful government operational decision-making." The restrictions are a statement of preference. They are not enforceable. "Any lawful purpose" is the operative term, and the Pentagon decides what is lawful.

This is the same deal that Anthropic refused. Anthropic insisted on binding guardrails preventing its AI from being used for fully autonomous weapons and domestic mass surveillance of Americans. Hegseth's position was that the government should not be constrained by conditions set by a private contractor. When Anthropic held its line, the Pentagon designated it a Supply Chain Risk to National Security — a classification previously reserved for foreign adversaries. Anthropic sued. A federal judge granted an injunction. That case continues. Google, faced with the same choice, signed.

Inside Google, nearly 950 employees — including more than 20 directors and senior staff who attached their names openly — sent a letter to CEO Sundar Pichai before the signing. The letter stated: "We want to see AI benefit humanity, not being used in inhumane or extremely harmful ways. This includes lethal autonomous weapons and mass surveillance, but extends beyond. We believe that Google should not be in the business of war." One organiser described the difficulty from inside: "Right now, there's no way to ensure that our tools wouldn't be leveraged to cause terrible harms or erode civil liberties away from public scrutiny." Google did not respond to the letter publicly. It signed the deal.

The protest echoes 2018, when more than 4,000 Google employees signed a letter opposing Project Maven, a Pentagon drone-analysis contract. Google declined to renew that contract in response. The outcome this time is different, and the scale of what is being offered is not comparable. Project Maven was a specific computer vision contract for drone footage analysis. This deal covers Google's AI systems broadly, for any purpose the Pentagon designates as lawful. The employees who protested Project Maven understood what they were objecting to. The employees protesting now are objecting to something with no defined perimeter.

OpenAI and xAI had already accepted "any lawful purpose" terms before Google's agreement. OpenAI stated three voluntary red lines — no mass surveillance, no autonomous weapons targeting, no use against US citizens — framed as public commitments rather than contract conditions. They carry the same enforceability as Google's contract language: none. Anthropic CEO Dario Amodei said of AI's potential misuse: "In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values." His company is currently in federal court fighting a designation that suggests the administration views that position as a threat to national security rather than a principled stance.

The word "lawful" is not a restriction. It is a delegation. It hands the definition of acceptable use to an institution whose record of what it considers lawful includes mass warrantless surveillance of US citizens, drone strikes on wedding parties, and two survivors of a Caribbean vessel strike killed in a follow-up while still in the water. Each of those actions was authorised under existing legal frameworks at the time. By signing a contract that grants access for "any lawful purpose" while explicitly waiving any right to veto operational decisions, Google has not imposed a limit on how its AI may be used. It has handed that question entirely to the Pentagon, and the Pentagon has already answered it.

Sources