Google is in talks with the Pentagon to deploy its Gemini AI models in classified military environments, according to reporting confirmed by CNBC on April 16. The proposed deal would allow the Department of Defense to use Gemini for what the Pentagon terms "all lawful uses," with contractual restrictions preventing its deployment in fully autonomous weapons systems or programs for domestic mass surveillance without appropriate human oversight.
The news marks a significant escalation in Google's military AI relationship. In March, Google had already begun rolling out Gemini-powered agents to the Pentagon's three-million-strong workforce via the GenAI.mil enterprise portal, initially for unclassified administrative tasks. Moving into classified and top-secret environments is a different order of commitment, both technically and politically.
The proposed guardrails are contractual rather than technical. Google would not embed restrictions directly into the model's behavior; instead, the company would rely on contract language that the Pentagon agrees to uphold. Critics inside Google have noted this is a weaker mechanism than Anthropic's approach, which had sought to make certain uses architecturally impossible within Claude itself. That distinction was central to the dispute that ended with Anthropic being designated a supply chain risk to national security in February.
The classified deployment talks have triggered a fresh wave of internal dissent at Google. An open letter titled "We Will Not Be Divided" had gathered nearly 900 signatures by publication time, with roughly 800 coming from Google employees and close to 100 from OpenAI staff. Signatories said they were concerned the deal could mirror the arrangement already in place for Grok, xAI's model, which CNBC previously reported has classified network access with no disclosed guardrails at all.
The Pentagon's position is consistent with the stance it has held since the Anthropic dispute: that it should be able to deploy frontier AI for any lawful purpose, and that private contractors should not set policy on national security applications. A DoD spokesperson declined to confirm the talks with Google specifically, but said the department would "continue to deploy frontier AI capabilities through strong industry partnerships across all classification levels."
What makes the current situation striking is how rapidly the landscape has shifted. As recently as February, Anthropic was the Pentagon's chosen AI partner; it was blacklisted after refusing to remove model-level safety restrictions. Within weeks, OpenAI and Google stepped in. Now Google is moving from unclassified to classified work, and the competitive logic is pulling toward accommodation: the contracts go to whoever is willing to operate within the DoD's terms.
The open question, as employees on both sides of the internal debate have noted, is whether contractual guardrails carry real weight in classified environments where enforcement is left to the government itself. When the contracting party is also the regulator, a restriction is only as strong as the government's willingness to apply it against its own operations. Anthropic, for its part, argued that technical limits were the only kind that could actually hold. The Pentagon disagreed, and the market has moved accordingly.