Google expands Pentagon’s access to its AI after Anthropic’s refusal

```json
{ "title": "Google Signs Classified Pentagon AI Deal After Anthropic's Refusal", "metaDescription": "Google signed a classified AI deal with the Pentagon allowing Gemini models for 'any lawful government purpose,' after Anthropic refused to drop ethical restrictions.", "content": "<h2>Google Expands Pentagon AI Access as Anthropic Dispute Reshapes Military AI Landscape</h2><p>Google has signed a classified agreement with the U.S. Department of Defense granting the Pentagon access to its Gemini AI models for use on classified government data — a significant escalation of its existing defense relationship that comes directly in the wake of Anthropic's refusal to remove contractual restrictions on mass domestic surveillance and fully autonomous weapons. The deal, first reported by The Information on April 28, 2026, and confirmed by multiple outlets including 9to5Google, Yahoo Finance, and The Next Web, permits the Pentagon to deploy Gemini for what the contract terms 'any lawful government purpose.'</p><p>A representative from Google Public Sector confirmed the arrangement is an extension of an existing contract between Google and the Department of Defense — one that had previously been limited to unclassified government data. The new agreement now brings Google into the classified tier of military AI infrastructure, alongside OpenAI and xAI, which secured their own Pentagon AI agreements earlier in 2026.</p><h2>What the Google–Pentagon Deal Actually Says</h2><p>The terms of the agreement, as reported by 9to5Google, go beyond simply unlocking classified environments. The contract stipulates that Google will \"assist in adjusting its AI safety settings and filters at the government's request.\" Crucially, the deal also explicitly states that it does not grant Google \"any right to control or veto lawful government operational decision-making.\"
In practical terms, this means the Pentagon retains full operational authority over how Gemini models are deployed in classified contexts, and Google has agreed in advance to modify its own safety architecture at the government's direction.</p><p>The Next Web reported that this structure stands in deliberate contrast to the deal Anthropic had negotiated with the DoD. Where Anthropic's contract included explicit prohibitions on the use of its AI for mass domestic surveillance and for fully autonomous weapons systems operating without human oversight, Google's agreement contains no such carve-outs. According to Dataconomy, Google's contract appears to be the least restrictive among the Pentagon's current AI partnerships, and aligns with the Trump administration's publicly stated preference for unregulated military AI contracts.</p><p>Google aims to add $6 billion in contract value by 2027 through its expanded defense and government sector activities, according to TradingKey. The classified Pentagon deal represents a key step in that strategy, particularly as competition intensifies with Amazon AWS and Microsoft for defense cloud and AI contracts. The DoD previously awarded the $9 billion JWCC cloud contract to Amazon, Google, Microsoft, and Oracle in 2022.</p><h2>How Anthropic's Refusal Opened the Door</h2><p>To understand why Google's deal landed when it did, it is necessary to trace the months-long dispute between Anthropic and the Pentagon. In July 2025, the Department of Defense awarded contracts of up to $200 million each to Anthropic, Google, OpenAI, and xAI to accelerate AI adoption for national security purposes, according to the Congressional Research Service. Anthropic's $200 million contract was signed, but negotiations broke down in September 2025 when the DoD sought to deploy Anthropic's Claude model on its GenAI.mil platform without the ethical restrictions Anthropic had written into the agreement.</p><p>Anthropic held its position.
In a published corporate statement dated February 26, 2026, the company said that <strong>\"today, frontier AI systems are simply not reliable enough to power fully autonomous weapons,\"</strong> and declared it <strong>\"will not knowingly provide a product that puts America's warfighters and civilians at risk.\"</strong></p><p>The response from the Trump administration was swift and severe. On February 27, 2026, President Donald J. Trump directed all federal agencies to immediately cease using Anthropic's technology. In a post on Truth Social, Trump wrote: <strong>\"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.\"</strong> The Secretary of Defense subsequently designated Anthropic a supply-chain risk to national security — a designation the Congressional Research Service notes has historically been reserved for foreign adversaries. According to Wikipedia, Anthropic is the first American company to receive this designation.</p><p>Anthropic filed a legal challenge. In proceedings before the U.S. District Court for the Northern District of California, Judge Rita Lin remarked from the bench: <strong>\"It looks like an attempt to cripple Anthropic.\"</strong> However, the legal challenge did not succeed at the appellate level. On April 8, 2026, the federal appeals court in Washington, D.C. denied Anthropic's request to temporarily block the Pentagon's blacklisting, with the court stating that granting relief <strong>\"would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict.\"</strong></p><p>Hours after Trump's February 27 ban on Anthropic, OpenAI announced it had struck its own deal with the Department of Defense to provide AI technology for classified networks, according to NPR.
With Anthropic sidelined and OpenAI and xAI already in position, Google's new classified agreement completes a notable consolidation of Pentagon AI partnerships around companies willing to operate without the ethical constraints Anthropic insisted upon.</p><h2>History Repeating: Google Employees Push Back</h2><p>The announcement lands at a moment of significant internal tension at Google. On Monday, April 27, 2026 — the day before the deal was publicly reported — more than 600 Google employees, including over 20 principals, directors, and vice presidents, signed an open letter to CEO Sundar Pichai urging him to reject any classified military AI arrangements. The letter was direct in its warning about what classified military AI work could mean in practice.</p><p><strong>\"Currently, the only way to guarantee that Google does not become associated with such harms is to reject any classified workloads,\"</strong> the signatories wrote, warning that the technology could be used in \"inhumane or extremely harmful ways.\"</p><p>The employee protest carries unmistakable historical echoes. In 2018, the DoD's use of Google AI for Project Maven — a program that used machine learning to analyze drone surveillance footage — triggered a wave of employee protests and mass resignations at the company, ultimately leading Google to withdraw from the project entirely. The 2018 episode was widely seen as a defining moment for the ethics of AI in defense contexts. The reemergence of similar internal dissent in 2026, now involving hundreds of senior-level employees, suggests the cultural and ethical tensions inside Google around military AI have not been resolved — they have been deferred.</p><h2>Why This Matters Beyond the Beltway</h2><p>The Google–Pentagon classified AI deal is significant not only as a business or policy story, but as a marker of where the lines are currently being drawn — and erased — around AI safety in high-stakes environments.
The Anthropic dispute established a precedent: a company that refused to remove ethical guardrails from its AI was designated a national security risk, blacklisted by executive order, denied relief by a federal appeals court, and replaced by competitors willing to offer fewer restrictions. Google's agreement, which explicitly permits safety filter adjustments at government request and forecloses Google's ability to veto lawful operational decisions, represents the practical outcome of that precedent.</p><p>For the broader AI industry, the structure of Google's deal sets a visible benchmark. Where Anthropic drew contractual lines around autonomous weapons and domestic surveillance, Google's contract language draws none. Whether other AI companies will face pressure to match those terms — or whether Anthropic's legal challenge will eventually produce different outcomes — remains to be seen. What is clear is that the Trump administration has demonstrated both the willingness and the legal tools to penalize companies that resist its vision of unrestricted military AI deployment.</p><p>The deal also underscores a structural reality of the current AI market: the Pentagon's contracts, worth up to $200 million per company, represent substantial revenue in an industry where infrastructure costs are enormous and commercial AI monetization remains competitive. Google's target of adding $6 billion in defense contract value by 2027 reflects how central government and military contracts have become to the financial logic of major AI firms — and how that financial logic can intersect with, and potentially override, internal ethical commitments.</p><h2>What Comes Next</h2><p>Anthropic's legal battle continues, though its path has narrowed significantly following the appellate court's April 8 ruling. The company remains designated as a supply-chain risk, and its ability to operate in the federal market is constrained for as long as that designation holds. 
Whether its civil lawsuit produces a different outcome at the district court level, or whether the political dynamics shift, will shape whether the ethical framework Anthropic championed has any future in U.S. defense AI procurement.</p><p>For Google, the immediate question is how the company responds to the internal dissent from its own workforce. The 600-plus employee letter, which includes senior principals and vice presidents, is not a fringe protest — it represents a meaningful cross-section of the company's technical and leadership ranks. Whether Pichai addresses the letter publicly, and what Google says about how it intends to govern Gemini's use in classified military contexts, will be closely watched.</p><p>For the Pentagon, the consolidation of AI partnerships around Google, OpenAI, and xAI — all operating under contracts without the restrictions Anthropic required — means the DoD now has broader, more flexible access to frontier AI models for classified use than at any prior point. How that access is exercised, and under what oversight structures, remains an open question that neither the contracts nor the public record fully answers.</p><p>For more tech news, visit our <a href=\"/news\">news section</a>.</p><h2>AI, Decision-Making, and What It Means for You</h2><p>The rapid integration of AI into high-stakes government and military decision-making is a reminder of how profoundly AI systems are reshaping the environments in which all of us live and work. At Moccet, we believe that understanding the forces shaping AI — who controls it, under what rules, and toward what ends — is part of staying genuinely informed in a world where these systems increasingly affect health, productivity, and daily life. Staying ahead of these developments isn't just for policymakers. It's for anyone who wants to make smarter decisions about the tools and platforms they rely on. 
<a href=\"/#waitlist\">Join the Moccet waitlist to stay ahead of the curve.</a></p>", "excerpt": "Google has signed a classified agreement with the U.S. Department of Defense allowing the Pentagon to deploy Gemini AI models for 'any lawful government purpose,' including on classified data — a deal that comes after Anthropic was blacklisted for refusing to drop ethical restrictions on autonomous weapons and domestic surveillance. The agreement requires Google to adjust its AI safety settings at the government's request, and explicitly removes Google's ability to veto lawful operational decisions. More than 600 Google employees, including senior principals and VPs, signed a letter urging CEO Sundar Pichai to reject the deal just one day before it was publicly reported.", "keywords": ["Google Pentagon AI deal", "Gemini classified military AI", "Anthropic DoD dispute", "Pentagon AI contract", "military artificial intelligence 2026"], "slug": "google-pentagon-classified-ai-deal-anthropic-refusal-2026" }
```
