Pentagon's Defense AI Pacts Freeze Out Anthropic — While NSA Uses Its Flagship Model Anyway

On May 1, the U.S. Department of Defense announced agreements with eight AI companies (OpenAI, Google, xAI, Microsoft, NVIDIA, AWS, Oracle, and Reflection AI) clearing their models for use inside classified networks on one key condition: consent to "any lawful purpose" use. Anthropic was absent. The company had been pushing for explicit ethical guardrails before signing; its competitors signed without them.

What the Source Actually Says

According to @yaelkroy's analysis of the DoD announcement, some of the eight contractors still received informal assurances about non-lethal use and surveillance limits, essentially the same protections Anthropic had been demanding in public. Anthropic was nonetheless left outside the deal, and it now faces lawsuits and lost government revenue while its competitors secured their places in classified infrastructure.

The governance irony deepens with the NSA angle. Axios reported in April that the NSA continued using Claude Mythos Preview for vulnerability discovery even while Anthropic carried a formal supply-chain risk status — a federal designation intended to halt agency engagement with the company. The reason is strategically obvious: a model capable of finding software vulnerabilities is too operationally valuable to sideline, regardless of its regulatory standing.

The final picture splits into three tiers. OpenAI and Google will shape how frontier models operate inside classified systems; AWS, Microsoft, Oracle, and NVIDIA provide the infrastructure layer; xAI and Reflection AI gain a new defense-sector foothold. Anthropic holds the moral high ground, an ongoing legal fight, and no contract. Migration away from Anthropic's models is reportedly incomplete: some government analysts still depend on older Claude versions because the replacement stack is not fully operational.

Strategic Take

This episode effectively ends the era in which AI companies could credibly claim they would stay away from defense work. The lasting commercial lesson: demands for ethical guardrails carry a real revenue cost when competitors are willing to sign without them. Any AI company considering a similar position should price that trade-off in explicitly.