The Pentagon signed AI deals with SpaceX, OpenAI, Google, Microsoft, Amazon, Nvidia, and one startup. Anthropic was not invited. The US military is building an AI-first fighting force.
On 1 May 2026, the US Department of Defense announced classified AI agreements with seven companies: SpaceX, OpenAI, Google, Microsoft, Amazon Web Services, Nvidia, and startup Reflection AI. Their AI systems will be integrated into the Pentagon’s most classified network environments — Impact Level 6 and Impact Level 7. These networks handle the most sensitive military intelligence, warfighter decision-making, and operational planning. The announcement came just ten days after the Pentagon unveiled a $1.5 trillion (€1.38 trillion) FY2027 budget request — the largest in US history — with nearly $54.6 billion (€50.3 billion) earmarked for autonomous warfare systems alone.
One name is notably absent from the seven partners: Anthropic. The Pentagon designated Anthropic a “supply chain risk” in February 2026, after the AI safety company refused to remove restrictions on autonomous weapons use and mass domestic surveillance from its military contract terms. The Trump administration subsequently ordered government departments to stop using Claude. The 1 May announcement effectively formalises that exclusion — and signals which companies the administration trusts with its most sensitive AI infrastructure.
What’s Happening & Why It Matters
The Seven Partners and What They Will Do
The Pentagon’s statement describes the seven partnerships as part of a deliberate strategy to “build an architecture that prevents AI vendor lock and ensures long-term flexibility for the Joint Force.” That view — diversity of AI vendors rather than single-vendor dependency — reflects lessons learned from previous defence technology programmes, where over-reliance on a single contractor created vulnerabilities. The seven companies’ AI tools will be used for three stated purposes: streamlining data synthesis, elevating situational understanding, and augmenting warfighter decision-making in complex operational environments.

Five of the seven partners — OpenAI, Google, SpaceX, Microsoft, and AWS — already held existing Pentagon AI contracts before 1 May. The announcement formalises their access to Impact Level 6 and Level 7 environments specifically. Nvidia and Reflection AI are new additions. Reflection AI is a relatively unknown startup — its inclusion alongside the industry’s largest AI infrastructure companies signals the Pentagon’s interest in testing emerging capabilities alongside proven ones.
What Impact Level 6 and Level 7 Mean
Not all classified networks are equal. The Department of Defense’s Impact Level classification system describes the sensitivity of information each network handles. Impact Level 4 covers controlled unclassified information. Impact Level 5 covers national security systems with moderate sensitivity. Impact Level 6 covers classified national security systems. Impact Level 7 covers the DoD’s most sensitive operational environments — the networks used for mission planning, weapons targeting, intelligence analysis, and real-time battlefield command. Granting AI systems access to these environments is not a pilot programme. It is operational integration at the highest available classification tier.
The Pentagon statement described the goal directly. “These agreements accelerate the transformation toward establishing the United States military as an AI-first fighting force and will strengthen our warfighters’ ability to maintain decision superiority across all domains of warfare.”
The Anthropic Exclusion: Principle vs Revenue
The deliberate exclusion of Anthropic from the 1 May announcement carries significant competitive and geopolitical implications. Anthropic products — particularly Claude — were widely deployed across the Department of Defense and its contractors until the February 2026 supply chain risk designation effectively blocked that usage. CEO Dario Amodei has been explicit about his position. He wrote publicly that he “cannot in good conscience accede to the Pentagon’s request” to remove restrictions on autonomous lethal weapons and mass domestic surveillance.

That principled position comes at a high commercial cost. CNN noted that signing so many of Anthropic’s direct competitors could give the Trump administration additional leverage over the company. The money in question is substantial. The One Big Beautiful Bill Act — passed last year — included a large sum for the Pentagon to spend specifically on AI and offensive cyber operations. Tech companies have been competing actively for that budget. Anthropic is locked out of it. By contrast, OpenAI — which has historically aligned itself more closely with administration priorities — is in. So is Google, which signed its own classified military AI deal on 27 April, over the explicit objections of more than 600 of its own employees.
The FY2027 Budget: AI as the New Nuclear
The 1 May partnership announcements arrive amid the most aggressive military AI spending proposal in US history. President Trump submitted the FY2027 defence budget request to Congress on 21 April 2026. The total is $1.5 trillion (€1.38 trillion) — a 42% year-on-year increase and the largest proposed US military budget since World War II. Acting Comptroller Jules Hurst III described the strategic context plainly. “We’re facing one of the most complex and dangerous threat environments in our nation’s 250-year history. Our adversaries are rapidly advancing capabilities across every warfighting domain.”
The autonomous warfare component dominates the new spending. The Defence Autonomous Warfare Group (DAWG) — a unit under US Special Operations Command — receives $54.6 billion (€50.3 billion) in the FY2027 proposal. That is a rise of more than 24,000% from its $225.9 million budget in FY2026. The counter-drone and anti-drone package, combined with the DAWG allocation, pushes the total autonomous systems budget to approximately $75 billion (€69.1 billion). An additional $30 billion (€27.6 billion) covers Precision Strike Missiles and Mid-Range Capability munitions. The Golden Dome missile defence initiative receives substantial separate funding.
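The scale of the DAWG increase is easier to grasp with the arithmetic laid out. A quick sketch using the article's own figures (FY2026 actual vs FY2027 request) confirms the percentage cited:

```python
# Sanity check on the DAWG budget jump, using the figures quoted above.
fy2026 = 225.9e6   # DAWG budget, FY2026, in USD
fy2027 = 54.6e9    # DAWG request, FY2027, in USD

multiple = fy2027 / fy2026                      # how many times larger
increase_pct = (fy2027 - fy2026) / fy2026 * 100 # percentage increase

print(f"{multiple:.1f}x larger")        # ≈ 241.7x
print(f"{increase_pct:,.0f}% increase") # ≈ 24,070% — i.e. "more than 24,000%"
```

In other words, the FY2027 request is roughly 242 times the unit's FY2026 budget, which is where the "more than 24,000%" figure comes from.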
AI in Intelligence and Decision Support

The Pentagon’s seven classified AI partnerships sit within a broader pattern of AI integration across intelligence and decision-support functions. Palantir — which holds existing contracts not mentioned in the 1 May announcement — has been the dominant supplier for battlefield data aggregation and intelligence fusion. SpaceX’s inclusion is notable in a different dimension. xAI — a SpaceX subsidiary following the February 2026 merger — brings the Grok AI model into the partnership. Grok has not previously been associated with classified military deployments. Its inclusion signals that SpaceX’s political alignment with the Trump administration has translated directly into access at the highest levels of defence infrastructure.
OpenAI’s involvement also carries specific context. CEO Sam Altman is currently testifying in federal court in the Musk v. Altman trial — where his management of OpenAI’s commercial transformation is under scrutiny. On the same day as the Pentagon announcement, his company deepened its integration into the US military’s most sensitive networks. The two narratives — nonprofit mission trial and for-profit military contract — are running in parallel.
The Ethical Fault Line

The Pentagon’s 1 May statement explicitly describes deployment for “lawful operational use.” That phrase carries the same ambiguity that drove Anthropic to draw a line. Classified networks are, by definition, opaque to the AI companies supplying them. Once an AI model operates on an air-gapped Impact Level 7 network, the supplying company has no visibility into queries, outputs, or the influence of model recommendations on operational decisions. More than 600 Google employees raised exactly this concern in an open letter on 27 April. Their letter warned that the only guarantee against AI being used for mass surveillance or autonomous weapons is to “reject any classified workloads.” Google signed the deal anyway.
This is the central ethical fault line in military AI policy right now. Companies that accept classified military AI contracts are making an implicit statement: they trust the government to use their AI tools within the stated legal bounds, even though they cannot verify how that is happening. Companies that refuse — like Anthropic — are stating that they do not trust those bounds to hold without enforceable contractual restrictions. The Trump administration’s response to Anthropic’s refusal was to designate it a supply chain risk. That response makes clear which approach the current administration rewards.
TF Summary: What’s Next

The seven classified AI partnerships will begin integration into Impact Level 6 and Level 7 environments over the coming months. No public timeline for deployment milestones has been disclosed. The Pentagon will not comment on specific operational use cases. Congressional oversight of the FY2027 budget request — including the DAWG’s extraordinary spending increase — is ongoing. The budget requires Congressional approval before it takes effect. Several lawmakers have already questioned whether a unit that spent $225.9 million in FY2026 has the contracting infrastructure to deploy $54.6 billion in FY2027 responsibly.
For Anthropic, the path back to Pentagon contracts runs through negotiation: specifically, whether the company and the administration can agree on a framework for military AI use that Amodei can accept. That negotiation, if it happens, will likely be the most consequential AI policy conversation of 2026. The Anthropic supply chain risk designation cuts the company off from substantial defence revenue at a moment when it is simultaneously raising billions from Google and Amazon and preparing for a potential IPO. The stakes of that negotiation — for the company, for US military AI safety standards, and for the global precedent it sets — extend well beyond any single contract.

