The SpaceX Anthropic Colossus deal is one of the most contradictory business arrangements in Silicon Valley history. On 6 May 2026, Anthropic and SpaceX announced that Anthropic will use the full compute capacity of SpaceX's Colossus 1 data centre in Memphis, Tennessee. The facility contains more than 220,000 NVIDIA GPUs — including H100, H200, and next-generation GB200 accelerators — and draws over 300 megawatts of power. Within a month of signing, Anthropic doubled Claude Code's five-hour rate limits, removed peak-hour restrictions for Pro and Max subscribers, and significantly raised API limits. The infrastructure that Elon Musk built to train Grok is now running the company he publicly labelled a "threat to Western civilisation."
What’s Happening & Why It Matters
From “Evil” to Infrastructure Partner in Six Months

The SpaceX Anthropic Colossus deal requires context to appreciate fully. In late 2025 and early 2026, Musk regularly attacked Anthropic. He called the company “misanthropic and evil”, writing that “winning was never in the set of possible outcomes for Anthropic.” He characterised Claude as a product of a company that “hates Western civilisation.” Those statements are on record on X, widely circulated, and entirely unambiguous.
Then came a week of meetings. Musk spent time with senior Anthropic team members to, as he described it, “understand what they do to ensure Claude is good for humanity.” He came away satisfied. On X, he wrote: “No one set off my evil detector.” He then leased his entire supercomputer to them. The philosophical reversal is comprehensive.
Why Anthropic Needed This Deal Urgently
The SpaceX Anthropic Colossus deal was not a strategic luxury. It was an emergency response to demand that had outgrown every infrastructure arrangement Anthropic had in place. The company's annualised revenue reached $30 billion — a three-fold increase in a single year. Much of that growth came from Claude Code — an agentic coding assistant that enterprises, including Uber and Netflix, adopted at scale. Paid subscribers were hitting rate limits within hours of starting sessions. Enterprise API customers were encountering throttling that disrupted production workflows. The usage limits had become a competitive liability.
In the previous months, Anthropic had signed infrastructure deals with Amazon Web Services (5 GW), Google (5 GW), Fluidstack ($50 billion), and CoreWeave. None of that capacity was online yet. The AWS and Google compute wasn’t expected until late 2026 or early 2027. Anthropic needed capacity immediately. Colossus 1 was available immediately. The economics overrode the ideology.
Why SpaceX Said Yes

The SpaceX Anthropic Colossus deal makes commercial sense for SpaceX, too — arguably more sense than any public framing about ethics suggests. xAI had already migrated its own training operations to the more advanced Colossus 2 cluster. Colossus 1 was functionally idle for SpaceX's own needs. One analyst told Fortune the deal could generate between $3 billion and $4 billion in annual revenue for SpaceX. That figure is material for a company preparing for an IPO targeting a $1.75 trillion valuation.
In that context, leaving 220,000 NVIDIA GPUs generating no revenue while a competitor is willing to pay billions for access is a straightforward business decision. Musk added a single unverified condition: he reserved the "right to reclaim the compute" if Anthropic's AI "engages in actions that harm humanity." That clause did not appear in the official press release. Its contractual status is unconfirmed.
An Ideological Gap Between Claude and Grok
The SpaceX Anthropic Colossus deal is particularly striking given the explicit philosophical gap between Anthropic and xAI. Anthropic was founded in 2021 by Dario Amodei, Daniela Amodei, and colleagues who left OpenAI specifically because they believed safety was not being taken seriously enough. Claude uses Constitutional AI — a training framework with embedded ethical constraints. Anthropic has repeatedly refused to strip those safety guardrails for military use, including declining Pentagon requests to enable Claude for autonomous weapons in the ongoing Iran war.
By contrast, Grok launched during Musk's peak involvement with the Trump administration in 2025. Its branding was explicitly anti-woke. In early 2026, Grok generated at least 1.8 million sexualised depictions of women over nine days — including imagery involving minors — triggering regulatory investigations across Europe, Asia, and the United States. EU digital affairs spokesman Thomas Regnier responded without ambiguity: "This is not spicy. This is illegal. This is appalling. This has no place in Europe." The two companies' products represent opposite ends of the AI safety spectrum. The same man who built one is now powering the other.
What Anthropic Announced at Code with Claude

The SpaceX Anthropic Colossus deal was announced at Anthropic's Code with Claude developer conference in San Francisco. Anthropic Chief Product Officer Ami Vora made the compute announcement alongside a new feature called "dreaming" — a tool that allows AI systems to review work between sessions, identify patterns, and update files that store user preferences and operational context. Dreaming launches as a research preview. It represents Anthropic's move toward persistent AI agents that maintain context and improve autonomously across working sessions.
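Anthropic has not published the internals of dreaming, but the described behaviour (review work between sessions, identify patterns, update files that store preferences and context) maps onto a simple memory-consolidation loop. The Python sketch below is purely illustrative: the `agent_memory.json` file, the `prefer:` transcript convention, and every function name are invented assumptions, not Anthropic's API.

```python
# Hypothetical sketch of a "dreaming"-style memory pass: between sessions,
# an agent reviews a transcript, extracts recurring preferences, and merges
# them into a persistent context file. All names and formats are invented.
import json
from collections import Counter
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # hypothetical persistent store

def review_session(transcript: list[str], min_repeats: int = 2) -> dict:
    """Identify user preferences that recur within the session."""
    hints = Counter(
        line.removeprefix("prefer:").strip()
        for line in transcript
        if line.startswith("prefer:")
    )
    # Keep only patterns seen at least min_repeats times.
    return {f"pref_{i}": hint
            for i, (hint, n) in enumerate(hints.most_common())
            if n >= min_repeats}

def dream(transcript: list[str]) -> dict:
    """Merge newly observed preferences into the stored context."""
    memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    memory.update(review_session(transcript))
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))
    return memory

transcript = ["prefer: tabs over spaces", "fix the bug",
              "prefer: tabs over spaces", "prefer: short replies"]
updated = dream(transcript)
print(updated)  # {'pref_0': 'tabs over spaces'}
```

A production system would presumably use the model itself to summarise the session rather than string matching; the point of the sketch is the read-merge-write cycle around a persistent context file, which is what lets the agent "improve autonomously across working sessions."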
Anthropic co-founder Tom Brown framed the Colossus deal on X with characteristic directness: “We’re going to need to move a lot of atoms to keep up with AI demand, and there’s nobody better at quickly moving atoms (on or off planet Earth).” Musk replied in agreement. The shared infrastructure makes competitors in the AI model market simultaneous business partners in the compute market. That dynamic is now structural — not temporary.
Compute Overrides Competition
The SpaceX Anthropic Colossus deal reflects a pattern forming across the entire AI industry. Ideological and competitive rivalries are being suspended at the infrastructure layer because demand for compute is outpacing supply so drastically that companies cannot afford to be selective about their suppliers. OpenAI is reportedly in discussions with Google Cloud following the dissolution of the Microsoft exclusivity deal. Meta signed a compute deal with Amazon's Graviton division. Anthropic is now running on SpaceX infrastructure. At the same time, Anthropic is excluded from US government AI contracts — the Pentagon designated the company a "supply chain risk" after it refused to strip safety guardrails for military use. Anthropic is now running on infrastructure owned by a man whose company signed the very government contracts it declined.
TF Summary: What’s Next

Anthropic gains access to the full Colossus 1 capacity within one month of the deal closing. The immediate effect on Claude's usage limits is already visible: Pro, Max, Team, and Enterprise subscribers get doubled Claude Code rate limits with peak-hour restrictions removed, and API rate limits for Claude Opus models are considerably higher. The longer-term infrastructure picture becomes clearer in the second half of 2026, as the AWS and Google compute commitments begin to come online. At that point, Anthropic will have access to more compute capacity than at any point in its history.
MY FORECAST: The SpaceX Anthropic Colossus deal is a two-year arrangement at most. Once Anthropic's commitments from AWS, Google, Fluidstack, and CoreWeave come online in late 2026 and 2027, the company will have sufficient capacity to reduce or eliminate its dependence on Colossus 1. Musk's "humanity clause" — reserving the right to reclaim compute if Claude harms humanity — provides a convenient exit mechanism if the relationship becomes commercially or politically inconvenient. The real story here is not the partnership itself. It is what the partnership reveals: that at this stage of the AI race, compute scarcity is more powerful than ideology, more powerful than public feuds, and more powerful than the stated values of either company. When the choice is between principle and scale, scale wins. Every time.

