So today Anthropic announced that Claude is, in some indirect but very real sense, going to space.
The actual headline is more grounded. Anthropic signed a compute deal with SpaceX to take over the entire Colossus 1 data center in Memphis. That's over 300 megawatts and more than 220,000 NVIDIA GPUs coming online for them within the month. Same post raised Claude Code rate limits, removed the peak-hour throttling on Pro and Max, and bumped the Opus API limits considerably. Good news if you, like me, have been hitting that "you've reached your usage limit" wall mid-refactor at 11pm.
But buried in the same post is the part everyone is actually going to be talking about for the next week:
As part of this agreement, we have also expressed interest in partnering with SpaceX to develop multiple gigawatts of orbital AI compute capacity.
Multiple gigawatts. In orbit. Which is a wonderful sentence to drop on a Wednesday.
Yes, Your Code Might Literally Be Running In Space
Let's get the jokes out of the way, because we have to.
Future stack traces are going to be incredible. NullReferenceException at line 47, somewhere over the Pacific, currently at 547 km altitude. Latency complaints will get a whole new dimension. "Hey, the API is slow today." Yeah, the satellite carrying your inference workload is on the night side of the planet and the solar panels are doing their best.
I'm imagining a junior dev in 2030 running kubectl get pods and getting back orbital coordinates. Helm charts with orbital inclination as a config value. Kubernetes scheduler trying to decide whether to run your workload on a node in low Earth orbit or one that's currently being eclipsed. Geographically distributed becomes orbitally distributed, and your DR site is in a Molniya orbit because you wanted "high availability."
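The latency joke has real numbers behind it, and they're not even bad. A hedged back-of-envelope, assuming a node straight overhead, straight-line paths, and vacuum light speed (real links route through ground stations and add queuing and processing delay on top):

```python
# Back-of-envelope: round-trip light-travel time to an orbital node.
# All assumptions are mine for illustration: satellite directly
# overhead, signal at vacuum light speed, no routing overhead.

C = 299_792.458  # speed of light, km/s

def rtt_ms(altitude_km: float) -> float:
    """Round-trip time in milliseconds for a node straight overhead."""
    return 2 * altitude_km / C * 1000

print(f"LEO (550 km):    {rtt_ms(550):.1f} ms")     # a few milliseconds
print(f"GEO (35786 km):  {rtt_ms(35_786):.1f} ms")  # roughly a quarter second
```

Low Earth orbit adds single-digit milliseconds at best, which is why the serious proposals are all LEO. Geostationary would be a different, much worse story for interactive workloads.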
Okay. Joke quota met. Now the serious part.
Why Space Actually Makes Sense (And Why That's Uncomfortable)
The pitch for orbital data centers isn't fundamentally crazy. SpaceX has filed FCC requests for orbital data center infrastructure, and the physics arguments are real:
- Solar power, basically unlimited. A satellite in the right orbit (a dawn-dusk sun-synchronous one, say) gets sunlight nearly continuously, which means far less battery storage than a ground solar farm needs.
- Cooling, sort of. Space is, famously, very cold, but it's also a vacuum: with no air to carry heat away, radiating it is the part of cooling that actually gets harder in orbit, not easier. Still, the ambient sink is effectively infinite.
- No NIMBY. Nobody lives in low Earth orbit. Nobody is going to file a noise complaint about a 100MW compute cluster overhead.
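The solar argument is the one you can sanity-check on a napkin. Here's a sketch of the array sizing for a gigawatt of orbital compute. Every number below is my own assumption for illustration, not anything from the announcement:

```python
# How much solar array does a gigawatt of orbital compute need?
# Assumptions (mine): space-grade cells at 30% efficiency, full
# solar constant above the atmosphere, no degradation or margin.

SOLAR_CONSTANT = 1361      # W/m^2, solar irradiance above the atmosphere
PANEL_EFFICIENCY = 0.30    # assumed modern space-grade cells
TARGET_POWER = 1e9         # 1 GW of delivered electrical power

area_m2 = TARGET_POWER / (SOLAR_CONSTANT * PANEL_EFFICIENCY)
print(f"Array area: {area_m2 / 1e6:.1f} km^2")

# For contrast: a ground solar farm with a ~20% capacity factor needs
# roughly 5x the nameplate capacity to average the same output.
GROUND_CAPACITY_FACTOR = 0.20
ground_nameplate = TARGET_POWER / GROUND_CAPACITY_FACTOR
print(f"Equivalent ground nameplate: {ground_nameplate / 1e9:.0f} GW")
```

A couple of square kilometers of panels for a gigawatt, collecting more or less around the clock. That's a big structure, but it's not an absurd one by Starlink-constellation standards, which is exactly why the pitch keeps coming up.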
That last one is the quiet part being said louder. Look at what happened in Memphis. The Colossus facility Anthropic just signed for was built and operated using dozens of natural gas turbines that worsened local air pollution, reportedly without a federal permit because xAI claimed they were "temporary." The turbines triggered persistent protests. The IEA's latest data shows global data center electricity use has roughly doubled since 2022 and is now around 1.5% of total global electricity demand and rising fast, with AI workloads being the main driver of new capacity.
So when an AI company says "let's just put it in orbit," part of what they're saying, whether they mean to or not, is let's stop having this fight on the ground.
The Environmental Math Is Not Obvious
Here's where the easy narrative gets messy.
On one hand, yes, ground-based AI infrastructure has real environmental costs. Power generation, water for cooling, emissions, grid strain, local air quality. Anthropic itself has committed to covering consumer electricity price increases caused by its US data centers, which is genuinely a thing nobody else is doing, and which also tells you exactly how big the impact on local grids has become.
On the other hand, getting things to space is not free.
A single Falcon 9 launch produces roughly 336 metric tons of CO2 equivalent according to Nature's atmospheric research, and Starship is much bigger. Multiply that by the launch cadence required to put gigawatts of compute infrastructure in orbit. Then add the lifetime of the satellites, the deorbit burn, the aluminum oxide particles released when they burn up on reentry (increasingly being studied as a stratospheric pollutant), and the orbital debris implications of putting hundreds of thousands of satellite data centers up there.
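To make "multiply that by the launch cadence" concrete, here's the shape of the math. The Falcon 9 figure is the estimate cited above; the payload and mass-per-kilowatt numbers are guesses I've picked purely for illustration:

```python
# Rough launch-emissions math for a gigawatt of orbital compute.
# CO2_PER_LAUNCH_T is the cited ~336 t CO2e Falcon 9 estimate;
# everything else is an assumption for illustration only.

CO2_PER_LAUNCH_T = 336            # t CO2e per Falcon 9 launch (cited estimate)
PAYLOAD_PER_LAUNCH_KG = 17_000    # assumed usable payload to LEO
KG_PER_KW = 20                    # assumed satellite mass per kW of compute (guess)

target_mw = 1000                  # 1 GW of orbital compute
total_mass_kg = target_mw * 1000 * KG_PER_KW
launches = total_mass_kg / PAYLOAD_PER_LAUNCH_KG
launch_co2_t = launches * CO2_PER_LAUNCH_T

print(f"Mass to orbit: {total_mass_kg / 1e6:.0f} kt")
print(f"Launches:      {launches:.0f}")
print(f"Launch CO2e:   {launch_co2_t / 1e3:.0f} kt")
```

Under these made-up inputs you get on the order of a thousand launches and a few hundred kilotons of CO2e before a single token is served, amortized over whatever the satellites' working life turns out to be. Change the mass-per-kilowatt assumption and the answer swings wildly, which is the point: the lifecycle math is extremely sensitive to numbers nobody has published yet.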
Solar power in orbit is genuinely clean. Getting the hardware to orbit is genuinely not. And we don't yet have honest lifecycle numbers for what a 1 GW orbital compute facility actually costs the planet versus the same gigawatt sitting in West Virginia powered by a mix of nuclear and gas.
My gut says it's probably better than coal. Probably worse than nuclear on the ground. But I would love to see the actual paper before anyone declares this the green AI revolution.
How Far Away Is "Soon"?
Honestly? Closer than you'd think.
SpaceX is filing for an IPO in June and orbital data centers are part of the pitch deck. Musk said the same day that xAI will be folded into SpaceX and renamed SpaceXAI. Starlink already proves you can deploy thousands of satellites and run a working business on them. The hard parts left are heat dissipation at scale, radiation hardening for high-density GPUs, and the cost per kilowatt delivered to orbit. None of those are fundamental physics problems. They're engineering and money problems.
Given how fast this industry moves now, I would not bet against a real demonstrator being on orbit before 2030. A demonstrator is not a production data center. But the gap between "we have a thing in space that ran an inference workload" and "you can rent it on a price list" turned out, with Starlink, to be about four years.
The funny part isn't that code might run in space. The funny part is that this is the path of least political resistance.
What This Actually Means For The Rest Of Us
Three things I'd take away.
Compute is the bottleneck now, not models. Anthropic raised their limits today only because they bought more capacity. Not because the models got cheaper. Every release for the next 18 months is going to be bottlenecked by physical infrastructure and that means deals like this one, plus the 5GW with Amazon, the 5GW with Google and Broadcom, the $30B Microsoft and NVIDIA Azure commitment, the $50B Fluidstack investment. The frontier labs are now also infrastructure companies whether they want to be or not.
The environmental conversation is going to get harder, not easier. "Just put it in space" sounds like a solution. It might be. It might also be a way to externalize emissions across launches and reentries while marketing the resulting compute as solar-powered. The honest answer requires data we don't have yet.
The space-coder jokes are funny, but the underlying shift is real. When the unit economics of putting a GPU in orbit beat the unit economics of fighting a community over gas turbines in Tennessee, you'll see the GPU go up. And the engineers who learn to think about latency budgets, radiation, and orbital scheduling early are going to be the ones building this stuff.
I'm going to keep using my newly-doubled Claude Code limits to ship features tonight. But I'd be lying if I said I wasn't also looking up at the sky a little differently.