Big Tech Wants To Build Data Centers In Space: Does This Make Sense?
Based on Sabine Hossenfelder's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Big Tech’s push for data centers in orbit rests on a simple promise: continuous solar power, “free” cooling in cold space, and global connectivity. But the physics of heat removal and radiation exposure make the “space is cold, so it’s easy” pitch far less straightforward than it sounds. Sun-synchronous orbits keep satellites in near-constant sunlight, enabling power generation without the same energy constraints that drive terrestrial data-center design. Add the ability to beam data across the globe and avoid land-use battles, and space-based computing starts to look like a clean engineering workaround.
The catch is that “cold space” doesn’t automatically solve thermal management. On Earth, processors are cooled largely by moving air to carry heat away; in orbit there’s no air, so convection fails. Cooling then depends on conduction into heat-spreading hardware and on radiative cooling, which emits heat as infrared light. Radiated power scales with surface area, so shedding data-center-scale heat typically requires large radiator panels, which undermines some of the mass and cost advantages people associate with space. Another proposed approach circulates a refrigerant (such as ammonia) through the system and then radiates the heat away from dedicated panels, but that still adds complexity and weight.
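The radiator-size problem follows directly from the Stefan-Boltzmann law. A minimal back-of-the-envelope sketch, where the 1 MW load, 300 K radiator temperature, and 0.9 emissivity are illustrative assumptions (not figures from the article), and sunlight absorption, Earth infrared, and view factors are ignored:

```python
# Radiator area needed to reject waste heat purely by radiation,
# from the Stefan-Boltzmann law: P = emissivity * sigma * A * T^4.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w, emissivity=0.9, temp_k=300.0):
    """One-sided radiator area (m^2) to shed `power_w` of heat,
    ignoring absorbed sunlight, Earth IR, and view factors."""
    return power_w / (emissivity * SIGMA * temp_k**4)

# A modest 1 MW orbital data center (hypothetical load):
area = radiator_area(1e6)
print(f"~{area:,.0f} m^2 of radiator")  # roughly 2,400 m^2
```

Because the required area grows linearly with power while rejected power grows only with the fourth power of temperature, large compute loads quickly demand very large (or very hot) radiator panels.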
Radiation is the second major limiter. Cosmic rays and solar wind bombard satellites with energetic particles that can permanently damage microchips and also cause processing errors. Low Earth orbit helps reduce exposure compared with higher orbits, and mitigation strategies include shielding and using radiation-hardened chips. Those chips are built with thicker insulation and larger transistors to improve durability, but that design trade-off usually slows performance.
Even if hardware survives and heat can be managed, space computing only makes sense for certain workloads. Processing data already collected by satellites can be a strong fit because satellite-to-Earth links are far slower than terrestrial networks: terrestrial backbones move data at speeds above a terabit per second, while top satellite links are around 1 gigabit per second, roughly 1,000 times slower. That gap makes it efficient to filter, compress, or analyze data in orbit before sending results down.
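The arithmetic behind that gap is easy to check; the 10 TB daily workload and 100x on-orbit data reduction below are hypothetical illustrations, while the link speeds are the figures from the text:

```python
# Link speeds from the article; workload numbers are hypothetical.
TERRESTRIAL_BPS = 1e12  # ~1 Tbit/s terrestrial networks
SATELLITE_BPS = 1e9     # ~1 Gbit/s top satellite downlinks

print(f"link gap: ~{TERRESTRIAL_BPS / SATELLITE_BPS:,.0f}x")  # ~1,000x

# Hypothetical workload: 10 TB of raw satellite imagery per day.
raw_bits = 10e12 * 8
hours_raw = raw_bits / SATELLITE_BPS / 3600

# If on-orbit filtering/compression keeps only 1% of the data:
hours_processed = hours_raw / 100
print(f"raw: ~{hours_raw:.1f} h; after 100x reduction: ~{hours_processed:.2f} h")
```

Downlinking the raw data would take most of a day, while downlinking pre-processed results takes minutes, which is the economic case for putting the filtering step in orbit.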
By contrast, tasks that require huge amounts of data from the ground—especially AI training—don’t work well without major improvements in bandwidth. Training an AI model in orbit would demand far faster satellite-to-Earth communication than current systems provide.
Several efforts are already underway. In the US, Starcloud launched a satellite carrying an NVIDIA H100 chip, with its CEO Philip Johnston predicting that most new data centers will be built in outer space within a decade. Axiom Space is pursuing orbital data nodes aimed at government needs, including secure handling and defense applications. Google’s Project Suncatcher is the most ambitious: it plans to connect dozens of satellites using ultra-high-speed laser links, targeting up to 10 terabits per second, with prototype satellites planned for early 2027 and a broader vision by 2030—when data storage and retrieval could be as simple as “somewhere above Madagascar.”
Cornell Notes
Space-based data centers are attractive because sun-synchronous orbits can provide power continuously and satellites can beam data globally, while cold space reduces some cooling burdens. But orbit changes the thermal equation: convection cooling doesn’t work without air, and radiative cooling needs large radiator surfaces; circulating refrigerants adds complexity. Radiation is another hard constraint—cosmic rays and solar wind can damage chips and cause computation errors, so systems rely on low Earth orbit, shielding, and radiation-hardened components that often run slower. Space computing is most compelling for processing satellite-collected data before downlink, since satellite-to-Earth links (about 1 Gbps) are far slower than terrestrial networks (terabits per second). Large-scale AI training in orbit would require major bandwidth upgrades.
- Why does “cold space” not automatically make satellite data centers easier to cool?
- How do cosmic radiation and solar wind threaten computing in space?
- When does computing in orbit make economic and technical sense?
- Why is AI training in orbit harder than AI inference or pre-processing?
- What concrete projects illustrate different approaches to space computing?
Review Questions
- What are the three main cooling mechanisms, and why do convection-based approaches fail in space?
- How do bandwidth differences between satellite links and terrestrial networks shape which workloads belong in orbit?
- What trade-offs come with radiation-hardened chips, and how do shielding and orbit choice reduce radiation risk?
Key Points
1. Sun-synchronous orbits can provide near-continuous solar power, but thermal management still depends on radiative heat transfer and radiator sizing.
2. Convection cooling doesn’t work in space because there’s no air; cooling relies on conduction plus radiation or on fluid circulation to radiators.
3. Cosmic radiation and solar wind can both damage microchips permanently and cause computation errors, requiring shielding and radiation-hardened components.
4. Low Earth orbit reduces radiation exposure compared with higher orbits, improving the feasibility of space-based computing hardware.
5. Orbit-based processing is most practical for satellite data pre-processing because satellite-to-Earth links are far slower than terrestrial networks.
6. AI training in orbit is constrained by the need for much higher satellite-to-Earth bandwidth than current systems provide.
7. Multiple initiatives—Starcloud, Axiom Space, and Google’s Project Suncatcher—pursue different architectures, from onboard AI chips to laser-linked satellite clusters.