
    From TechnologyDaily@1337:1/100 to All on Mon Apr 13 20:15:26 2026
    'Building gigawatt data centers in the US is becoming increasingly
    difficult': Why Orbital is taking AI infrastructure into space to solve power and cooling issues

    Date:
    Mon, 13 Apr 2026 19:05:00 +0000

    Description:
    Orbital outlines its vision for space-based AI infrastructure, using solar-powered satellites to overcome energy and cooling limits and scale compute beyond Earth's grid.

    FULL STORY ======================================================================

    The race to build ever larger AI models has created an unexpected bottleneck. It is no longer chips that limit progress, but power. Modern data centers already consume vast amounts of electricity, and demand is rising faster than infrastructure can keep up. Permitting new facilities is becoming harder, communities are pushing back, and the cost of cooling thousands of GPUs continues to climb. The result is an energy ceiling that threatens to slow the pace of AI development. Orbital believes the solution lies far above the grid.

    The company is developing AI data centers designed to operate in low Earth orbit, powered entirely by solar energy and cooled by radiating heat directly into space. Without weather, night cycles, or grid dependency, solar arrays in orbit can generate continuous power, while the vacuum of space provides a natural way to dissipate heat: two constraints that dominate the economics of terrestrial data centers.

    Orbital-1

    Backed by funding from a16z Speedrun, Orbital is preparing its first test mission, Orbital-1, scheduled to launch on a SpaceX Falcon 9 in April 2027.

    The satellite will host Nvidia-powered compute hardware and is intended to validate sustained GPU operation in orbit, test radiation resilience, and begin running AI inference workloads once initial validation is complete.

    The company chose inference over training. Unlike training clusters, which require tightly coupled GPUs operating in near-perfect synchronization, inference tasks can be distributed across many independent nodes, making them better suited to satellite constellations.
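The coupled-training versus independent-inference distinction can be sketched in a few lines: any healthy satellite can serve any request, so losing a node only shrinks the candidate pool. The node class and routing function below are purely illustrative, not Orbital's actual software.

```python
import random

class SatelliteNode:
    """Illustrative stand-in for one compute satellite in a constellation."""

    def __init__(self, node_id, healthy=True):
        self.node_id = node_id
        self.healthy = healthy

    def run_inference(self, prompt):
        # Placeholder for an on-board model call.
        return f"node-{self.node_id}: result for {prompt!r}"

def route_request(constellation, prompt):
    """Pick any healthy node; each inference request stands alone."""
    candidates = [n for n in constellation if n.healthy]
    if not candidates:
        raise RuntimeError("no healthy nodes available")
    return random.choice(candidates).run_inference(prompt)

constellation = [SatelliteNode(i) for i in range(8)]
constellation[3].healthy = False  # losing one node barely matters
print(route_request(constellation, "summarize this article"))
```

A training job, by contrast, would need every node in the list to advance in lockstep, which is exactly what a constellation of independent satellites is bad at.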

    The idea of running data centers in space may sound radical, but the
    pressures driving it are becoming increasingly real.

    I wanted to know more, so I spoke to Orbital founder Euwyn Poon about the economics, engineering challenges, and practical realities of running AI infrastructure in orbit.

    Why is everyone suddenly looking at space as the new frontier for AI infrastructure? What has changed? (The cratering cost of putting goods in orbit, the OPEX appeal, exploding AI demand, the lack of red tape in space?)

    A few things converged at once. Launch costs are going to collapse with Starship, going from $7,000/kg on the Falcon 9 today towards a target of $10/kg.

    Meanwhile, AI demand is skyrocketing beyond the capacity of the power grid, with US data centers using about 25 gigawatts today and growing 3 to 4 times by 2030. Building gigawatt data centers in the US is also becoming increasingly difficult.
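As a sanity check on those figures, the implied 2030 demand is a straight multiplication (back-of-the-envelope arithmetic, not a forecast model):

```python
# Today's ~25 GW of US data center demand times the quoted 3-4x growth
# factor by 2030; illustrative arithmetic only.
today_gw = 25
low_2030, high_2030 = today_gw * 3, today_gw * 4
print(f"Implied 2030 US data center demand: {low_2030}-{high_2030} GW")
# Every added gigawatt is roughly one more flagship-scale facility to
# permit, connect to the grid, and cool.
```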

    Communities across the US are pushing back against data center construction. Getting a gigawatt facility permitted and connected to the grid now involves a ton of uncertainty and risk.

    More specifically on the energy infrastructure, can you give us a sense of how much a MWh of power would cost in space as opposed to Earth-based (or perhaps how easy it would be to get solar power up there)?

    On Earth, data center electricity runs $60 to $100 per megawatt-hour before cooling. Add the 40% thermal overhead that every facility carries and you land at $85 to $140 per MWh in effective energy cost.
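The effective-cost arithmetic is simple enough to spell out; the 40% figure is the thermal overhead quoted above.

```python
# Effective energy cost = base electricity price plus the ~40% cooling
# (thermal) overhead a terrestrial facility carries.
def effective_cost_per_mwh(base_per_mwh, thermal_overhead=0.40):
    return base_per_mwh * (1 + thermal_overhead)

print(effective_cost_per_mwh(60))   # low end of the quoted base range
print(effective_cost_per_mwh(100))  # high end
```

At $60 and $100 per MWh this gives $84 and $140, matching the roughly $85 to $140 range quoted in the interview.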

    In orbit, the marginal cost of energy is zero. The sun delivers 1,361 watts per square meter in LEO, constantly, with no fuel cost, no grid fees, no utility contract.
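To make the economics concrete, here is a rough sketch of what a constant 1,361 W/m^2 implies for annual energy yield and amortized cost. The panel efficiency, capex figure, and lifetime below are illustrative assumptions, not Orbital's numbers.

```python
# Continuous solar irradiance in LEO (no night cycle, no weather).
SOLAR_CONSTANT_W_M2 = 1361
HOURS_PER_YEAR = 8766  # average year, including leap years

def annual_mwh_per_m2(panel_efficiency=0.30):
    """Annual electrical yield per square meter of array (assumed efficiency)."""
    watts = SOLAR_CONSTANT_W_M2 * panel_efficiency
    return watts * HOURS_PER_YEAR / 1e6  # Wh -> MWh

def amortized_cost_per_mwh(capex_per_m2, lifetime_years=7, efficiency=0.30):
    """Hardware + launch capex per m^2 spread over the array's lifetime output."""
    return capex_per_m2 / (annual_mwh_per_m2(efficiency) * lifetime_years)

print(f"{annual_mwh_per_m2():.2f} MWh per m^2 per year")
print(f"${amortized_cost_per_mwh(capex_per_m2=200):.2f}/MWh over 7 years")
```

With these assumed inputs the amortized figure lands under $10/MWh, but the result is sensitive to every input, especially the all-in capex per square meter.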

    What you actually pay is the amortized capital cost of the solar array and launch costs. Our ultimate goal is to push amortized space power below $10 per MWh.

    What could prevent rogue nations from disabling, harming, or taking down these orbital data center networks in the sky, à la Moonraker? What sort of resilience features are being implemented to prevent that from happening?

    The short answer is that distributed constellations are inherently harder to attack than buildings on the ground. We are not building one big space station. We are deploying thousands of small, independent satellites spread across multiple orbital planes. Taking out a single satellite removes a fraction of a percent of total capacity.

    To meaningfully degrade the network, you would need to disable hundreds or thousands of targets simultaneously. That is an enormously expensive and conspicuous military operation.

    On the legal side, the Outer Space Treaty of 1967 establishes that no nation can claim sovereignty over space. Satellites remain under the jurisdiction of the launching country, but no nation can seize or shut down another country's satellite without committing an act of war.

    Every terrestrial data center can be raided with a warrant. Orbital infrastructure cannot.

    How does one deal with the extreme heat differences in space, where I assume one side of the satellite will remain immensely hot and the part in the shadows a few Kelvin above absolute zero (dissipation of compute heat, but also thermal management)?

    The thermal environment in space is extreme. On the sun-facing side, solar irradiance hits at 1,361 watts per square meter. On the shadow side, space is roughly 3 Kelvin. Inside the satellite, you have GPU waste heat with no air for convection.
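Orbital is not discussing its thermal design, but the governing physics is textbook: in vacuum, the only way out for heat is radiation, per the Stefan-Boltzmann law P = emissivity * sigma * A * T^4. A generic radiator-sizing sketch, with all figures illustrative:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(waste_heat_w, radiator_temp_k=300, emissivity=0.9):
    """Radiator area needed to reject waste_heat_w by radiation alone.

    Ignores absorbed sunlight and the radiator's view of Earth, both of
    which make real designs larger.
    """
    return waste_heat_w / (emissivity * SIGMA * radiator_temp_k**4)

# e.g. 10 kW of GPU waste heat rejected from a 300 K radiator:
print(f"{radiator_area_m2(10_000):.1f} m^2 of radiator")
```

That works out to roughly 24 square meters for 10 kW at room-temperature radiator panels, which is why thermal management dominates the engineering of any orbital compute platform.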

    On Earth, you blow air over heat sinks or run water through cooling towers.
    In vacuum, none of that works. Managing this thermal environment is one of
    our core engineering efforts that we are not yet discussing publicly in detail.

    Given your focus on inference, how do you plan to deal with signal latency for real-time AI applications? Would latency be inversely proportional to the number of satellites deployed?

    At 550 kilometers altitude, the speed-of-light round trip is under 4 milliseconds. A typical API call to a terrestrial cloud provider already takes 20 to 100 milliseconds. For inference workloads like chatbots, code generation, and agentic AI, users already wait hundreds of milliseconds for a response. The orbital penalty is imperceptible.
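The sub-4-millisecond figure checks out with straight-line geometry (best case, satellite directly overhead; slant paths and processing add more):

```python
# Round-trip light travel time to a satellite at a given altitude,
# ideal case: straight up and back.
C_KM_PER_S = 299_792.458  # speed of light in km/s

def round_trip_ms(altitude_km):
    return 2 * altitude_km / C_KM_PER_S * 1000

print(f"{round_trip_ms(550):.2f} ms")  # ~3.67 ms at 550 km
```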

    More satellites help, but it is not inversely proportional. For inference, these are independent, parallel requests, not a single job being split across multiple nodes.

    Your satellites are going to be launched using SpaceX's platform. SpaceX has already disclosed its ambition to be a major player in YOUR field. How do you plan to tackle this David vs. Goliath conundrum?

    This is a big space, and we are in different lanes. Elon merged SpaceX with xAI, and that validates the entire thesis. The biggest startup risk is that the category does not exist. SpaceX just eliminated that risk for us.

    Elon has announced he will focus on his own custom silicon and his own models. That creates an opening for other operators like us. Just as there are many hyperscalers on Earth, there is room for many winners here.

    What happens when the satellites reach EOL? Will they be decommissioned, sent to burn up in the Earth's atmosphere, or never actually decommissioned? What's the rough lifespan you expect to eke out of them?

    Every satellite deorbits and burns up in the atmosphere. Each satellite carries ion thrusters with propellant reserved specifically for the deorbit maneuver. The structure is made from materials that burn up cleanly on re-entry.

    Design life is roughly matched to the useful life of the GPU architecture on board, around 5 to 7 years. By that point the chips are multiple generations behind and the economics favor launching fresh satellites with current hardware rather than maintaining older ones.

    What else is on Orbital's roadmap? Autonomous robots? Strategic partnerships with hyperscalers? Different classes or sizes of orbital data centers, e.g. SSO for training and LEO for inference? Self-repairing satellites?

    Right now we are focused on two things: our test mission and our factory. Orbital-1 launches in early 2027. In parallel, we are standing up Factory-1, a robotic satellite assembly facility in Los Angeles.




    ======================================================================
    Link to news story: https://www.techradar.com/pro/building-gigawatt-data-centers-in-the-us-is-becoming-increasingly-difficult-why-orbital-is-taking-ai-infrastructure-into-space-to-solve-power-and-cooling-issues


    --- Mystic BBS v1.12 A49 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)