Engineering the Sky, Part II: The Industrialization of Orbital Compute

Date: 2026-02-15
Author: John Brennan
Source: https://johnbrennan.xyz/essay/engineering-the-sky-part-2

The transition to orbital compute is an energy and thermodynamics problem. This analysis examines heat rejection, solar mass efficiency, launch economics, Wright's Law cost compression, and the technology readiness levels that govern the pace of industrialization.

---

This is Part II of the Engineering the Sky series. Read Part I: How a Million Satellites Could Rewire Launch, Cloud Computing, and Global Power.

The transition to orbital compute infrastructure is not fundamentally a computing problem. It is an energy and thermodynamics problem expressed through aerospace engineering and industrial scale. Modern GPUs can already operate in vacuum environments, and commercial off-the-shelf accelerators have demonstrated stable performance in orbit under software-managed radiation mitigation. The limiting variables are structural: the ability to generate continuous electrical power, the ability to reject waste heat, and the ability to deploy and replenish this infrastructure at a cost curve that converges with terrestrial alternatives. Once those constraints are priced and bounded, orbital compute ceases to be speculative. It becomes an infrastructure system governed by the same thermodynamic and industrial laws that govern all energy-conversion systems.

Part I established the physical premise. Compute is fundamentally an energy-conversion process. Every floating-point operation ultimately manifests as heat, and the capacity to perform computation at scale depends not on transistor density alone but on the ability to source energy and reject entropy. Terrestrial data centers solve this problem by connecting to electrical grids and dissipating heat through convective and evaporative cooling. Orbital compute removes these dependencies. Energy is available continuously through solar irradiance, and heat is rejected directly into space through radiation. This shift eliminates terrestrial infrastructure constraints but introduces new ones. Radiator mass, photovoltaic efficiency, launch cost, and manufacturing scale replace grid interconnection, land acquisition, and water availability as the governing variables.

The Terrestrial Constraint

The impetus for relocating high-density compute to orbit is emerging not from technological ambition but from terrestrial constraint. Northern Virginia, historically the densest data center market in the world, now serves as a leading indicator of structural saturation. In November 2025, the Virginia State Corporation Commission approved a new electricity rate class, GS-5, specifically targeting large-scale users demanding more than 25 megawatts of continuous power. This policy shift marks a departure from the historical model in which hyperscale operators benefited from broad rate pooling and implicit infrastructure subsidies. Instead, hyperscale compute must now bear the direct cost of the grid expansion required to support it.

This regulatory shift coincides with deeper structural signals in wholesale electricity markets. Capacity auction prices within the PJM regional transmission organization increased by approximately 833 percent in the 2025–2026 cycle compared to the prior year. These price increases reflect not temporary scarcity but a structural imbalance between compute-driven demand and generation capacity.
Regional power demand attributable to data center expansion is projected to increase from approximately 25 gigawatts of operating capacity in 2024 to more than 106 gigawatts by 2035. At the same time, AI-optimized data centers operate at load factors approaching continuous utilization, eliminating the demand elasticity that historically allowed grids to accommodate them.

These pressures propagate directly into the economic structure of compute infrastructure. Terrestrial hyperscale data centers typically require between seven million and twelve million dollars of capital expenditure per megawatt of IT capacity. Energy and cooling account for up to forty percent of lifetime operating costs. Large installations require millions of gallons of water per day for cooling, creating resource competition and regulatory friction in water-constrained regions. These constraints do not arise from technological limitations in compute hardware. They arise from the thermodynamic cost of managing energy and entropy within Earth's atmosphere.

The Orbital Alternative

Orbital infrastructure eliminates these terrestrial dependencies by relocating compute directly to the site of continuous energy availability. Solar irradiance beyond Earth's atmosphere averages approximately 1,361 watts per square meter and remains available nearly continuously in sun-synchronous orbital regimes. Unlike terrestrial solar generation, orbital solar power does not suffer from atmospheric attenuation, diurnal cycles, or weather variability. Energy generation becomes a function of photovoltaic surface area rather than geographic location.

Yet the presence of abundant energy does not eliminate thermodynamic constraints. Every watt of electrical power consumed by compute hardware becomes waste heat. In terrestrial environments, that heat is removed through convection and evaporative cooling. In orbital environments, it must be rejected radiatively. Radiative heat rejection is governed by emissivity, surface area, and temperature. At electronics-safe operating temperatures, practical radiator systems reject heat at rates measured in hundreds of watts per square meter. This relationship directly defines orbital compute density and, by extension, the structural mass and economic profile of any orbital data center architecture.

The Fundamental Constraint: Heat Rejection Governs Compute Density

Every compute system is fundamentally a heat engine. GPUs convert nearly all consumed electrical energy into heat. On Earth, that heat is removed through convection and evaporative cooling. In orbit, heat must be rejected radiatively, and radiative rejection operates under strict physical limits. At operational temperatures near 300 Kelvin, practical radiator systems reject approximately 350 watts per square meter. This constraint scales linearly, which is precisely why radiator surface area becomes the dominant determinant of compute density in vacuum environments.

Table 1. Radiator scaling: area and mass implied by waste heat

| Compute Load | Radiator Area Required | Radiator Mass (@ 3.5 kg/kW) |
|---|---|---|
| 200 W | 0.6 m² | 0.7 kg |
| 1 kW | 2.9 m² | 3.5 kg |
| 5 kW | 14.3 m² | 17.5 kg |
| 50 kW | 143 m² | 175 kg |
| 1 MW | 2,857 m² | 3.5 metric tons |
| 1 GW | 2.86 million m² | 3,500 metric tons |

Radiators are therefore not auxiliary components. They are primary structural elements that determine system mass, launch cost, and economic feasibility. The short sketch below reproduces Table 1 from the two governing figures.
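The arithmetic behind Table 1 is worth making explicit. Below is a minimal sketch assuming the figures used in the text (350 W/m² of practical rejection near 300 K, 3.5 kg per kW rejected); the emissivity constant is an illustrative assumption, included only to show how the Stefan-Boltzmann limit bounds the practical figure.

```python
# Radiator sizing from the two figures used in the text: ~350 W/m^2 of
# practical rejection near 300 K, and 3.5 kg of radiator per kW rejected.
SIGMA = 5.670e-8               # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.85              # assumed coating emissivity (illustrative)
T_RADIATOR_K = 300.0           # electronics-safe operating temperature

# Ideal one-sided limit eps * sigma * T^4 is ~390 W/m^2 at these values;
# the 350 W/m^2 figure folds in view-factor and environmental-sink losses.
ideal_flux_w_m2 = EMISSIVITY * SIGMA * T_RADIATOR_K ** 4
PRACTICAL_FLUX_W_M2 = 350.0
SPECIFIC_MASS_KG_PER_KW = 3.5

def radiator_budget(heat_load_w: float) -> tuple[float, float]:
    """Return (area_m2, mass_kg) required to reject a given heat load."""
    area_m2 = heat_load_w / PRACTICAL_FLUX_W_M2
    mass_kg = (heat_load_w / 1000.0) * SPECIFIC_MASS_KG_PER_KW
    return area_m2, mass_kg

for load_w in (200, 1e3, 5e3, 50e3, 1e6, 1e9):
    area, mass = radiator_budget(load_w)
    print(f"{load_w / 1e3:>12,.1f} kW -> {area:>13,.1f} m^2 {mass:>13,.1f} kg")
```

The linear scaling is the whole point: a thousandfold increase in compute load demands a thousandfold increase in radiator area and mass, with no economy of scale in the physics itself.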
This also reframes the role of thermal management technologies. The constraint is not heat transport from chip to radiator. Thermal transport systems have demonstrated the ability to move kilowatt-class heat loads efficiently using two-phase cooling loops and deployable radiator architectures, with heat removal rates exceeding four kilowatts per loop and heat fluxes comparable to terrestrial liquid cooling systems. The binding constraint is the disposal of that heat at system scale: radiator area, radiator mass efficiency, deployment reliability, and long-duration emissivity stability.

This distinction matters because it defines where scale must be earned. Incremental improvements in thermal transport are valuable but insufficient if radiator mass per kilowatt remains high. Improving radiator mass efficiency from 3.5 kilograms per kilowatt to 1 kilogram per kilowatt reduces radiator launch mass by a factor of 3.5. That reduction directly lowers the cost per deployed compute watt and changes the slope of the industrial cost curve. Radiator mass efficiency is therefore not merely an engineering metric. It is an economic variable.

Power Availability Is Abundant, but Mass Efficiency Determines Economic Viability

Solar power in orbit is structurally attractive. Irradiance beyond the atmosphere averages approximately 1,361 watts per square meter, and orbital capacity factors in appropriate regimes exceed ninety-five percent. This eliminates the intermittency and siting constraints that govern terrestrial solar installations.

Yet orbital solar does not become economically meaningful through availability alone. It becomes meaningful through mass efficiency. High-efficiency photovoltaic arrays currently deliver between 150 and 250 watts per kilogram, with next-generation thin-film arrays targeting more than 500 watts per kilogram. The difference is not cosmetic. It propagates through the entire architecture. At 500 W/kg, a 5 kW compute satellite requires approximately 10 kilograms of solar array mass. At 200 W/kg, the same satellite requires 25 kilograms. That 15-kilogram delta is paid for twice: once in launch cost and again in structural integration and deployment complexity. At constellation scale, solar array specific power becomes one of the most consequential levers in total cost of ownership.

Solar power availability is therefore not the limiting constraint. Solar mass efficiency is. The economic frontier moves as photovoltaic watts per kilogram improves, because each gain reduces structural mass, lowers launch cost, and frees mass budget for compute payload or thermal rejection.

Launch Cost as the Regime Switch

The coupling between compute capacity and launch mass creates a direct dependence on launch economics. Every kilogram of radiator structure must be transported to orbit. Every kilogram of photovoltaic array must be deployed. Launch cost converts these requirements into capital expenditure.

At launch costs exceeding one thousand dollars per kilogram, orbital compute remains constrained to specialized workloads. The system can justify itself where terrestrial constraints dominate: national security processing, sensor-adjacent autonomy, and in-situ orbital edge computing where downlink bottlenecks and latency costs exceed the premium of space deployment.

At launch costs approaching one hundred dollars per kilogram, a threshold targeted by fully reusable heavy-lift systems, the economic structure changes. Transportation ceases to dominate total cost of ownership. The marginal cost driver shifts toward manufacturing efficiency, subsystem mass optimization, and operational lifetime. A toy model below illustrates how these variables combine.
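As a rough illustration of where these thresholds come from, the following sketch prices launch capital expenditure per deployed compute watt from three inputs: photovoltaic specific power, radiator specific mass, and launch cost per kilogram. The 30 percent structural overhead fraction is an assumption added for illustration, not a figure from this analysis.

```python
# Toy launch-economics model: launch capex per watt of compute, using the
# subsystem mass figures quoted in this essay. The structural overhead
# fraction is an illustrative assumption.
def launch_capex_per_watt(solar_w_per_kg: float,
                          radiator_kg_per_kw: float,
                          launch_usd_per_kg: float,
                          overhead_fraction: float = 0.30) -> float:
    """USD of launch cost per deployed compute watt."""
    kg_per_watt = 1.0 / solar_w_per_kg + radiator_kg_per_kw / 1000.0
    kg_per_watt *= 1.0 + overhead_fraction   # structure, harness, avionics
    return kg_per_watt * launch_usd_per_kg

scenarios = [
    ("current: 200 W/kg PV, 3.5 kg/kW radiator, $1,000/kg", 200.0, 3.5, 1000.0),
    ("target:  500 W/kg PV, 1.0 kg/kW radiator,   $100/kg", 500.0, 1.0, 100.0),
]
for label, pv, rad, launch in scenarios:
    print(f"{label} -> ${launch_capex_per_watt(pv, rad, launch):,.2f} per watt")
```

Under these assumptions, launch capex falls from roughly $11 per compute watt in the current regime to under $0.40 in the target regime, which is where transportation stops dominating total cost of ownership.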
This is the same inflection that has appeared repeatedly in industrial history. When transportation becomes cheap and routine, the system reorganizes around production scale rather than delivery scarcity. Orbital compute follows this pattern.

The Wright's Law Inflection Point: When Satellites Become Manufactured Products

Once transportation cost becomes secondary, the economic trajectory of orbital compute becomes a manufacturing trajectory. Satellite production has already begun transitioning from bespoke aerospace fabrication toward industrialized manufacturing. Historically, spacecraft were built individually, with extensive manual integration and testing. That production model imposes structural cost limits because each unit carries significant overhead.

Wright's Law provides a first-order model for how that overhead collapses with scale. Unit cost declines by a fixed percentage with each doubling of cumulative production. Using a conservative progress ratio of 0.8, meaning cost falls 20 percent per doubling, the cost curve compresses rapidly as production volume increases.

Table 2. Wright's Law cost compression for compute satellites (progress ratio 0.8)

| Satellites Produced | Unit Cost | Cost Reduction vs. First Unit |
|---|---|---|
| 1 | $2,000,000 | baseline |
| 10,000 | $100,000 | 95% reduction |
| 100,000 | $49,000 | 97.5% reduction |
| 1,000,000 | $23,000 | 98.8% reduction |

This transformation occurs because manufacturing changes form. Structural components standardize. Thermal systems become repeatable modules. Compute payload integration becomes routine. Satellite production ceases to resemble spacecraft construction and begins to resemble automotive manufacturing, where design stability and process control dominate the cost curve.

However, Wright's Law is not a free variable. Cumulative production depends on deployment throughput. Launch cost must converge simultaneously, not only to reduce unit economics but to increase the deployment cadence required to reach the necessary production volumes. Launch cost reductions from approximately $2,000 per kilogram to below $150 per kilogram cut transportation cost by more than an order of magnitude. As launch becomes cheaper and more frequent, the binding constraint shifts toward launch cadence: how many tons per year can be transported into orbit at acceptable reliability and cost. This is a throughput problem as much as a cost problem.

Table 3. Launch cadence requirements at scale (100 metric tons per launch)

| Constellation Size | Satellite Mass | Total Mass | Launches Required |
|---|---|---|---|
| 10,000 | 60 kg | 600 tons | 6 |
| 100,000 | 60 kg | 6,000 tons | 60 |
| 1,000,000 | 60 kg | 60,000 tons | 600 |

At full industrial cadence, this deployment rate is plausible within a decade. But it requires two conditions. First, a fully reusable heavy-lift vehicle must operate at airline-like cadence. Second, satellite manufacturing must be capable of producing compute-class satellites at volumes high enough to fill the launch schedule.

This is precisely why launch cadence and manufacturing ramp cannot be treated as separate variables. They are coupled. Cumulative production drives cost reduction, but cost reduction only becomes meaningful if deployment throughput allows production volume to accumulate. Both halves of this coupling reduce to short arithmetic, sketched below.
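Both relationships are compact enough to verify in a few lines. A minimal sketch, using the parameters behind Tables 2 and 3 (the table rounds its unit costs; the formula gives roughly $103,000 at 10,000 units):

```python
import math

# Wright's Law: unit cost falls by a fixed fraction with each doubling of
# cumulative output. Progress ratio p = 0.8 means cost(2N) = 0.8 * cost(N),
# so cost(N) = first_unit_cost * N ** log2(p).
def wright_unit_cost(first_unit_usd: float, n_units: int,
                     progress_ratio: float = 0.8) -> float:
    return first_unit_usd * n_units ** math.log2(progress_ratio)

FIRST_UNIT_USD = 2_000_000
for n in (1, 10_000, 100_000, 1_000_000):
    cost = wright_unit_cost(FIRST_UNIT_USD, n)
    print(f"{n:>9,} units -> ${cost:>11,.0f}  "
          f"({1 - cost / FIRST_UNIT_USD:.1%} below the first unit)")

# Deployment throughput (Table 3): 60 kg satellites, 100 t per launch.
SAT_MASS_KG, PAYLOAD_KG = 60, 100_000
for fleet in (10_000, 100_000, 1_000_000):
    print(f"{fleet:>9,} satellites -> "
          f"{fleet * SAT_MASS_KG / PAYLOAD_KG:,.0f} launches")
```

Because the exponent is log2(0.8) ≈ -0.32, every tenfold increase in cumulative volume cuts unit cost roughly in half.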
For a detailed breakdown of the specific TRL and performance targets required for orbital compute to achieve a 30% cost advantage over terrestrial alternatives, see the companion infographic: TRL & Performance Targets for 30% Orbital Cost Advantage.

Technology Readiness Determines the Pace of Industrialization

Orbital compute infrastructure is not limited by unknown physics. It is limited by subsystem maturity. The system is composed of components that have achieved different levels of readiness, and the least mature component sets the pace of deployment at scale.

Table 4. Subsystem maturity: current TRL vs. required TRL for industrial-scale orbital compute

| Subsystem | Current TRL | Required TRL |
|---|---|---|
| Solar arrays | 9 | 9 |
| Compute hardware | 7–8 | 9 |
| Radiator systems | 3–4 | 8–9 |
| Thermal transport loops | 5–6 | 8–9 |
| Optical networking | 5 | 8–9 |
| Launch vehicles | 7 | 9 |
| Manufacturing systems | 8–9 | 9 |

Radiator systems represent the largest maturity gap. They must achieve the reliability of solar arrays, not at prototype scale but across millions of deployment cycles. This requires not only engineering design maturity but manufacturing maturity: repeatable deployment mechanisms, coatings that maintain emissivity under radiation and thermal cycling, and designs that tolerate small manufacturing variations without catastrophic failure.

Compute hardware maturity is advancing rapidly. Commercial processors have demonstrated stable operation in orbit, achieving performance improvements exceeding 75× relative to legacy radiation-hardened systems. This validates the broader premise that "hardened-by-software" approaches can close much of the historical performance gap. But reaching TRL 9 for compute payloads requires long-duration validation under sustained thermal cycling and radiation exposure. It requires not only that the GPU runs, but that it runs with predictable fault rates and manageable degradation over multi-year lifetimes.

Launch system maturity is advancing toward full reuse. Fully reusable launch vehicles are the single most important economic lever because they both reduce marginal cost and increase cadence.

Manufacturing systems are already approaching the required scale for high-volume satellite production. The remaining challenge is the integration of compute-class payloads and thermal architectures into a standardized product that can be produced with automotive-like reliability.

Optical networking maturity remains a gating factor for distributed workloads. Independent nodes can deliver value, particularly for orbital edge computing and sensor-adjacent processing. But the long-term economic case for orbital compute as a general infrastructure layer strengthens as inter-satellite networking supports distributed training and inference across clusters. Networking therefore acts as a capability multiplier, widening the addressable workload set as it matures.

Constellation Scaling: From Kilowatts to Gigawatts

Once power generation, thermal rejection, launch economics, and manufacturing scale are modeled as coupled variables, constellation scaling becomes arithmetic. Total compute capacity is the product of per-satellite power and satellite count.

Table 5. Constellation compute scaling

| Satellites | Compute per Satellite | Total Compute Capacity |
|---|---|---|
| 10,000 | 1.2 kW | 12 MW |
| 100,000 | 1.2 kW | 120 MW |
| 1,000,000 | 1.2 kW | 1.2 GW |
| 1,000,000 | 5.2 kW | 5.2 GW |

At one million satellites, orbital compute capacity approaches that of the largest terrestrial hyperscale campuses. This scale is not speculative. It is the direct extrapolation of demonstrated subsystem capabilities under Wright's Law cost convergence and plausible launch cadence. But it only becomes plausible if radiator mass efficiency improves, photovoltaic specific power improves, and deployment throughput accelerates. Without those improvements, constellation scale exists only on paper, because structural mass and deployment friction remain prohibitive. The sketch below ties the per-satellite mass budget to these totals.
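As a sanity check on the 60-kilogram, 1.2-kilowatt satellite implied by Tables 3 and 5, here is a hedged mass-budget sketch. The subsystem split is an illustrative assumption built from the specific-mass figures quoted earlier, not a published design:

```python
# Illustrative per-satellite mass budget and constellation totals, using
# the specific-mass figures quoted earlier in this essay. The subsystem
# split below is an assumption for illustration, not a published design.
SAT_MASS_KG = 60.0
COMPUTE_KW = 1.2                      # per-satellite compute load (Table 5)
PV_W_PER_KG = 200.0                   # current-generation arrays
RADIATOR_KG_PER_KW = 3.5

solar_kg = COMPUTE_KW * 1000.0 / PV_W_PER_KG      # 6.0 kg of array
radiator_kg = COMPUTE_KW * RADIATOR_KG_PER_KW     # 4.2 kg of radiator
remainder_kg = SAT_MASS_KG - solar_kg - radiator_kg
print(f"solar {solar_kg:.1f} kg + radiator {radiator_kg:.1f} kg, "
      f"leaving {remainder_kg:.1f} kg for compute, structure, avionics")

# Constellation totals (Table 5) are then simple multiplication.
for count, kw in ((10_000, 1.2), (100_000, 1.2),
                  (1_000_000, 1.2), (1_000_000, 5.2)):
    print(f"{count:>9,} satellites x {kw} kW = {count * kw / 1_000:>7,.0f} MW")
```

Under these assumptions, power and thermal subsystems claim only about 10 of the 60 kilograms at 1.2 kW; pushing per-satellite compute toward the 5.2 kW case is where the improved radiator and array figures become necessary rather than merely desirable.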
The Deployment Timeline: Industrial Learning, Not Scientific Breakthrough

Orbital compute deployment follows an industrial, not a scientific, timeline. The governing uncertainties are not physical feasibility but maturity, reliability, and cost compression across subsystems.

Table 6. Industrialization roadmap for orbital compute

| Period | Deployment Phase | Compute Scale |
|---|---|---|
| 2025–2028 | Prototype deployments | 10–100 kW |
| 2028–2032 | Early constellations | 10–100 MW |
| 2032–2038 | Industrial expansion | 100–500 MW |
| 2038–2045 | Gigawatt infrastructure | 1–5 GW |

Each phase corresponds to an increase in production volume, which drives Wright's Law cost reduction, and a simultaneous increase in subsystem maturity. Thermal systems must mature. Manufacturing must scale. Launch cadence must increase. Radiator deployments must become routine. Fault rates must become predictable. Replenishment must become logistics rather than bespoke mission planning.

The key point is that the system does not scale by making GPUs faster. It scales by making the supporting infrastructure predictable. Predictability, in this context, is a form of maturity. TRL 9 is not simply "works once." It is "works routinely," "fails rarely," and "fails gracefully when it does."

The Convergence Condition: When Compute Follows Energy

At constellation scale, orbital compute infrastructure begins to behave less like a satellite network and more like a distributed energy-conversion system. Each satellite converts solar energy into electrical energy, consumes that energy in computation, and rejects entropy through radiative cooling. Multiplied across millions of units, this process aggregates into gigawatt-class compute capacity.

This transition introduces a structural inversion in the relationship between energy and compute. Historically, compute infrastructure has been constrained by proximity to terrestrial energy distribution networks and cooling resources. Orbital infrastructure removes this constraint by locating compute directly at the site of continuous energy availability. Compute no longer follows grid infrastructure; grid infrastructure becomes irrelevant to orbital compute.

This is why the scaling trajectory is governed by industrial learning curves rather than scientific breakthroughs. Improvements in photovoltaic efficiency reduce mass per watt. Improvements in radiator mass efficiency reduce structural overhead. Improvements in launch reuse reduce transportation cost and increase cadence. Improvements in manufacturing automation reduce unit cost and increase throughput. These improvements compound, driving predictable cost convergence.

The transition from experimental capability to industrial infrastructure will unfold gradually. Early deployments validate subsystem integration and thermal stability. Intermediate deployments prove manufacturing repeatability and long-duration operations. Large-scale deployments demonstrate declining unit costs consistent with Wright's Law and replenishment cycles that resemble fleet logistics. The system becomes infrastructure only when it becomes boring: repeatable deployments, predictable degradation, standardized maintenance, and costs that fall with volume rather than rise with complexity.

Orbital compute infrastructure does not emerge because terrestrial compute becomes impossible. It emerges because orbital energy conversion becomes economically competitive for certain classes of workloads under terrestrial constraint.
Initially, those workloads will be those most constrained by grid scarcity, cooling limitations, and permitting friction. Over time, as manufacturing scale increases and subsystem efficiency improves, the range of economically viable workloads expands. What begins as a niche for sensor-adjacent autonomy and energy-constrained processing can evolve into a persistent infrastructure layer for portions of the global compute economy.

The governing variables in this transition are not unknown. They are measurable and predictable. Radiator mass efficiency, photovoltaic specific power, launch cost per kilogram, manufacturing cost per unit, networking maturity, and operational lifetime determine economic viability. Improvements in each variable reduce cost per deployed compute watt and expand the feasible design space.

Orbital compute infrastructure represents the extension of industrial energy systems into a thermodynamic environment where energy is abundant and entropy rejection is unconstrained by atmosphere. The limiting variables are industrial and economic rather than scientific. The industrialization of orbital compute is therefore not a departure from existing infrastructure evolution. It is its continuation under a new thermodynamic regime.

Works Cited

Orbital Power, Thermal Management, and Space-Based Energy

NASA. Space-Based Solar Power: Lifecycle Cost and Feasibility Analysis. https://www.nasa.gov/
Caltech Space Solar Power Project. Space Solar Power Demonstrator (SSPD-1) Mission Overview. https://www.caltech.edu/
NASA Technical Reports Server. Deployable Radiator and Spacecraft Thermal Control Studies. https://ntrs.nasa.gov/
European Space Agency. Spacecraft Thermal Control Technologies and Radiator Systems. https://www.esa.int/
Air Force Research Laboratory. High-Power Smallsat Thermal Management Research. https://afresearchlab.com/

Orbital Compute Hardware and Spaceborne Computing

Hewlett Packard Enterprise. Spaceborne Computer Missions and Orbital Compute Performance. https://www.hpe.com/us/en/compute/hpc/spaceborne-computer.html
NASA Jet Propulsion Laboratory. Commercial Computing Reliability in Space Environments. https://www.jpl.nasa.gov/
NVIDIA. H100 GPU Architecture and Accelerator Performance Overview. https://www.nvidia.com/en-us/data-center/h100/
NVIDIA. Jetson Embedded Compute Platforms Technical Documentation. https://developer.nvidia.com/embedded-computing
Ramon.Space. Radiation-Resilient Space Computing Systems. https://ramon.space/
Phison Electronics. Space-Qualified Solid-State Storage Systems. https://www.phison.com/

Optical Communications and Distributed Orbital Networking

Tesat-Spacecom. Laser Communication Terminal Technology Overview. https://www.tesat.de/
Mynaric. Optical Inter-Satellite Communication Systems. https://mynaric.com/
NASA. Optical Communications and Sensor Demonstration Program. https://www.nasa.gov/
European Space Agency. Optical Communications in Space Infrastructure. https://www.esa.int/

Launch Systems, Satellite Manufacturing, and Deployment Economics

SpaceX. Starship Launch System and Fully Reusable Launch Architecture. https://www.spacex.com/vehicles/starship/
SpaceX. Starlink Constellation and Satellite Manufacturing Overview. https://www.spacex.com/
Federal Aviation Administration. Commercial Space Transportation Forecasts. https://www.faa.gov/space/
Rocket Lab. Satellite Manufacturing and Launch Integration Systems. https://www.rocketlabusa.com/
Blue Origin. Heavy-Lift Launch Systems and Orbital Infrastructure Vision. https://www.blueorigin.com/
NASA. Launch Cost Modeling and Payload Economics Analysis. https://ntrs.nasa.gov/

Terrestrial Data Center Economics and Power Infrastructure

Virginia State Corporation Commission. Electricity Rate Case GS-5: Large Load Infrastructure Pricing. https://www.scc.virginia.gov/
PJM Interconnection. Capacity Market Auction Results and Load Forecasts. https://www.pjm.com/
U.S. Energy Information Administration. Electric Power and Data Center Energy Consumption Reports. https://www.eia.gov/
International Energy Agency. Data Centres and Energy Demand Analysis. https://www.iea.org/
Lawrence Berkeley National Laboratory. United States Data Center Energy Usage Report. https://eta.lbl.gov/
McKinsey & Company. Global Data Center Demand and Infrastructure Forecast. https://www.mckinsey.com/

Manufacturing Economics and Wright's Law

Wright, T. P. Factors Affecting the Cost of Airplanes. Journal of the Aeronautical Sciences, 1936. https://arc.aiaa.org/
NASA. Cost Estimating Handbook and Learning Curve Models. https://www.nasa.gov/
Boston Consulting Group. Experience Curve and Industrial Cost Reduction Analysis. https://www.bcg.com/
Our World in Data. Solar Photovoltaic Cost Decline and Learning Curve Analysis. https://ourworldindata.org/

Satellite Systems, Orbital Infrastructure, and Industry Analysis

Satellite Industry Association. State of the Satellite Industry Report. https://sia.org/
Euroconsult. Satellite Manufacturing and Launch Market Forecast. https://www.euroconsult-ec.com/
Union of Concerned Scientists. Satellite Database and Constellation Tracking. https://www.ucsusa.org/resources/satellite-database
Spaceflight Now. Satellite Launch Cadence and Constellation Deployment Reporting. https://spaceflightnow.com/

Continue reading: Part I: How a Million Satellites Could Rewire Launch, Cloud Computing, and Global Power

---

Canonical: https://johnbrennan.xyz/essay/engineering-the-sky-part-2