Chang Kyoung (CK) Choi
Dr. Choi is an associate professor of Mechanical and Aerospace Engineering at Michigan Technological University.
Roberto Escobar
Email: bobby@elaw.business
Roberto "Bobby" Escobar is general counsel, and an environmental and labor and immigration advisor.
The explosive growth of artificial intelligence, cloud services and edge applications is not merely a software story; it is a profoundly physical one. Digital infrastructure must be built, powered, cooled and placed somewhere on the map. For most of computing history, that placement defaulted to wherever fiber was dense and capital was available. But as AI workloads intensify and sustainability pressures mount, geography has become a first-order strategic variable.
One increasingly important response to this shift is the rise of modular micro-data centers, or MMDCs. Unlike monolithic hyperscale campuses that require years to plan and build, MMDCs are prefabricated units integrating power, cooling and computing capacity that can be deployed in weeks and, in many cases, relocated as conditions change. Their portability turns geography from a fixed constraint into a strategic choice, and that choice increasingly hinges on one question: how is heat managed?
The cooling imperative
Cooling is the defining operational challenge of modern data infrastructure. Traditional air-cooling systems can consume up to 40% of a facility's total energy, and that burden rises under AI workloads, where rack densities can reach 50 to 80 kW and beyond. The most effective antidote is free-air cooling, which draws outside air rather than running energy-intensive HVAC, but it is viable only in consistently cool climates.
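To make the free-air advantage concrete, here is a minimal sketch of the arithmetic behind those figures. It assumes PUE is defined as total facility energy divided by IT equipment energy and that cooling dominates the non-IT overhead; both PUE values are illustrative assumptions, not measurements from any specific facility.

    def cooling_share(pue: float, cooling_fraction_of_overhead: float = 0.8) -> float:
        """Estimate cooling's share of total facility energy.

        PUE = total facility energy / IT equipment energy, so overhead
        (cooling, power distribution, lighting) is (pue - 1) per unit of
        IT load. We assume cooling dominates that overhead.
        """
        overhead = pue - 1.0
        return (overhead * cooling_fraction_of_overhead) / pue

    # An assumed legacy air-cooled site at PUE 1.8 vs. free-air cooling at PUE 1.15.
    for label, pue in [("air-cooled, PUE 1.8", 1.8), ("free-air, PUE 1.15", 1.15)]:
        print(f"{label}: cooling ~ {cooling_share(pue):.0%} of total energy")
    # air-cooled, PUE 1.8: cooling ~ 36% of total energy
    # free-air, PUE 1.15: cooling ~ 10% of total energy

Under these assumptions, the legacy facility spends roughly a third of its energy on cooling, consistent with the "up to 40%" figure, while free-air operation cuts that share to about a tenth.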
The stakes extend beyond electricity bills. In hot, arid regions, many data centers rely on evaporative cooling, making water demand a growing constraint. Phoenix-area data centers already consume roughly 385 million gallons of water per year for cooling alone. A March 2026 UC Riverside study conducted in collaboration with Caltech found that, without efficiency gains, U.S. data centers could require an additional 697 million to 1.45 billion gallons of peak water capacity per day by 2030, roughly equivalent to New York City's daily water supply, with associated infrastructure costs estimated at $10 billion to $58 billion. The study also emphasized that peak-day water demand, not merely annual consumption, may become the binding constraint for many local utilities.
Thermal stress on hardware is a compounding problem. Lower operating temperatures reduce wear on servers, extend equipment life span and may help slow capital replacement cycles. The energy, water and hardware arguments all point in the same direction: toward cooler ground.
Where the smart money is moving
The clearest global signal comes from the Nordic countries. Norway, for example, operates on nearly all-hydroelectric power and has become a model for low-emission, low-temperature computing environments. Large-scale AI deployments are increasingly being explored in Nordic regions because they combine renewable abundance, cool ambient temperatures and available land. Current market estimates place the Nordic data center market at about $7.2 billion in 2024 and project it to grow to roughly $14.9 billion by 2030, implying a compound annual growth rate of roughly 12.8%, higher than the figure often repeated in secondary summaries.
Sweden has also emerged as a major renewable-backed data infrastructure market, with substantial announced power purchase agreements and continuing hyperscale interest. More broadly, the logic is straightforward: colder climates cut mechanical cooling loads, lower power usage effectiveness (PUE) and ease water stress.
Within the United States, the Pacific Northwest has long hosted major hyperscale deployments for the same reasons: hydroelectric power, cool ambient temperatures and existing fiber density. The northern Midwest, including Iowa, Minnesota and Wisconsin, adds cold winters and growing wind capacity to the mix, with land costs that remain well below coastal markets. These regions allow operators to run free-air economizers for significant portions of the year, dramatically reducing the mechanical cooling burden. The Mountain West, including Utah and Colorado, occupies a middle tier: moderate climate, growing renewable capacity and increasing interest as a secondary hub. Northern Virginia remains the country's largest concentration of data center capacity despite comparatively warm summers and an increasingly congested power grid.
Why modular matters
Modular infrastructure is attractive not only because it is faster to deploy, but because it is easier to place strategically. Operators can add capacity in increments, test emerging demand nodes, and shift deployments as power, cooling and permitting conditions evolve. That flexibility is especially useful in an era when transmission bottlenecks, water availability and local opposition can all delay traditional builds.
The broader market signals are strong. Industry forecasts now place the global modular data center market in the tens of billions of dollars, with rapid growth expected over the coming decade as operators seek faster deployment, standardized manufacturing and more controllable capital costs. At the same time, modularity is not a cure-all. Standard containerized footprints may not be ideal for the highest-density AI training clusters unless paired with advanced liquid-cooling designs, custom rack layouts or other engineering adaptations. Recognizing that limitation strengthens, rather than weakens, the case for modular systems, because it clarifies where they are most effective: distributed inference, regional compute, backup capacity and selective high-density deployments where cooling has been purpose-built.
The California paradox
California presents the sharpest tension in this geography. Its structural liabilities are real and worsening. Inland regions, including the Central Valley, Inland Empire and Southern California desert fringes, experience summer temperatures that turn cooling from an engineering challenge into an economic liability. Water scarcity makes evaporative cooling increasingly untenable and politically contested. In October 2025, Gov. Gavin Newsom vetoed legislation that would have required proposed and existing data centers to disclose water usage, underscoring how charged the issue has become in California policy circles. As data center development accelerates, regulatory pressures around water disclosure, grid interconnection and local permitting are likely to intensify, particularly in water-constrained regions.
California's electricity prices are among the highest in the nation, and grid decarbonization mandates add complexity to long-term procurement planning. Yet none of this makes California dispensable. The state is simply too large a demand center to be served remotely. Its population, concentration of AI companies, autonomous-vehicle ecosystem, logistics base, healthcare systems and entertainment infrastructure all generate latency-sensitive workloads that cannot be efficiently handled from a server farm in Oregon, Quebec or Finland.
What changes the equation is the growing ability of advanced cooling systems to reduce dependence on ambient outdoor conditions. Direct-to-chip and immersion liquid cooling can materially reduce cooling-related energy use compared with traditional HVAC-heavy designs, often in the range of 30% to 40%, depending on workload and architecture. The question is not whether to build in California. It is how to do it selectively.
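A quick back-of-the-envelope calculation clarifies what a 30% to 40% cooling cut means at the facility level; the baseline PUE split below is an assumption chosen for illustration, not data from any particular site.

    it_load = 1.0          # normalize IT equipment energy to 1
    cooling = 0.40         # assumed cooling overhead per unit of IT load
    other = 0.10           # power distribution, lighting, etc.

    baseline_pue = (it_load + cooling + other) / it_load   # 1.50

    for cut in (0.30, 0.40):
        new_cooling = cooling * (1 - cut)
        new_pue = (it_load + new_cooling + other) / it_load
        total_savings = 1 - new_pue / baseline_pue
        print(f"{cut:.0%} cooling cut -> PUE {new_pue:.2f}, "
              f"total facility energy down {total_savings:.1%}")
    # 30% cooling cut -> PUE 1.38, total facility energy down 8.0%
    # 40% cooling cut -> PUE 1.34, total facility energy down 10.7%

The point of the exercise is that a 30% to 40% reduction in cooling energy translates into a high single-digit to low double-digit reduction in total facility energy, meaningful, but not enough on its own to erase California's cost disadvantage, which is why selectivity matters.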
Locating data centers in California
Within the state, microclimates matter enormously. The Northern California coast, including Humboldt County and the Mendocino corridor, benefits from persistent marine influence that keeps temperatures meaningfully lower than inland zones, with the added potential for offshore wind integration. The Bay Area's outer periphery offers cooler conditions than core Silicon Valley while retaining access to dense fiber infrastructure, though land costs remain elevated. The Sierra Nevada foothills provide altitude-driven cooling advantages and relatively low land costs. The region near the Oregon border combines cooler temperatures with proximity to the Pacific Northwest's existing data center corridor.
The zones to avoid are those that compound all of the state's worst characteristics simultaneously. The Inland Empire and Central Valley combine extreme summer heat, water stress and grid congestion. Desert deployments, unless paired with aggressive liquid cooling and dedicated renewable generation, are difficult to justify at scale.
A distributed architecture built around geography
The strategic answer emerging from both industry practice and the physical logic of computing is a three-tier distributed model.
First, heavy AI training workloads, the most compute-intensive and latency-tolerant jobs, belong in cool, energy-rich regions such as the Pacific Northwest, the Midwest, the Nordic countries and parts of Canada where hydroelectric power and cold climate converge. These locations capture the full energy-efficiency dividend, enabling lower PUE and lower water dependence.
Second, smaller California nodes should handle the latency-sensitive layer: real-time inference, autonomous systems, streaming delivery, healthcare AI and other applications where milliseconds matter. These facilities are better understood as selective, liquid-cooled regional nodes rather than large inland campuses.
Third, hyper-edge deployments embedded in buildings, campuses, telecom facilities, hospitals, logistics sites and industrial environments can handle the most time-critical processing within close physical range of the user or device.
Modular architecture is uniquely suited to this model because it enables dynamic workload placement: the ability to place workloads where energy is cheapest, cooling is most efficient, or latency requirements are lowest, without the years-long commitment of a fixed hyperscale build. Climate-aware workload scheduling extends this logic further. Operators can shift AI jobs toward cooler regions during peak summer heat while also aligning workloads with off-peak renewable generation across time zones.
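As a toy illustration of that scheduling logic, the sketch below picks the cheapest eligible region for a given latency budget. The region names, prices, PUE values, latencies and the scoring rule are all hypothetical; a production scheduler would weigh many more factors.

    from dataclasses import dataclass

    @dataclass
    class Region:
        name: str
        energy_price: float   # $/kWh, assumed
        pue: float            # expected PUE given the ambient climate
        latency_ms: float     # round-trip latency to the demand center

    def effective_cost(r: Region) -> float:
        # Cost per useful kWh of IT load: grid price scaled by PUE.
        return r.energy_price * r.pue

    def place(latency_budget_ms: float, regions: list[Region]) -> Region:
        """Pick the cheapest region that still meets the latency budget."""
        eligible = [r for r in regions if r.latency_ms <= latency_budget_ms]
        return min(eligible, key=effective_cost)

    regions = [
        Region("PNW hydro campus", 0.05, 1.15, 40.0),
        Region("California edge node", 0.22, 1.30, 5.0),
        Region("Nordic site", 0.06, 1.10, 140.0),
    ]

    print(place(200.0, regions).name)  # training job -> "PNW hydro campus"
    print(place(10.0, regions).name)   # real-time inference -> "California edge node"

Even this simplified rule reproduces the three-tier pattern: latency-tolerant training flows to cheap, cool regions, while latency-bound inference stays near the demand center regardless of its higher cost.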
The long view
The trajectory is toward the north and the coasts. Rising global temperatures will erode the cooling advantages of regions that are merely temperate today, pushing optimal deployment zones incrementally toward higher latitudes and marine-influenced climates. That is one reason the Nordic market's projected growth matters: it reflects not only current efficiency advantages but also a hedge against a warmer future.
California will not be left behind in this geography, but its role is specific. It is a demand hub, not a continental-scale computing farm. The modular micro-data center makes it possible to honor both realities at once: deploy efficiently at scale where the climate cooperates, while maintaining the latency-critical presence that California's economy demands. Geography, in this framework, is not a limitation. It is the strategy.