By: Thomas W. Wilson
Let’s cut through the hype: we’re pouring billions into AI monoliths, massive data temples drawing 1 GW and more from overtaxed grids, while the very technology enabling AI could soon make these complexes as relevant as coal-fired power plants. The irony? We hear constant talk of gigawatt-plus AI factories, of landowners building out the trifecta, and of energy supply in such a "shortage" (not really; demand is simply scaling that fast) that the key question now asked during acquisition is "what is your load ramp?" Our race to build AI’s physical backbone might be constructing its own obsolescence.
The relentless growth of AI workloads is colliding with fundamental infrastructure limitations: soaring AI training power demands that can reach 150 kW per rack, cooling inadequate for heat-intensive GPUs now requiring liquid solutions, and broadband capacity struggling under 40-60% compound annual growth in data center interconnect (DCI) traffic. This "trifecta" is compounded by scarce, expensive land and complex regulatory hurdles, throttling the development of massive "AI factory" data centers despite surging capital investment.
While investment floods into these facilities (particularly in the US, China, and EMEA, where governmental enthusiasm outpaces execution), multi-year construction delays persist, to the point that "tilt-up" is now a term tossed around constantly, driven by the push toward modular construction to compress build timetables.
In 2025 and the years prior, capital originating in the GCC has been injecting tens of billions into AI data center infrastructure across MEA, exemplified by Saudi Arabia’s $14.9 billion LEAP 2025 commitment, the UAE’s $30-50 billion Franco-Emirati 1-gigawatt facility, and $500 million in UAE-Turkey projects, among others. Future investment is projected to escalate, driven by sovereign AI ambitions like Saudi Vision 2030 and the UAE’s National AI Strategy 2031, with Saudi Arabia alone planning more than 2 GW of new data center capacity. That scale raises the question of mitigating key risks: distributing AI builds through hybrid cloud models and colocation partnerships to spread expenses; balancing geopolitical exposure via diversified international alliances (the region and Europe, or the US and Asia); and ensuring talent capacity through upskilling initiatives tied to national AI workforce programs. Done well, this lets investors prioritize scalable, energy-efficient distributed architectures and ethical frameworks that align with both ROI targets and global compliance standards.
Simultaneously, engineering breakthroughs like 224 Gb/s PAM4 interconnects, mathematical lossless data compression algorithms, layer 2 UDP optical designs, and edge data center architectures are accelerating, challenging the necessity of centralized AI factories by enabling distributed, efficient processing closer to end users. This technological race highlights a critical divergence: capital allocation increasingly outstrips feasible supply, even as innovations emerge that could fundamentally reshape AI infrastructure needs.
The Power Paradox
Today’s GPU farms aren’t just power-hungry; they’re energy hostages:
– Natural gas cartels dictating terms for 70%+ of operations
– Grids buckling under 900MW+ single-site loads
– Nuclear "solutions" requiring six (6) Bill Gates-backed TerraPower Natrium plants (345 MW each) just to power ONE complex, while trying not to think of other nuclear plants numbered in threes.
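The plant count above implies the scale of a single complex’s load; a quick back-of-envelope check (plant rating and count taken from the text, nothing else assumed):

```python
# Arithmetic behind the nuclear "solution": how much capacity six
# Natrium plants actually represent for one AI complex.
natrium_mw = 345   # rated output per Natrium plant, MW (from the text)
plants = 6         # plants claimed to power one complex (from the text)

total_mw = plants * natrium_mw
print(f"{plants} Natrium plants -> {total_mw} MW for ONE complex")
```

Six plants lands above 2 GW, consistent with the 900 MW+ single-site grid loads cited above and the 2 GW national build-outs cited earlier.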
Do I like renewables? Sure. But renewables for 1 GW+ AI factories? You’d need solar farms covering Rhode Island, or daisy-chained fuel cell plants (Bloom) or linear generation (Mainspring) like nobody’s business, to run Virginia’s AI corridor. SoCal is complaining about warehouses; wait till they see one of these shiny new giant structures. This isn’t sustainability; it’s energy desperation. Now consider the other side of the ledger: light takes roughly 13.2 milliseconds to travel from Los Angeles to New York, and recent signal tests on an optical link covered San Francisco to Denver in roughly 17 milliseconds, so regional networks can certainly cut delays down even further. The electronics are simply getting so good that there soon won’t be much room left to improve raw performance, and that does not even account for translation algorithms.
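The latency figures above can be sanity-checked with a propagation-only calculation; a minimal sketch, where the great-circle distances and the ~1.47 single-mode fiber refractive index are my assumptions, not figures from the article:

```python
# Back-of-envelope propagation delays. Distances are approximate
# great-circle values; real routes are longer and add equipment delay.
C_VACUUM_KM_S = 299_792              # speed of light in vacuum, km/s
FIBER_INDEX = 1.47                   # typical single-mode fiber (assumed)
C_FIBER_KM_S = C_VACUUM_KM_S / FIBER_INDEX

def one_way_ms(distance_km, speed_km_s=C_FIBER_KM_S):
    """One-way propagation delay in milliseconds."""
    return distance_km / speed_km_s * 1000

la_to_ny_km = 3_940                  # approximate great-circle distance
sf_to_denver_km = 1_530

print(f"LA->NY, vacuum:      {one_way_ms(la_to_ny_km, C_VACUUM_KM_S):.1f} ms")
print(f"LA->NY, in fiber:    {one_way_ms(la_to_ny_km):.1f} ms")
print(f"SF->Denver, in fiber: {one_way_ms(sf_to_denver_km):.1f} ms")
```

The vacuum figure matches the ~13.2 ms cited for LA-NY, and the fiber floor for SF-Denver comes out well under the ~17 ms measured, suggesting the measured number still carries routing and equipment overhead that regional networks could squeeze out.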
Enter technology, which can be an investor’s best friend (see: leverage) but can also be an investor’s worst enemy, rendering billions in capital wasted and moot amid the cornfields of rural countrysides, in the US and abroad. What happens when technology advances to the point that GPU chips are more efficient and more productive, and latency actually comes close to speed-of-light limits?
Mathematical Lossless Compression Algorithms: The Silent Disruptor
Enter non-classical mathematical lossless compression (estimates below):
– 50-70% data mass reduction evaporating storage needs
– 3.9× transmission acceleration overlaying advanced optical UDP streams
– 18% energy savings from reduced data movement
Along with this gain, think how this tech, running on top of optical layer 2 UDP-managed data flow, can continue to increase data throughput and transit speeds. And this is only the beginning. What other transit technology is being developed and unwrapped to bring down latency, pushing the electron ever closer to speed-of-light timing?
This isn’t incremental improvement; it’s a physics rewrite. When you can daisy-chain multiple 50-100 MW micro-centers across water-rich, low-cost regions with sub-4ms latency, why build billion-dollar power-gulping monoliths? Imagine SDAIN technology increasing capacity by enabling on-the-fly control of AI workloads across multiple reasonably sized, environmentally managed, cost-effective data centers spread across a plane of distance and linked by managed dark fiber.
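How far apart can daisy-chained micro-centers sit and still stay under a 4 ms budget? A propagation-only sketch, where the fiber index is my assumption and real links would add switching and routing overhead, shrinking the practical radius:

```python
# Geographic radius implied by a latency budget, propagation only.
C_FIBER_KM_S = 299_792 / 1.47   # light speed in fiber (index assumed)

def max_km(budget_ms, round_trip=True):
    """Max site separation that fits inside a latency budget."""
    one_way_ms = budget_ms / 2 if round_trip else budget_ms
    return one_way_ms / 1000 * C_FIBER_KM_S

print(f"4 ms round-trip budget: ~{max_km(4):.0f} km between sites")
print(f"4 ms one-way budget:    ~{max_km(4, round_trip=False):.0f} km")
```

Even on the conservative round-trip reading, that is a radius spanning several states’ worth of water-rich, low-cost land, which is the whole point of the micro-center argument.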
The Stranded Asset Time Bomb
Just a thought: never build the biggest house in the neighborhood.
Consider the brutal math:
| Risk Factor | Hyperscale DCs | SDAIN Microgrids |
|---|---|---|
| Construction Lead Time | 3-5 years | 9-12 months |
| Power Dependency | Grid hostage | Water-cooled arbitrage |
| Cooling Efficiency | 1.1 PUE (struggling) | 0.92 PUE (achievable) |
| Obsolescence Horizon | 2028-2030 | N/A (modular upgrade) |
We are seeing investors finance 3-year construction projects in a market where out-of-the-box, leverageable tech deployment cycles are measured in months. The coming wave of stranded assets could make WeWork’s implosion look orderly.
Investor Crossroads
The choice isn’t “if” but “where” to bleed:
– Double down on dinosaurs: Chase tax abatements for rural mega-complexes while praying speed optimization and compression tech stalls
– Pivot to plasticity: Back software-defined AI networks (SDAIN) where workloads flow to:
– Water-rich zones (3000× cooling efficiency)
– Energy-arbitrage regions ($15/MWh valleys)
– Fiber-dense rural corridors (not “AI-ready” wastelands)
Land options widen immensely, land acquisition costs potentially decrease, fiber capacity gets used more efficiently, stronger edge AI develops naturally, and the cost of AI factory builds may fall. The cloud wars taught us this lesson already: distributed beats centralized. Yet here we are, repeating history with steel-and-concrete altars to gas contracts and large landholdings.
The Provocation
Asking the uncomfortable question: are we building AI’s infrastructure or its museums? When optical and algorithm speed-enabled microgrids can assemble 500 MW of distributed intelligence before a single hyperscale foundation cures, some of today’s media-enriched investment theses aren’t just risky; they’re architectural vanity. Energy dominance is treated as the determining factor for these mega data center locations. But what happens if latency can be minimized to the point that the delay of going an extra hundred or two hundred miles is negligible? That offsets the risk of mega-center sites whose locations are determined solely by grid energy supply or immediate natural gas supply, which seems more than a little limiting. Counting on the old vestige of "mine is bigger than yours", the smart money isn’t betting on bigger or larger; it’s betting on invisible, where infrastructure disappears into the landscape, workload by workload, byte by Xalyte-compressed byte. Build fast and quick, but intelligently. Pride of authorship is getting in the way sometimes, it seems. Because in the end, Cinderella’s castle was just a pumpkin, and some of these Taj Mahal power-hungry AI palaces might turn to dust before the clock strikes 2035.
Tom Wilson – twilson@navitiventures.com / www.navitiglobalventures.com