#675: The Intelligence Factory: How AI is Rebuilding the Cloud

From liquid cooling to nuclear power, Herman and Corn explore how AI is transforming data centers into high-density "intelligence factories."

Episode Details
Duration: 30:08
Pipeline: V4
TTS Engine: LLM

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

In a recent episode of My Weird Prompts, hosts Herman and Corn Poppleberry took a deep dive into the rapidly evolving world of data center architecture. Recorded in early 2026, the discussion centered on a fundamental shift in how the world’s digital infrastructure is built. As Herman noted, the cloud is no longer just "someone else’s computer" used for storing photos or emails; it has transformed into a massive, physical "intelligence factory" designed to churn out the next generation of artificial intelligence.

From Digital Libraries to Intelligence Factories

The core of the discussion focused on the transition from traditional central processing units (CPUs) to clusters of graphics processing units (GPUs). Herman explained that before the AI boom of 2022, data centers functioned like digital libraries. They were optimized for "North-South" traffic—data moving from the internet to a server and back to a user. These facilities were predictable, drawing manageable amounts of power that could be cooled with standard air conditioning.

However, modern AI workloads have changed the math. AI training requires "East-West" traffic, where thousands of GPUs must communicate with one another constantly. This shift has turned data centers into synchronous supercomputers. Herman highlighted that while a traditional server rack might draw five to ten kilowatts of power, a modern AI rack filled with cutting-edge chips can draw upwards of 150 kilowatts. This 15- to 30-fold increase in power density is forcing engineers to rethink every aspect of building design, from the thickness of copper power lines to the structural integrity of the floors.

The End of Air Cooling

One of the most significant insights from the episode was the physical limit of air cooling. Herman described how trying to cool a 150-kilowatt rack with fans would require air moving at "hurricane speeds," creating a noise level comparable to a jet engine. This has made liquid cooling a mandatory requirement for the AI era.

The brothers discussed various methods of liquid cooling, ranging from "rear-door heat exchangers"—which function like a car’s radiator—to "direct-to-chip" cooling. In the latter, cold plates are mounted directly onto the GPUs, with liquid circulating through the hardware to carry heat away. This transition highlights a striking irony of modern tech: the most advanced neural networks on Earth are now entirely dependent on high-end plumbing.

The "Greenfield" Advantage

A major theme of the conversation was whether newer, AI-first companies have an advantage over established tech giants. Herman argued that "greenfield" projects—facilities built from the ground up for AI—possess a distinct edge. Legacy providers like Amazon and Microsoft face the "nightmare" of retrofitting older data centers that were never designed for the weight of liquid-cooled racks or the extreme power requirements of modern GPUs.

Newer players can build "slab-on-grade" floors to support massive weight and design buildings specifically around liquid-to-liquid cooling loops. They can also optimize for the "AI Infrastructure Tug-of-War": because the speed of light is constant, GPUs must be packed tightly together to reduce latency, even though thermal constraints push them apart to shed heat. Only specialized, newly built facilities can successfully balance these competing needs.

The Nuclear Renaissance

Perhaps the most striking takeaway from the episode was the scale of energy required to sustain this growth. Herman pointed out that we are entering the era of the "gigawatt-scale" data center—facilities that require as much power as 750,000 homes. Because existing power grids cannot keep up with this demand, tech companies are increasingly becoming energy companies.

Herman and Corn discussed the "Nuclear Renaissance" currently underway, citing Microsoft’s move to help restart a reactor at Three Mile Island and Google’s interest in Small Modular Reactors (SMRs). The cloud has moved past its "utility phase," where compute was a simple resource like water. Today, it is a specialized industrial process that is reshaping the global energy landscape and the very physical structures that house our digital world.

Downloads

Episode Audio

Download the full episode as an MP3 file

Download MP3
Transcript (TXT)

Plain text transcript file

Transcript (PDF)

Formatted PDF with styling

Read Full Transcript

Episode #675: The Intelligence Factory: How AI is Rebuilding the Cloud

Daniel's Prompt
Daniel
Hi Herman and Corn. Data centers have lately emerged from relative obscurity into the news, and I’d like to discuss how they work, specifically in the context of the AI era and how that has changed their requirements and the types of facilities being built.

Traditionally, cloud computing provided "elastic compute" and "elastic storage," but lately we’re seeing more cloud service providers specializing in AI. I’ve realized that traditional data centers weren't optimized for GPU workloads; before AI, servers typically relied on CPUs, RAM, and storage. Now there has been a radical shift toward massive clusters of GPUs.

From an architecture standpoint, how much has this dramatic change in hardware requirements altered how data centers are put together, from rack configuration to cooling? Are these new, AI-first cloud companies at a significant advantage because they are building from scratch, while established providers have to re-architect their existing facilities to optimize for VRAM?
Corn
Hey everyone, welcome back to My Weird Prompts. I am Corn, and I am sitting here in our living room in Jerusalem. It is a beautiful February afternoon in twenty twenty-six, and the light is hitting the Old City walls just right. I am here with my brother, the man who probably has more tabs open about server architecture than anyone else on the planet, Herman Poppleberry.
Herman
Herman Poppleberry, reporting for duty. It is good to be here, Corn. I have been looking forward to this one because the world of infrastructure is finally having its moment in the absolute center of the spotlight. It used to be that only the deepest nerds cared about what was happening in a windowless building in Northern Virginia or Iowa, but now, it is the lead story on every financial and tech news site.
Corn
It really is. You know, for the longest time, the cloud was just this abstract concept for most people. It was where your photos lived or where your emails were stored. It was "someone else’s computer," and we didn't really care what that computer looked like. But lately, the physical reality of the cloud is becoming impossible to ignore. It is consuming massive amounts of power, it is changing the real estate market, and it is even restarting nuclear reactors. Today’s prompt from Daniel is about exactly that. He is curious about how data centers have changed in the era of artificial intelligence. Specifically, how the shift from traditional central processing units to massive clusters of graphics processing units has altered the very architecture of the buildings themselves.
Herman
It is a fascinating shift, Corn. Daniel mentioned looking at photos of data centers and thinking they look like low-rise cow sheds spread over huge swaths of land. And honestly, he is not wrong. From the outside, a data center is one of the most boring-looking buildings on earth. It is a windowless gray box. But what is happening inside those boxes right now is a radical departure from everything we have built over the last twenty years. We are moving from the era of the "digital library" to the era of the "intelligence factory."
Corn
Right, and Daniel’s question about whether new, artificial intelligence-first cloud companies have an advantage over the established giants is a great one. If you are building from scratch for GPUs in twenty twenty-six, are you in a better position than someone trying to retrofit a facility built in two thousand ten?
Herman
The short answer is yes, but the long answer involves a complete rethinking of thermodynamics, electrical engineering, and networking. We should probably start by defining what the traditional data center was actually optimized for. Before the artificial intelligence boom—let's say, pre-twenty twenty-two—the primary goal of a data center was high availability for general-purpose compute. You had rows and rows of server racks, each filled with servers running CPUs. These were great for web hosting, databases, and running your average enterprise software. They were designed for "North-South" traffic—data coming in from the internet, being processed, and going back out to a user.
Corn
And those traditional setups were relatively predictable, right? I mean, you knew how much power a CPU-based server would draw, and you knew how much heat it would generate. It was manageable with standard air conditioning.
Herman
Exactly. In a traditional data center, a single rack might draw somewhere between five and ten kilowatts of power. You could cool that by just blowing cold air under a raised floor and through the racks. It was like cooling a room with a few high-end gaming PCs. The "hot aisle, cold aisle" configuration was the gold standard. But when you move to artificial intelligence workloads, specifically training large language models or running massive inference clusters, the power density explodes. We aren't talking about a small increase. We are talking about a total transformation. A single modern server rack today, filled with Nvidia Blackwell or the newer Rubin chips that are just starting to hit the floor, can draw one hundred, one hundred twenty, or even one hundred fifty kilowatts.
Corn
One hundred fifty kilowatts? Herman, I am trying to visualize that. That is like trying to run the power of an entire neighborhood through a space the size of a refrigerator. How do you even get that much electricity into a single rack without the wires just melting?
Herman
You hit the nail on the head. That is the first major architectural change: power delivery. In the old days, you could bring power in at standard voltages and distribute it across the floor. Now, we are seeing data centers bring medium-voltage power directly to the row or even the rack. We are seeing a shift from twelve-volt power delivery on the motherboard to forty-eight-volt or even higher to reduce resistive losses. If you don't do that, the "bus bars"—the big copper rails that carry electricity—would have to be as thick as your arm.
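To put rough numbers on Herman's point, here is a minimal Python sketch of the current and resistive loss at 12 volts versus 48 volts. The rack power matches the figure quoted above; the bus-bar resistance is an illustrative assumption, not a measured spec.

```python
# I = P / V and P_loss = I^2 * R: why higher distribution voltage wins.
RACK_POWER_W = 150_000        # 150 kW AI rack, as discussed above
BUS_RESISTANCE_OHM = 0.0001   # assumed 0.1 milliohm of bus-bar resistance

for volts in (12, 48):
    amps = RACK_POWER_W / volts
    loss_w = amps ** 2 * BUS_RESISTANCE_OHM
    print(f"{volts:>2} V feed: {amps:>7,.0f} A, ~{loss_w / 1000:.1f} kW lost as heat")
```

Quadrupling the voltage cuts the current four-fold and the resistive loss sixteen-fold for the same conductor, which is the whole case for 48-volt delivery and medium-voltage distribution upstream.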
Corn
And then there is the heat. If you are pumping one hundred fifty kilowatts of power into a rack, almost all of that is coming out as heat. You can't just use a big fan for that, can you?
Herman
No way. Air is actually a terrible carrier of heat compared to liquid. We have reached the physical limit of air cooling. To cool a one hundred-kilowatt rack with air, you would need fans spinning so fast and moving so much volume that the noise would be deafening—like standing behind a jet engine—and the air would be moving at hurricane speeds. It is just not practical. This is why liquid cooling has gone from a niche enthusiast thing for overclocking gamers to an absolute requirement for modern artificial intelligence data centers.
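The governing formula is Q = ṁ·c_p·ΔT: the heat a coolant can carry scales with its mass flow, specific heat, and temperature rise. A minimal sketch for Herman's 100-kilowatt rack, assuming a 15-kelvin coolant temperature rise (a plausible but illustrative figure):

```python
# Air vs. water flow needed to carry 100 kW of heat, via Q = m_dot * c_p * dT.
Q_W = 100_000                     # heat load (W)
DT_K = 15                         # assumed coolant temperature rise (K)
CP_AIR, RHO_AIR = 1005, 1.2       # J/(kg*K), kg/m^3
CP_H2O, RHO_H2O = 4186, 1000      # J/(kg*K), kg/m^3

air_m3_s = Q_W / (CP_AIR * DT_K) / RHO_AIR
water_l_min = Q_W / (CP_H2O * DT_K) / RHO_H2O * 1000 * 60

print(f"Air:   {air_m3_s:.1f} m^3/s (~{air_m3_s * 2119:,.0f} CFM) through one rack")
print(f"Water: {water_l_min:.0f} L/min through one rack")
```

Roughly 5.5 cubic meters of air per second versus about 96 liters of water per minute for the same rack: water's volumetric heat capacity is on the order of 3,500 times that of air.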
Corn
So, what does that look like in practice? Are we talking about pipes running into every server?
Herman
Yes, exactly. There are a few main flavors. The "entry-level" for AI is rear-door heat exchangers. You basically replace the back door of the server rack with a giant radiator, like the one in your car, with chilled water running through it. The server fans blow hot air through that radiator, and the water carries the heat away. But for the really high-end stuff—the clusters training the next generation of models—we use direct-to-chip cooling. Cold plates are mounted directly onto the GPUs and the high-bandwidth memory. Liquid circulates through those plates, picks up the heat at the source, and carries it out to a cooling tower or a heat exchanger.
Corn
It is funny to think that the most advanced systems on the planet, these massive neural networks, are ultimately dependent on plumbing. We are basically building giant, high-tech water-cooled engines.
Herman
It really is plumbing. And that brings us back to Daniel’s question about the "greenfield" advantage. If you are building a new facility today—what we call a "greenfield" project—you are designing the floor to support the weight of these massive liquid-cooled racks. A liquid-cooled rack is significantly heavier than an air-cooled one because of the fluid, the manifolds, and the denser hardware. If you are an established provider like Amazon or Microsoft, you have hundreds of existing data centers that were built with raised floors designed for lighter loads and air cooling. Retrofitting those is a nightmare. You have to reinforce the concrete, rip up the floors to install heavy-duty piping, and often, you have to reduce the number of racks you can fit because the power and cooling infrastructure takes up so much more space.
Corn
So, the "AI-first" companies like CoreWeave or Lambda Labs, or even the specialized builds for X-A-I, they are building these "supercomputer-as-a-service" facilities from the ground up. They don't have to worry about legacy support.
Herman
Exactly. They can optimize every single inch. For example, they can build "slab-on-grade" floors instead of raised floors. They can design the entire building around a "liquid-to-liquid" cooling loop. They can even place the data center in a location where the outside air is cold enough to provide "free cooling" for the water loops most of the year. But beyond the cooling and the power, the second big shift is the "interconnect." This is something Daniel touched on when he mentioned the shift from CPUs to GPUs.
Corn
Right, in a traditional setup, servers are somewhat independent. If I am hosting a website, my server doesn't really care what the server next to it is doing. But AI is different.
Herman
It is fundamentally different. Training a large model is a "synchronous" process. You have thousands, sometimes tens of thousands of GPUs that all need to talk to each other constantly. They are essentially acting as one giant, distributed supercomputer. In a traditional data center, the "networking" was designed for that North-South traffic I mentioned—user to server. But in an AI cluster, the traffic is "East-West"—server to server.
Corn
And that traffic is massive, right?
Herman
It is staggering. We are talking about networking speeds of four hundred gigabits or eight hundred gigabits per second per link, and we are already moving toward one point six terabits. To make this work, you need specialized networking like Nvidia’s InfiniBand or the new Ultra Ethernet Consortium standards. And here is the architectural kicker: because the speed of light is a constant, the physical distance between these GPUs matters. If your cables are too long, the "latency"—the time it takes for data to travel—slows down the entire training process. This means you have to pack the racks as tightly as possible.
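A small sketch of why cable length matters, assuming light travels at roughly two-thirds of c in fiber; the message size and link speed are illustrative:

```python
# Propagation delay vs. serialization delay on one GPU-to-GPU link.
C = 3.0e8                  # speed of light in vacuum, m/s
V_FIBER = 2 * C / 3        # ~2e8 m/s in optical fiber

MESSAGE_BYTES = 100_000    # assumed 100 KB gradient chunk
LINK_BITS_PER_S = 800e9    # 800 Gb/s link

serialize_us = MESSAGE_BYTES * 8 / LINK_BITS_PER_S * 1e6
for cable_m in (3, 30, 300):   # same row, same hall, across a campus
    flight_us = cable_m / V_FIBER * 1e6
    print(f"{cable_m:>3} m: {flight_us:5.2f} us in flight + {serialize_us:.2f} us on the wire")
```

At 300 meters the flight time alone exceeds the time to push the whole chunk through an 800-gigabit link, and a synchronous training step waits on the slowest of thousands of such exchanges.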
Corn
So, we have a paradox. We have these racks that are generating record-breaking amounts of heat, which suggests we should spread them out. But the networking requirements say we have to pack them together as tightly as possible for speed.
Herman
That is the "AI Infrastructure Tug-of-War." Physics wants them apart; logic wants them together. The only way to win that war is with extreme liquid cooling and incredibly dense power delivery. This is why the "cow shed" analogy is actually quite apt. These buildings are becoming highly specialized shells for a single, massive machine. We are seeing the rise of "megacampuses" where you might have a gigawatt of power—enough to power seven hundred fifty thousand homes—dedicated to a single site.
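The homes figure holds up as rough arithmetic, assuming an average household draw in the low single-digit kilowatts:

```python
# 1 GW spread across 750,000 homes implies the average continuous draw:
SITE_W = 1e9
HOMES = 750_000
kw_per_home = SITE_W / HOMES / 1000
print(f"{kw_per_home:.2f} kW per home, ~{kw_per_home * 8760:,.0f} kWh per year")
```

That works out to about 1.3 kilowatts, or roughly 11,700 kilowatt-hours per home-year, which is in the right range for a typical US household.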
Corn
A gigawatt? That is a staggering number. I remember reading that Microsoft and OpenAI were talking about a project called "Stargate" that could cost a hundred billion dollars and require that kind of power. Is that where we are headed?
Herman
We are already there. In twenty twenty-five, we saw the first "gigawatt-scale" data center plans get approved. But finding a gigawatt of power on the existing grid is almost impossible. This is why we are seeing the "Nuclear Renaissance" in the data center world. Microsoft made headlines by helping to restart a reactor at Three Mile Island. Amazon bought a data center campus directly connected to a nuclear plant in Pennsylvania. Google is looking at Small Modular Reactors, or SMRs. These companies are becoming energy companies because the grid simply cannot keep up with the "AI-first" demand.
Corn
It feels like we are moving away from the "utility" phase of the cloud. It used to be like a water company—you turn on the tap, and the compute flows. Now, it feels more like a specialized industrial process.
Herman
That is a great way to put it. Daniel mentioned "elasticity." In the traditional cloud, elasticity was the killer feature. You could spin up a thousand virtual machines for an hour and then turn them off. But you can't really do that with a twenty-thousand-GPU cluster. Those clusters are so expensive—billions of dollars in hardware—that they need to be running at ninety-nine percent utilization twenty-four-seven to make the economics work. You don't "burst" into a Blackwell cluster; you reserve it months or years in advance. It is more like renting time on a particle accelerator than buying a utility.
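A back-of-the-envelope sketch of that economics argument; every input here is an illustrative assumption, not anyone's actual pricing:

```python
# Amortized cost per useful GPU-hour at different utilization levels.
CAPEX_PER_GPU = 40_000      # assumed all-in $ per GPU (server + network share)
LIFETIME_HOURS = 4 * 8760   # assumed 4-year depreciation window
POWER_KW = 1.5              # assumed draw per GPU incl. cooling overhead
POWER_COST_KWH = 0.08       # assumed industrial electricity rate, $/kWh

for utilization in (0.99, 0.50):
    busy_hours = LIFETIME_HOURS * utilization
    cost_per_hour = CAPEX_PER_GPU / busy_hours + POWER_KW * POWER_COST_KWH
    print(f"{utilization:.0%} utilization: ${cost_per_hour:.2f} per useful GPU-hour")
```

Dropping from 99 to 50 percent utilization nearly doubles the effective cost of every useful hour, which is why these clusters are sold as long reservations rather than burstable instances.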
Corn
Let’s talk about the VRAM issue Daniel mentioned. VRAM, or Video Random Access Memory—though in the data center we usually call it HBM, or High Bandwidth Memory—is the specialized memory that sits right next to the GPU. Why is that such a bottleneck for the architecture?
Herman
Because in AI, data movement is everything. The "weights" of the model—the billions of parameters that make it smart—have to be stored in that high-speed memory so the GPU can access them instantly. If the model is bigger than the memory on one GPU, you have to split it across multiple GPUs. This is where the "interconnect" we talked about becomes the lifeblood of the system. If your networking is slow, the GPUs sit idle waiting for data, which is like burning money.
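Here is that memory arithmetic in a minimal sketch. The 70-billion-parameter model is hypothetical; 2 bytes per parameter for FP16 inference and roughly 16 bytes per parameter for mixed-precision training (weights, gradients, and optimizer state) are common rules of thumb:

```python
import math

PARAMS = 70e9          # hypothetical 70B-parameter model
HBM_PER_GPU_GB = 80    # H100-class accelerator, as quoted in the episode

for phase, bytes_per_param in (("FP16 inference", 2), ("mixed-precision training", 16)):
    total_gb = PARAMS * bytes_per_param / 1e9
    min_gpus = math.ceil(total_gb / HBM_PER_GPU_GB)
    print(f"{phase}: {total_gb:,.0f} GB -> at least {min_gpus} GPUs just for the model")
```

The moment the model spans chips, every step moves data East-West, so the interconnect sets the pace.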
Corn
And Daniel asked if established providers have to "re-architect for VRAM." Is that a physical building change or a hardware change?
Herman
It is both. On the hardware side, you are constantly cycling through generations. We went from the H-one-hundred with eighty gigabytes of HBM to the B-two-hundred with one hundred ninety-two gigabytes, and the upcoming Rubin chips will push that even further. But the "re-architecting" for the building comes down to the fact that these memory-heavy chips require more power and more cooling per square inch. A "legacy" data center might have the physical space for a thousand GPUs, but it might only have the power and cooling for a hundred. So you end up with a building that is ninety percent empty space because the "density" of the AI hardware has outpaced the "capacity" of the building.
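A sketch of that stranded-capacity problem; the building and rack figures are illustrative assumptions:

```python
# Rack count is capped separately by floor space and by power budget.
FLOOR_POSITIONS = 500        # what the floor plan physically holds
BUILDING_POWER_KW = 5_000    # assumed legacy hall with 5 MW of IT power

for name, rack_kw in (("CPU-era 8 kW", 8), ("AI 120 kW", 120)):
    power_limited = BUILDING_POWER_KW // rack_kw
    usable = min(FLOOR_POSITIONS, power_limited)
    print(f"{name} racks: power allows {power_limited}, "
          f"floor allows {FLOOR_POSITIONS} -> {usable} deployable")
```

The same 5-megawatt hall fills every floor position with 8-kilowatt racks but powers only about 41 racks at 120 kilowatts, leaving over ninety percent of the floor dark.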
Corn
That sounds incredibly inefficient. If I am a landlord of a traditional data center, I am looking at a lot of "dead" square footage.
Herman
Exactly. And that is why the specialized providers have the advantage. They don't build "empty" space. They build high-density cells. They might have ceilings that are thirty feet high to allow for massive overhead cooling pipes and power bus bars. They might not even have a "floor" in the traditional sense, just a structural grid to hold the racks.
Corn
I want to go back to the "cow shed" thing. Daniel mentioned they look like sheds in the middle of nowhere. Why are we seeing this shift toward rural locations? Is it just about the land being cheap?
Herman
Land is part of it, but the real drivers are "Power, Pipes, and Pints." You need a massive connection to the electrical grid—the "Power." You need high-capacity fiber optic lines—the "Pipes." And you need "Pints"—millions of gallons of water for cooling. In a crowded city like London or New York, you can't easily get a gigawatt of power or the water rights to cool a massive cluster. But in rural Iowa, or the plains of Texas, or the fjords of Norway, you can.
Corn
And as you mentioned earlier, for training a model, the "latency" to the end-user doesn't matter. If I am training "GPT-Six," it doesn't matter if the data takes thirty milliseconds to get to me. I just need the training to finish.
Herman
Precisely. This has led to a "decoupling" of the data center market. We now have "Training Centers" and "Inference Centers." The training centers are the giant "cow sheds" in the middle of nowhere. They are the factories. The inference centers—where the AI actually answers your questions in real-time—still need to be near the cities to keep the response time snappy. But even those inference centers are being forced to upgrade. Even "running" a large model is significantly more power-intensive than serving a webpage.
Corn
So, even the "edge" of the network is getting hotter and hungrier.
Herman
It is. We are seeing a "trickle-down" of liquid cooling. We are starting to see liquid-cooled racks appearing in "Colocation" facilities in downtown areas where they used to only have air cooling. It is a total transformation of the stack.
Corn
What about the "Sovereign AI" trend? I have been hearing that countries are now building their own national AI data centers. How does that fit into this architectural shift?
Herman
It is a huge driver. Countries like Saudi Arabia, the UAE, and even smaller nations in Europe are realizing that "compute" is a national resource, like oil or grain. They don't want their national data being processed in a "cow shed" in Virginia. So they are building their own. And because they are starting now, in twenty twenty-six, they are building "AI-first" from day one. They are skipping the whole CPU-era architecture and going straight to liquid-cooled, high-density GPU clusters. In a way, they are "leapfrogging" the old infrastructure, much like some countries skipped landlines and went straight to mobile phones.
Corn
That is a powerful analogy. So, to answer Daniel’s question directly: yes, the newcomers have a massive advantage in efficiency and density. But the incumbents—the Amazons and Googles—have something the newcomers don't: "Gravity."
Herman
Right. They have the existing data. If your company’s entire database is already in AWS, you are much more likely to use their AI tools, even if the underlying data center is a retrofitted older building, because moving petabytes of data to a new provider is expensive and slow. The "Big Three" are using their "Data Gravity" to buy themselves time while they frantically build new AI-specific zones.
Corn
It’s a race between "New Infrastructure" and "Old Data."
Herman
Exactly. And the scale of the investment is just mind-blowing. We are seeing capital expenditure numbers that look like the GDP of mid-sized countries. All to build these "factories of intelligence."
Corn
I wonder about the environmental impact, Herman. We talk about liquid cooling and nuclear power, but the sheer amount of water and electricity is a point of contention in a lot of these rural communities.
Herman
It is the biggest challenge the industry faces. In twenty twenty-four and twenty twenty-five, we saw several major projects get blocked by local communities worried about their water tables. This is why "Closed-Loop" liquid cooling is becoming the standard. Instead of evaporating water to cool the racks, you circulate the same water over and over, using massive fans to chill it—essentially a giant version of the radiator in your car. It is more expensive to build, but it uses almost no "consumptive" water.
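To see why communities care, here is the evaporation arithmetic for a hypothetical 100-megawatt site, using water's latent heat of vaporization (about 2.26 megajoules per kilogram); the share of heat rejected evaporatively is an assumption:

```python
SITE_HEAT_W = 100e6            # hypothetical 100 MW of heat to reject
LATENT_HEAT_J_KG = 2.26e6      # latent heat of vaporization of water
EVAP_FRACTION = 0.8            # assumed share rejected by evaporation

kg_per_s = SITE_HEAT_W * EVAP_FRACTION / LATENT_HEAT_J_KG
print(f"~{kg_per_s * 86_400 / 1e6:.1f} million liters evaporated per day")
```

Around three million liters a day simply leave as vapor; a closed loop trades that draw for bigger dry coolers and more fan power.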
Corn
It seems like every time we solve a physics problem, we run into a resource problem, and then an engineering problem.
Herman
That is the history of the data center. It is a constant battle against the second law of thermodynamics. Entropy always wins in the end, but we are getting very good at delaying it.
Corn
Let’s talk about the "VRAM" part of Daniel’s prompt one more time. He asked about "optimizing for VRAM." In twenty twenty-six, we are seeing things like "Unified Memory Architecture" and "C-X-L"—Compute Express Link. How does that change the physical layout?
Herman
That is a great technical deep-dive. Traditionally, the GPU memory was a silo. If the data wasn't on the chip, the chip couldn't use it. But with C-X-L and newer versions of NV-Link, we are starting to see "Memory Pooling." You can have a rack where the memory is somewhat "decoupled" from the processors. This allows for even more flexibility, but it requires even more complex cabling. We are seeing fiber optic cables being used inside the rack to connect chips because copper wires just can't carry enough data over those distances anymore.
Corn
Fiber optics inside the rack? That sounds incredibly delicate and expensive.
Herman
It is. We are moving toward "Silicon Photonics," where the light-based communication happens right on the chip package. This reduces heat and increases speed. It is another example of how the "architecture" isn't just the building; it is the "micro-architecture" of how the components talk to each other.
Corn
So, if Daniel were to walk into a state-of-the-art data center today, in February twenty twenty-six, what would he see that would look different from ten years ago?
Herman
First, he would notice the silence—or at least, a different kind of noise. Instead of the high-pitched whine of thousands of small fans, he would hear the low hum of massive pumps and the rush of liquid. Second, he wouldn't see "raised floors." He would be walking on solid concrete. Third, he would see thick, insulated pipes painted in bright colors—blue for cold, red for hot—running everywhere, looking more like a chemical plant than a computer room. And finally, he would see the racks themselves. They wouldn't be the thin, airy things of the past. They would be dense, heavy, "monolithic" blocks, glowing with the status lights of thousands of interconnected GPUs.
Corn
It sounds like a scene from a sci-fi movie. But it is the reality of how our emails are being drafted and our images are being generated.
Herman
It is the "physical body" of the AI. We spend so much time talking about the "mind"—the algorithms—but the body is this massive, thumping, liquid-cooled organism of silicon and copper.
Corn
I think we should talk about the practical takeaways for our listeners. If you are a developer or a business leader, why does this "plumbing" matter to you?
Herman
It matters because "Compute is the new Oil." In the twenty-tens, we lived in an era of "Compute Abundance." You could write inefficient code and just throw more cloud instances at it because they were cheap. In the twenty-twenties, we are in an era of "Compute Scarcity." The physical limits of these buildings mean that there is a finite amount of high-end AI compute available. If you can make your model ten percent more efficient, you aren't just saving money; you are potentially making it possible to run your product at all.
Corn
So, "Efficiency" is the new "Scale."
Herman
Exactly. Understanding the "VRAM" constraints Daniel mentioned is a competitive advantage. If you know how to "quantize" a model so it fits into the HBM of a single chip instead of needing two, you have just halved your infrastructure costs and reduced your latency. The "hardware-aware" developer is the most valuable person in tech right now.
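The quantization arithmetic Herman is pointing at, sketched for a hypothetical 140-billion-parameter model against an 80-gigabyte-class accelerator:

```python
import math

PARAMS = 140e9   # hypothetical model size
HBM_GB = 80      # 80 GB-class accelerator

for fmt, bytes_per_param in (("FP16", 2.0), ("FP8/INT8", 1.0), ("INT4", 0.5)):
    weights_gb = PARAMS * bytes_per_param / 1e9
    print(f"{fmt:>8}: {weights_gb:5.0f} GB of weights -> "
          f"{math.ceil(weights_gb / HBM_GB)} GPU(s)")
```

Each halving of precision halves the number of chips the weights span, quality permitting, which is exactly the halved-infrastructure-cost effect he describes.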
Corn
And for the average person, it is a reminder that the "digital" world isn't weightless. Every time you ask an AI to summarize a meeting or generate a cat video, a pump somewhere in Iowa speeds up, a valve opens, and a few more watts of power are pulled from a nuclear reactor or a wind farm.
Herman
It is a very grounded way to look at the world. We are building a global brain, but that brain needs a massive metabolic system to keep it from overheating.
Corn
This has been such a great deep dive, Herman. I feel like I have a much better mental model of what is actually happening inside those "cow sheds" now. It is not just rows of computers; it is a massive, integrated organism.
Herman
It is the most complex machine humanity has ever built, and we are just getting started. The "Stargate" era is just beginning. By twenty thirty, these buildings might not even look like buildings. They might be integrated into the cooling systems of cities, providing heat for homes while they process the world’s data.
Corn
"Data Centers as District Heating." I love that. It turns the "waste" of the AI era into a resource.
Herman
That is the goal. Circularity. But we have a lot of plumbing to do before we get there.
Corn
Well, thank you, Daniel, for that prompt. It really forced us to look under the hood of the internet in twenty twenty-six. And thanks to all of you for listening. We have been doing this for over six hundred episodes now, and it is your curiosity that keeps us going.
Herman
It really is. If you have been enjoying "My Weird Prompts," we would really appreciate it if you could leave us a review on Spotify or Apple Podcasts. It genuinely helps the show reach new people who are as nerdy as we are. It is the "social interconnect" that keeps our show running.
Corn
And if you want to get in touch, you can always find us at our website, myweirdprompts dot com. We have a contact form there, and you can also find our full archive and RSS feed. If you want to email us directly, you can reach the show at show at myweirdprompts dot com.
Herman
We love hearing from you. Whether it is a follow-up question about liquid cooling, a thought on the future of nuclear power, or a completely random idea for a future episode, keep them coming. We read every single one.
Corn
Alright, that is it for today. From our living room in Jerusalem, I am Corn.
Herman
And I am Herman Poppleberry.
Corn
Thanks for listening to My Weird Prompts. We will see you next time.
Herman
Goodbye, everyone! Keep your chips cool and your prompts weird!

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.