So, Herman, I was looking at the workstation in the corner of the living room the other day, and I noticed that even though we are well into March of two thousand twenty-six, you still have that side panel off and a literal desk fan pointed at the motherboard. It made me realize that some habits just never die, even when we are living in the supposed future of computing.
Herman Poppleberry here, and Corn, you know exactly why that fan is there. We are pushing those components to their absolute limit. It is not just a habit; it is a necessity when you are chasing the top one percent of benchmark scores. But it is funny you mention it, because our housemate Daniel actually sent us a prompt that perfectly aligns with my current obsession. He wanted us to go beyond the central processing unit discussion we had recently and really tear into the world of graphics processing unit and random access memory optimization. He is basically asking if the set it and forget it era of hardware is actually over, or if we are just making extra work for ourselves.
It is a perfect follow up to our last deep dive. We spent a lot of time talking about the brain of the computer, the CPU, but if the CPU is the brain, the GPU is the muscle and the RAM is the short term memory that keeps everything from grinding to a halt. Daniel was asking if manual tuning is even worth it anymore. He noted that modern hardware is so smart and boosts so aggressively out of the box that the old days of just sliding a bar to the right might be over. He is wondering if we are just fighting against algorithms that are already better than us.
Daniel is right to be skeptical, but he is also missing the nuance that makes modern tuning in two thousand twenty-six so rewarding. It is true that the era of getting a thirty percent performance boost just by increasing the clock speed is mostly behind us. The manufacturers, whether it is NVIDIA with the Blackwell architecture, AMD with RDNA four, or Intel with their latest Battlemage or Celestial cards, have gotten much better at squeezing every drop of performance out of the silicon before it even leaves the factory. They use sophisticated algorithms to boost clocks based on thermal headroom and power delivery. But that actually makes our job more interesting. Tuning today is not about brute force anymore. It is about surgical precision. It is about efficiency, stability, and finding the specific point where your unique piece of silicon performs best, rather than relying on the safe, generic settings the factory provides.
I like that framing. It is less like putting a turbocharger on a car and more like fine tuning the fuel injection system for a specific altitude. It is about refinement. Before we get into the weeds of RAM, let us start with the GPU. For most people, the graphics card is the most expensive part of their build. It is the heart of any creative workstation or gaming rig. When we talk about GPU tuning, we usually hear terms like core clock, memory clock, and voltage curves. Can you break down how those actually interact in a modern architecture?
Think of the GPU as a massive parallel processor. It is not doing one or two complex tasks like a CPU. It is doing thousands of tiny, simple tasks simultaneously. The core clock is essentially the heartbeat of those processors. It determines how many cycles per second those thousands of small cores can execute. The memory clock is the speed at which the video RAM, or VRAM, can feed data to those cores. If the cores are fast but the memory is slow, the cores sit idle. If the memory is fast but the cores are slow, you have a bottleneck. But the real secret sauce in two thousand twenty-six is the voltage frequency curve, or the V F curve.
Right, so you want them in balance. But the thing that always trips people up is the voltage. Why is the voltage curve so much more important now than it was ten years ago? Back then, we just added more voltage until it crashed or got too hot.
Because of power density and heat. In the old days, we had plenty of thermal headroom. Today, we are packing billions of transistors into tiny areas. We have reached a point where we can't just keep throwing more power at the chip without hitting a power limit or a thermal wall almost instantly. Modern GPUs use that V F curve as a lookup table. Instead of a single clock speed, the GPU has a map. It says, at zero point eight volts, I will run at eighteen hundred megahertz. At zero point nine volts, I will run at two thousand megahertz. And so on, all the way up to the peak voltage, which is usually around one point one volts for modern cards.
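That lookup-table idea can be sketched in a few lines of Python. The voltage and clock pairs below are purely illustrative, loosely based on the numbers Herman quotes, not values from any real card's firmware:

```python
# Illustrative voltage-frequency (V/F) curve, modeled as a lookup table.
# Voltages in volts, frequencies in megahertz -- example values only.
VF_CURVE = [
    (0.80, 1800),
    (0.90, 2000),
    (0.95, 2200),
    (1.05, 2400),
    (1.10, 2500),
]

def boost_clock(available_voltage):
    """Return the highest frequency the curve allows at or below
    the voltage the card is currently willing to supply."""
    best = 0
    for volts, mhz in VF_CURVE:
        if volts <= available_voltage:
            best = max(best, mhz)
    return best

print(boost_clock(0.95))  # highest clock reachable at 0.95 volts: 2200
```

The real card walks this table many times per second; manual tuning amounts to rewriting the entries so the high clocks sit at lower voltages.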
And the GPU is constantly bouncing around that curve based on what it is doing, right? It is like a living thing reacting to its environment.
It is looking at the temperature, the power draw, and the load every few milliseconds. If it gets too hot, it moves down the curve to a lower voltage and a lower frequency. This is what we call thermal throttling. The goal of manual tuning today is often to flatten that curve. We want to tell the GPU, hey, you do not need one point one volts to hit twenty-five hundred megahertz. I have tested this specific chip, and it can do twenty-five hundred megahertz at only zero point nine five volts. By doing that, you are essentially tricking the card into staying at its highest boost clock longer because it is generating less heat.
This leads us directly into undervolting, which I think is the most misunderstood part of hardware tuning. To a casual observer, lowering the voltage sounds like you are lowering the power and therefore lowering the performance. It sounds like you are underclocking it. But in reality, it often does the opposite.
It is counterintuitive, but it is one of the most powerful tools we have. When you undervolt, you are essentially increasing the efficiency of the chip. By running a high clock speed at a lower voltage, the chip produces less heat and consumes less power. Because it is cooler, it does not hit those thermal limits that would normally cause it to throttle down. So, while your peak theoretical speed might stay the same, your sustained average speed over an hour of rendering or gaming is actually higher. You get a more consistent, quieter, and cooler experience. It is the difference between a sprinter who gasps for air after ten seconds and a marathon runner who maintains a high speed for hours.
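The reason the math works out is that dynamic power scales roughly with frequency times voltage squared, so dropping voltage at a fixed clock pays off quadratically. A back-of-the-envelope sketch, with the reference point being an assumed stock operating point rather than any measured card:

```python
def relative_power(freq_mhz, volts, ref_freq=2500, ref_volts=1.10):
    """Dynamic power scales roughly as f * V^2; return power
    relative to an assumed stock operating point."""
    return (freq_mhz / ref_freq) * (volts / ref_volts) ** 2

# Same 2500 MHz clock, undervolted from 1.10 V to 0.95 V:
saving = 1 - relative_power(2500, 0.95)
print(f"~{saving:.0%} less dynamic power at the same clock")  # ~25%
```

Roughly a quarter less heat for the same clock speed, which is exactly why the sustained boost behavior improves.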
I have seen this on my own machine. Using something like MSI Afterburner, which is still the gold standard even in two thousand twenty-six, you can see that the card wants to pull three hundred fifty watts and get loud. But with a bit of curve manipulation, you can get the same frame rates at two hundred eighty watts. That is seventy watts of heat that is no longer being dumped into the room. In a place like Jerusalem in the summer, that actually matters for your comfort as much as your hardware. It is the difference between needing the air conditioning on full blast or just having a window open.
It really does. And speaking of MSI Afterburner, it is incredible that it is still the primary tool. Even though NVIDIA has integrated some tuning into their own app, the ability to manually edit the V F curve point by point in Afterburner is essential for enthusiasts. You are looking for the sweet spot of your specific silicon lottery. And we should probably explain what that lottery actually is for the listeners who haven't heard the term.
Let us talk about that silicon lottery for a second. We use that term a lot, but for someone who just bought a high end card, it can be frustrating to realize that their card might not perform as well as the one a reviewer had. It feels like you didn't get what you paid for.
It is just the reality of semiconductor manufacturing. When you are carving transistors at a scale of a few nanometers, there are going to be microscopic imperfections. Some chips are just more efficient. They can switch states faster with less electricity. One person might be able to hit three gigahertz on their GPU core, while another person with the exact same model card can only hit two point eight gigahertz before it crashes. Manual tuning is the only way to find out where your specific card sits. If you just leave it on stock settings, the manufacturer has to set the voltage high enough to ensure that even the worst chip they sold is stable. That means if you have a great chip, you are wasting a lot of potential efficiency. You are paying for the safety margin required by the worst chips.
So, if I am a listener and I am sitting here with a brand new GPU, what is the actual real world gain I should expect? Are we talking about a two percent difference or a twenty percent difference? Because if it is only two percent, I might just go outside and enjoy the sun instead.
For most modern cards, if you are looking at raw frame rates, you are probably looking at a five to ten percent gain if you really push the overclock. But if you are looking at it from an undervolting perspective, you might see a fifteen percent reduction in power and noise for the same performance. To me, that is a much bigger win. A five percent frame rate increase is often imperceptible. Going from eighty frames per second to eighty-four frames per second is hard to notice. But going from a fan that sounds like a jet engine to a fan that is a low hum? That changes the entire experience. It makes the computer feel like a high end tool rather than a struggling appliance.
That is a great point. It is about the quality of the performance, not just the quantity. Now, before we move on to RAM, I want to touch on GPU memory overclocking. We talked about the core, but the VRAM is a different beast. I have noticed that with modern GDDR6X or the newer GDDR7 memory we are seeing now, you can sometimes push the clock speed too far and actually lose performance without the system crashing. How does that work? It seems like if it doesn't crash, it should be faster.
That is a fascinating phenomenon called error correction. Modern high speed video memory has built in error checking and correction, or ECC. In the old days, if your memory was too fast, you would see visual artifacts, like little sparkles on the screen, or the computer would just blue screen. Today, the memory controller detects that a bit was flipped or an error occurred because the frequency was too high for the voltage, and it just asks for the data again. It corrects the error on the fly. But that takes time. So, you might increase your memory clock by five hundred megahertz, but because the system is constantly correcting errors, your actual throughput drops. You have to benchmark as you go. If you see your score going down while your clock speed is going up, you have passed the point of stability.
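You can model that throughput rollover with a toy function. The stable limit and the retry penalty here are invented numbers chosen to show the shape of the curve, not measurements of any real memory:

```python
def effective_throughput(mem_clock, stable_limit=1400, penalty=3.0):
    """Toy model of VRAM error correction: past the stable limit,
    retries eat into the raw bandwidth gain. Units are arbitrary."""
    error_rate = max(0.0, (mem_clock - stable_limit) / stable_limit) * penalty
    return mem_clock * max(0.0, 1.0 - error_rate)

for clk in (1300, 1400, 1500, 1600):
    print(clk, round(effective_throughput(clk)))
```

The score climbs with the clock, peaks at the stable limit, then falls even though the clock keeps rising, which is the signature Herman describes.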
It is like a runner who is trying to go so fast they keep tripping. They are technically moving their legs faster, but they are spending half their time getting back up off the ground. They would actually get to the finish line faster if they just slowed down a tiny bit and stayed on their feet.
That is a perfect analogy. You want to find the speed where the runner is at a full sprint but never loses their footing. And this is why we always recommend using a benchmark like Superposition or the newer Three D Mark Steel Nomad. You run it at stock, then you bump the memory clock by fifty megahertz, run it again, and watch the score. The moment that score plateaus or dips, you back off.
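That bump-and-benchmark loop can be written down directly. The toy benchmark below stands in for a real run of Superposition or Steel Nomad, and its numbers are invented for illustration:

```python
def toy_benchmark(mem_clock, stable_limit=1400):
    """Stand-in for a real benchmark score: rises with clock until
    error correction kicks in past the stable limit, then falls."""
    overshoot = max(0, mem_clock - stable_limit)
    return mem_clock - 4 * overshoot

def find_memory_sweet_spot(benchmark, start=1300, step=50, max_clock=1800):
    """Step the memory clock up and stop the moment the score
    plateaus or dips -- the same loop you would run by hand."""
    best_clock, best_score = start, benchmark(start)
    clock = start + step
    while clock <= max_clock:
        score = benchmark(clock)
        if score <= best_score:
            break  # score stopped improving: past the stable point
        best_clock, best_score = clock, score
        clock += step
    return best_clock

print(find_memory_sweet_spot(toy_benchmark))  # stops at 1400
```

In practice each "benchmark call" is a multi-minute run, which is why people automate as little or as much of this as their patience allows.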
Alright, let us shift gears. We have covered the muscle, let us talk about the short term memory. RAM tuning is something that I think scares people even more than GPU tuning. There are so many numbers. There is the frequency, like six thousand or seven thousand two hundred or even eight thousand megatransfers per second, and then there are these cryptic timings like CL thirty or CL forty. And then there are sub timings that look like a math textbook. Why does RAM matter so much in two thousand twenty-six?
RAM is the ultimate bottleneck for modern high speed CPUs. We talked back in episode six zero six about how RAM still rules the system. The CPU is incredibly fast, but it can only work on data that it has access to. If the data is sitting on your SSD, it is too slow. It has to be loaded into RAM. But even then, the CPU has to wait for the RAM to find the right row and column and send the data back. This delay is what we call latency. In many modern workloads, especially gaming or high speed data compilation, the CPU is actually sitting idle for a significant portion of time just waiting for the RAM to respond. It is like having a world class chef who has to wait for a slow delivery driver to bring the ingredients for every single dish.
So, when someone sees a RAM kit advertised as sixty-four hundred megatransfers per second, that is the data rate, which determines the bandwidth. That is how much data can be moved once the tap is open. But the timings, like the CAS latency, are how long it takes to open the tap in the first place.
And this is where the big misconception lies. People often think that higher frequency is always better. They will buy an eight thousand megahertz kit and wonder why their computer feels sluggish or why their games are stuttering. It is because as you push the frequency higher, the timings often have to get looser to maintain stability. If you have a very high frequency but very slow timings, you might have great peak bandwidth for moving huge files, but your latency for small, random tasks is actually worse than a slower kit with tight timings. In the world of DDR5, which is what we are all using now, the latency is the killer.
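The arithmetic behind that tradeoff is simple enough to sketch. DDR memory transfers twice per clock cycle, so the true first-word latency in nanoseconds works out to the CAS latency times two thousand divided by the transfer rate:

```python
def first_word_latency_ns(cas_latency, transfer_rate_mts):
    """True CAS latency in nanoseconds. DDR transfers twice per
    clock, so the I/O clock in MHz is MT/s divided by two."""
    clock_mhz = transfer_rate_mts / 2
    cycle_ns = 1000 / clock_mhz
    return cas_latency * cycle_ns

print(first_word_latency_ns(30, 6000))  # DDR5-6000 CL30 -> 10.0 ns
print(first_word_latency_ns(40, 8000))  # DDR5-8000 CL40 -> 10.0 ns
```

Note the punchline: the eight-thousand kit with looser timings has exactly the same first-word latency as the six-thousand kit with tight timings, despite the bigger number on the box.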
I see this a lot in gaming benchmarks. The average frame rate might be high with high frequency RAM, but the one percent lows, those little stutters that make a game feel choppy, are much better on a system with low latency RAM. Can you explain why those one percent lows are so tied to memory?
Those one percent lows are the heartbeat of a smooth experience. If you are playing a fast paced game and the CPU suddenly needs to load a new texture or calculate a physics interaction, and it has to wait an extra ten nanoseconds for the RAM to respond, you feel that as a tiny hitch. If you can tighten those timings, you smooth out the delivery of frames. It makes the game feel locked in. It is the difference between a video that plays smoothly and one that has tiny, almost imperceptible micro-stutters. For a professional gamer or someone doing high end video editing, those micro-stutters are the enemy.
For most people, the first step in RAM tuning is just enabling a profile in the BIOS. We used to just call it XMP, which stands for Extreme Memory Profile, an Intel standard. But now we have EXPO for AMD systems. Why is it that we even have to do this? Why doesn't RAM just run at its advertised speed the moment you plug it in? It feels like a bit of a scam to buy sixty-four hundred megahertz RAM and have it run at forty-eight hundred by default.
That goes back to the JEDEC standards. JEDEC is the organization that sets the universal standards for memory. They are very conservative because they want to ensure that any stick of RAM will boot in any motherboard. The JEDEC standard for DDR5 might only be forty-eight hundred or fifty-two hundred megahertz. So, when you buy a kit that says six thousand megahertz on the box, that is technically an overclock. The manufacturer has tested those sticks and guaranteed they can hit that speed, but the motherboard won't do it automatically because it doesn't want to risk not booting. You have to go into the BIOS and tell it, hey, use the profile the manufacturer stored on this stick. It is like buying a car that is governed to sixty miles per hour for safety, and you have to flip a switch to let it go to one hundred.
It is amazing how many people leave twenty or thirty percent of their memory performance on the table because they never flipped that one switch in the BIOS. But what about going beyond XMP or EXPO? Daniel's prompt mentioned manual timings. Is that a rabbit hole worth falling down? I have seen the spreadsheets people make for this, and they look terrifying.
It is a deep, deep rabbit hole. If you enjoy the process of tinkering, it is incredibly rewarding. You can find significant gains in productivity tasks like video editing or large code compiles by manually tightening what we call the secondary and tertiary timings. These are the settings that the XMP profile usually leaves on auto. The motherboard will often set these very loosely to be safe. By tightening things like tREFI, which is the refresh interval, or tRFC, you can reduce the overall system latency significantly. But, and this is a big but, it is incredibly time consuming.
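The tREFI gain in particular has a clean intuition: the DRAM is unavailable for tRFC cycles out of every tREFI interval, so lengthening the interval shrinks the fraction of time lost to refresh. The cycle counts below are illustrative, not recommendations for any specific kit:

```python
def refresh_overhead(trfc_cycles, trefi_cycles):
    """Fraction of time the DRAM is busy refreshing instead of
    serving requests: tRFC out of every tREFI interval."""
    return trfc_cycles / trefi_cycles

# Loose auto settings versus a raised tREFI (illustrative numbers):
print(f"{refresh_overhead(700, 12000):.1%}")   # ~5.8% of time refreshing
print(f"{refresh_overhead(700, 65535):.1%}")   # ~1.1% with a long interval
```

The catch, and the reason it needs long stress testing, is that refreshing less often only works if the cells reliably hold their charge that long, which gets worse as the sticks heat up.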
I remember watching you do this. You change one number from twenty-four to twenty-two, and then you have to run a stress test for two hours to make sure the computer doesn't crash while you are in the middle of something important. It looked like watching paint dry, but with more anxiety.
It is not for the faint of heart. You need tools like TestMem5 with the Extreme configuration or OCCT to really hammer the RAM. If there is even one tiny error, your whole operating system could eventually get corrupted. That is the danger of RAM tuning versus GPU tuning. If a GPU overclock is unstable, the game crashes or the driver resets. You just restart and try again. If a RAM overclock is unstable, it might silently corrupt your files over the course of a week. It might flip a bit in a system file, and suddenly your Windows installation just won't boot anymore. You can lose data without even knowing it.
That is a vital warning. With RAM, you have to be absolutely sure of your stability. But let us talk about the platforms. We are in two thousand twenty-six. We have AMD AM5 and Intel LGA eighteen fifty-one. How do these platforms handle RAM differently? I know AMD has been very focused on the relationship between the RAM speed and the internal clock of the CPU, the Infinity Fabric.
That is the key for AMD users. On the AM5 platform, there is a sweet spot. Usually, it is around six thousand or six thousand four hundred megatransfers per second. This is because you want the memory clock to be in a one to one ratio with the memory controller, which AMD calls UCLK. If you go too high, say to eight thousand megahertz, the system has to switch to a two to one ratio, which actually introduces a massive latency penalty. So, a person with six thousand megahertz RAM and a one to one ratio will actually have a faster computer in most tasks than someone with eight thousand megahertz RAM and a two to one ratio. It is all about that internal synchronization.
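A toy comparison makes the ratio penalty concrete. The eight-nanosecond cost for running the controller at half speed is an assumed illustrative figure, not a measured AM5 number:

```python
def effective_latency_ns(transfer_rate_mts, cas_latency, ratio):
    """Toy model of 1:1 versus 2:1 memory-controller ratios.
    The desynchronization penalty for 2:1 mode is an assumed
    illustrative value, not a measured figure."""
    first_word = cas_latency * 2000 / transfer_rate_mts
    controller_penalty = 0.0 if ratio == 1 else 8.0  # assumed ns penalty
    return first_word + controller_penalty

print(effective_latency_ns(6000, 30, ratio=1))  # 10.0 ns
print(effective_latency_ns(8000, 40, ratio=2))  # 18.0 ns
```

Under this assumption the slower kit in synchronized mode beats the faster kit in desynchronized mode handily, which matches the behavior Herman describes.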
Intel is a bit more flexible with their gear ratios, but they also hit a wall where the motherboard itself becomes the limiting factor. I have heard that the number of RAM slots on the board actually matters for how fast you can go.
When you are pushing six or seven gigahertz through traces on a circuit board, the physical layout of the motherboard matters. This is why you see extreme overclockers using motherboards with only two RAM slots, like the ASUS Apex or the MSI Unify-X. It shortens the distance and reduces electrical interference. For the average user with a four slot board, you are probably not going to hit those crazy high speeds anyway. And in two thousand twenty-six, we are also seeing CUDIMMs, which are a new type of RAM stick that has its own clock driver on the stick itself. That is helping Intel systems reach even higher speeds, but it adds another layer of complexity to tuning.
So, let us try to synthesize this for someone who wants a balanced, optimized system. We have the CPU, which we covered before, we have the GPU, and we have the RAM. It feels like the modern approach is efficiency first. You want your CPU and GPU to be undervolted to stay cool and maintain high boost clocks, and you want your RAM to be at the highest frequency that allows for a one to one ratio and tight timings.
That is the gold standard for two thousand twenty-six. It is about creating a balanced ecosystem. If you push the GPU so hard that it is dumping four hundred watts of heat into the case, your RAM is going to get hot. And when DDR5 RAM gets hot, it becomes unstable. People often forget that. They will spend all day tuning their RAM, and it passes the tests. Then they start a heavy game, the GPU heats up the whole case, the RAM temperature climbs to sixty degrees Celsius, and suddenly the system crashes. The heat from one component affects the stability of the others.
I had that exact problem! I had to add a small fan just for the RAM because the heat from the graphics card was rising right into the DIMM slots. It is all connected. You can't tune one in a vacuum. It is like trying to keep one room in a house cold while the oven is on in the next room with the door open.
You really can't. And that leads to the practical side of this. If you are going to do this, do it in stages. Get your RAM stable first with XMP or EXPO. That is your foundation. Then undervolt your GPU to find that efficiency sweet spot. Then, if you are feeling brave, start tightening those RAM timings. If you change everything at once and the computer crashes, you have no idea which component caused it. You will be chasing your tail for days.
That is the best advice for any kind of troubleshooting or tuning. One variable at a time. Now, looking at the big picture, Daniel asked about the impact on real world performance. If I spend a weekend doing all of this, what am I actually getting? Is it the difference between a game being playable or not? Or is it just for the satisfaction of seeing a higher number in a benchmark?
Almost never is it the difference between playable and unplayable. If a game is unplayable at stock settings, no amount of tuning is going to make it smooth. You might get a ten percent boost in your average frame rate and maybe a fifteen percent boost in your one percent lows. In a professional application, like a long video render that takes four hours, you might save twenty minutes. For some people, that is huge. If you are a professional editor, twenty minutes saved every day adds up to a lot of time over a year. But for a casual gamer, it is not worth the risk of instability if you don't enjoy the process. For me, though, the real gain is the noise and the thermals. Having a system that is ten percent faster but also ten decibels quieter? That is the real magic. That is what makes the machine feel like it is working with you rather than against you.
It is about the satisfaction of knowing your hardware is doing exactly what it was designed to do, without the artificial limits set by manufacturers who have to account for the lowest common denominator. There is a certain conservative principle there, isn't there? Taking personal responsibility for your tools. Not just accepting what is given to you, but understanding how it works and making it your own. It is a form of digital craftsmanship.
I think so. It is a form of stewardship. You have invested a significant amount of money in these components. Why wouldn't you want to understand them and ensure they are operating at their peak efficiency? It is the same reason you might tune a tractor or optimize the irrigation on a farm. It is about getting the most out of what you have. It is about respect for the engineering that went into these chips.
And it prevents waste. If you can make a three year old GPU perform like a two year old GPU just by tuning it and keeping it cool, you might delay an expensive upgrade for another year. That is just good sense, especially with how prices have been lately.
And it is also just fun. There is a real sense of accomplishment when you finally hit that perfect balance. You run a benchmark, and you see that your system is in the ninety-ninth percentile for your hardware configuration. It is a little badge of honor. It says that you know your machine better than the person who built it.
I think we have given Daniel a lot to think about. To recap, don't just chase the highest numbers on the box. Focus on the voltage curves for your GPU to find that efficiency sweet spot. For RAM, make sure you are in the right gear ratio for your CPU and prioritize low latency over raw megahertz if you have to choose. And for heaven's sake, make sure you have enough airflow to handle the heat. That desk fan of yours might look silly, Herman, but I guess it has a purpose.
And test, test, test. Use tools like OCCT for the whole system, MSI Afterburner for the GPU, and TestMem5 for the RAM. If you don't test for stability, you aren't tuning, you are just gambling with your data. And nobody wants to lose a week of work because they wanted an extra three frames per second in a video game.
Well said. This has been a great deep dive. It actually makes me want to go back and look at my own workstation and see if I can squeeze a bit more efficiency out of it. Maybe I will finally put that side panel back on if I can get the temps down through undervolting.
We will see about that. I think you like the look of the open case. It makes you look like a mad scientist, or at least someone who is very busy doing something important.
Or just a sloth who is too lazy to find the screws. One of the two.
Probably both.
Well, if you have been listening to this and you are looking at your own PC with a newfound desire to tinker, we hope this gives you a good starting point. There is a wealth of knowledge out there on the specifics for every card and every RAM kit.
Definitely check out the communities on places like Overclock dot net or even the hardware subreddits. There are people who spend their whole lives documenting the specific behaviors of different silicon batches. It is a great resource. You don't have to do this alone.
And if you found this technical deep dive useful, we have a whole archive of similar explorations. You might want to check out episode six eighty-four where we talked about the history of overclocking and how we got to this point, or episode six thirty-seven where we broke down the hidden science of motherboards, which is the foundation for everything we talked about today. The motherboard is the unsung hero that facilitates all this power delivery.
Yeah, it really is. It is worth a listen if you want to understand why some boards cost five hundred dollars and others cost one hundred. It is all in the voltage regulator modules and the PCB layers.
Well, that is about all the time we have for today. We want to thank Daniel for sending in this prompt. It was a great excuse for Herman to justify that desk fan in the living room for another few months.
It is a functional piece of art, Corn! It represents the intersection of human ingenuity and thermal dynamics!
Sure it is. If you are enjoying My Weird Prompts, we would really appreciate it if you could leave a review on Spotify or whatever podcast app you use. It genuinely helps the show grow and helps other people find these deep dives into the guts of our modern world.
It really does. And remember, you can find our full archive and all the ways to subscribe at myweirdprompts dot com. We have an RSS feed there for the purists, and if you are a Telegram user, just search for My Weird Prompts to join our channel and get notified the moment a new episode drops.
Thanks for joining us in our house in Jerusalem. We will be back next time with another topic from the mind of Daniel or one of our other listeners.
Until then, keep those temps low and those clocks stable.
This has been My Weird Prompts. I am Corn.
And I am Herman Poppleberry. We will talk to you soon.
Take care, everyone.
Goodbye.