You ever hit that export button in Premiere or Resolve and just feel like you are playing a high-stakes game of chance with your file size? You have the client waiting, the deadline is looming, and you are staring at a drop-down menu of acronyms that look like a cat walked across a keyboard.
It is the great black box of modern production. Most editors just find a preset that says high quality for web and pray the colors do not shift or the audio does not turn into a watery mess. But there is a massive amount of math and engineering happening under the hood that determines whether your work actually looks and sounds the way you intended once it hits a viewer's screen or headphones.
Today's prompt from Daniel is about pulling back the curtain on that black box. He is asking us for a deep dive into the world of codecs to help audio and video editors make better decisions. We are looking at everything from the evolution of Bluetooth audio standards like S-B-C and A-A-C to the messy distinction between a codec and a wrapper.
This is perfect timing because the landscape is shifting under our feet right now. As of late March twenty twenty-six, we are seeing a massive move toward standardization and high-efficiency lossless transmission that is finally going to kill off some of those annoying proprietary licensing fees we have lived with for decades.
I hope so, because I am tired of my expensive headphones sounding like a tin can just because I switched from a wired connection to Bluetooth. But before we get into the heavy technical weeds, we should probably clear up the most basic point of confusion. People use the terms codec and container interchangeably all the time. I will hear someone say, just send me the M-P-four codec, and it drives me a little crazy.
It is a fundamental misunderstanding. Think of it this way. The container, or the wrapper, is the shipping box. That is your dot M-P-four, your dot M-O-V, or your dot M-K-V. It is a package that holds different streams of data together. Inside that box, you have the video stream, the audio stream, maybe some metadata or subtitle tracks.
Right, and the codec is what is actually inside the box. It is the specific algorithm used to compress and decompress that data. Coder-decoder. Codec.
The codec is the actual essence of the file. You could have an M-P-four wrapper that contains H-dot-two-six-four video, or it could contain H-dot-two-six-five, or even the newer A-V-one. The wrapper just tells the player how to synchronize those streams. If you try to open a file and get an unsupported format error, half the time it is not because the player does not recognize the box, it is because it does not have the key to unlock the codec inside.
That is a big deal for editors because if you choose the wrong wrapper-codec combination, you might end up with a file that plays fine on your machine but breaks the moment you upload it to a specific platform or send it to a colorist.
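If you want to check this for yourself, here is a minimal sketch that shells out to ffprobe, which ships with FFmpeg, and reports both the wrapper and the codecs living inside it. It assumes FFmpeg is installed on your path, and the file name is just a placeholder.

```python
import json
import subprocess

def inspect_container(path: str) -> None:
    """Print the container (wrapper) format and the codec of each stream inside."""
    # -show_format reports the wrapper; -show_streams reports what lives inside it.
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(result.stdout)
    print("Wrapper:", info["format"]["format_name"])
    for stream in info["streams"]:
        print(f"  {stream['codec_type']} stream -> codec: {stream['codec_name']}")

inspect_container("delivery.mp4")  # hypothetical file name
```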
We have all been there. But Daniel specifically wanted to talk about Bluetooth audio first. This is where the confusion between software and hardware really comes to a head. For the longest time, the baseline for Bluetooth was S-B-C, or Subband Coding.
S-B-C is the one that everyone loves to hate, right? It was part of the original A-two-D-P profile back in two thousand three. That is over twenty years ago.
It was designed for a different era of technology. Back then, mobile processors were incredibly weak and battery life was a precious resource. S-B-C was built to be computationally cheap. It does not require much processing power to encode or decode, but the trade-off is quality. It prioritizes a stable connection over high fidelity, which is why it often sounds muddy or has those weird digital artifacts in the high frequencies.
I always noticed it on cymbals or hi-hats. They just sound like static. But if S-B-C is so old, why are we still using it?
Because it is the mandatory baseline. Every Bluetooth audio device on the planet has to support S-B-C to ensure they can all talk to each other. It is the lowest common denominator. But then we saw the rise of things like A-A-C, the M-P-E-G codec that Apple championed, and L-D-A-C from Sony.
A-A-C is interesting because I have heard it sounds great on an iPhone but can be a disaster on some Android phones. Why is that?
That comes down to the encoder implementation. A-A-C is a very efficient codec, but it is also very complex. It uses something called psychoacoustic modeling. Essentially, it uses math to figure out what parts of the audio the human ear cannot actually hear and then it just throws that data away.
Masking thresholds.
If you have a loud sound at one frequency, it might drown out a quieter sound at a nearby frequency. The codec identifies that and says, we do not need to waste bits on the quiet sound. But doing that well in real-time requires a lot of processing. Apple spent years optimizing their A-A-C encoder to work perfectly with their hardware. On the Android side, because there are so many different chips and manufacturers, the quality of the A-A-C encoder can vary wildly. Sometimes it is great, sometimes it is worse than S-B-C because it is trying to do too much with too little power.
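To make the masking idea concrete, here is a deliberately toy sketch in Python: the loudest tone acts as the masker, and any quieter tone that falls below a crude threshold near it gets zero bits. The slope numbers are invented placeholders, nothing like the real A-A-C psychoacoustic model.

```python
# Toy illustration of frequency masking: a loud tone raises the audibility
# threshold for nearby frequencies, so an encoder can discard quiet neighbors.
# The threshold slope below is an invented placeholder, not AAC's real model.

def masked_threshold_db(masker_level_db: float, masker_hz: float, probe_hz: float) -> float:
    """Crude masking curve: the threshold falls off as the probe moves away in frequency."""
    relative_distance = abs(probe_hz - masker_hz) / masker_hz
    return masker_level_db - 15.0 - 40.0 * relative_distance  # made-up slope

tones = [(1000.0, 80.0), (1100.0, 40.0), (4000.0, 45.0)]  # (frequency Hz, level dB)
masker_hz, masker_db = max(tones, key=lambda t: t[1])      # loudest tone masks the rest

for hz, level in tones:
    if (hz, level) == (masker_hz, masker_db):
        print(f"{hz:6.0f} Hz at {level} dB: keep (masker)")
    elif level < masked_threshold_db(masker_db, masker_hz, hz):
        print(f"{hz:6.0f} Hz at {level} dB: inaudible, spend zero bits")
    else:
        print(f"{hz:6.0f} Hz at {level} dB: audible, keep")
```

Run it and the quiet one-point-one kilohertz tone gets thrown away while the distant four kilohertz tone survives, which is the whole trick in one loop.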
So is a codec just a piece of software then? If I have a phone from five years ago, can I just download a new codec and suddenly have high-fidelity audio?
This is where it gets tricky. On a technical level, a codec is just a mathematical algorithm. It is code. But in the real world, especially with mobile devices and Bluetooth, those algorithms are often baked into the hardware.
You mean the D-S-P, the Digital Signal Processor.
Most modern codecs, especially high-end ones like L-H-D-C or the new V-V-C for video, require significant processing power. If you ran those entirely on your main phone C-P-U, your battery would die in an hour. So, manufacturers use dedicated D-S-P chips, like the Cadence Tensilica HiFi five, to handle the heavy lifting.
So if my phone's D-S-P does not have the physical instructions to decode L-D-A-C at nine hundred ninety kilobits per second, a software update might not be enough to fix it.
It might be able to do it in software, but it would be incredibly inefficient. This is why we see these hardware cycles. Your phone might support Bluetooth five-point-three, but if the silicon does not have the hardware acceleration for a specific codec, you are out of luck. We talked about this a bit in episode fifteen forty-one when we were looking at why mobile chips are actually outperforming some P-C chips in specific tasks. It is all about those dedicated engines.
That brings up a great point from Daniel's prompt. Can we develop new, more efficient codecs even if the underlying Bluetooth protocol stays the same? Like, if the pipe itself does not get any bigger, can we just pack the data better?
We have been doing that for years. That is how Qualcomm and Sony got their proprietary codecs into the ecosystem. They used something called Vendor-Specific codec fields in the A-two-D-P profile. They basically said, hey, if both the phone and the headphones are our brand, we are going to use this secret handshake and send data in a way the standard does not recognize.
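For the curious, the secret handshake is not all that secret: A-two-D-P reserves media codec type 0xFF for vendor codecs, followed by a four-byte vendor I-D and a two-byte codec I-D, both little-endian. Here is a simplified sketch of reading such a capability blob; the L-D-A-C identifiers match publicly documented values, but treat the layout as illustrative rather than spec-complete.

```python
import struct

# A2DP marks vendor codecs with media codec type 0xFF, then a 4-byte vendor ID
# and a 2-byte codec ID, both little-endian, before the vendor-defined
# capability bytes. Simplified, illustrative layout -- not a full parser.
VENDOR_CODEC_TYPE = 0xFF
KNOWN = {(0x012D, 0x00AA): "LDAC (Sony)"}  # publicly documented identifiers

def parse_media_codec(blob: bytes) -> str:
    codec_type = blob[0]
    if codec_type != VENDOR_CODEC_TYPE:
        return f"standard codec type 0x{codec_type:02X}"
    vendor_id, codec_id = struct.unpack_from("<IH", blob, 1)
    return KNOWN.get((vendor_id, codec_id),
                     f"unknown vendor codec {vendor_id:#06x}/{codec_id:#06x}")

# Hypothetical capability blob advertising LDAC:
blob = bytes([VENDOR_CODEC_TYPE]) + struct.pack("<IH", 0x012D, 0x00AA)
print(parse_media_codec(blob))  # -> LDAC (Sony)
```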
But that feels like a workaround. It is not a real standard.
It was a workaround for a long time, but that changed with Bluetooth five-point-two and the introduction of L-E Audio. L-E Audio introduced a new codec called L-C-three, or Low Complexity Communication Codec.
I have been reading about L-C-three. The Bluetooth S-I-G, the standards group, is claiming it is the successor to S-B-C.
It is a massive leap. According to reports from the S-I-G just a few weeks ago, in mid-March twenty twenty-six, L-C-three provides equal or even better quality than S-B-C while using only half the bitrate. That means better battery life and lower latency without sacrificing sound quality.
That is huge for gaming. I know Daniel is always looking for ways to reduce lag. I saw a report on March nineteenth that some ultra-low latency projects are trying to get controller and audio lag down to one millisecond by the middle of this year.
We are getting there. But the biggest news happened on March eleventh. The Bluetooth S-I-G announced they are working to standardize lossless and spatial audio natively within the L-E Audio framework.
Wait, so we might not need to pay the Qualcomm tax for aptX Lossless or the Sony tax for L-D-A-C?
That is the goal. They want to eliminate the need for these proprietary, licensed codecs by making high-fidelity lossless audio a part of the standard itself. If that happens, you could buy any pair of high-end headphones and know they will work at full quality with any modern phone. It levels the playing field for smaller manufacturers who cannot afford those massive licensing fees.
That is a win for everyone. But it still feels like we are fighting against the physical limits of the radio. You can only shove so much data through a two-point-four gigahertz signal before it starts to fall apart.
You are right. That is what we call the P-H-Y, or the Physical Layer. Even the best codec in the world is limited by the hardware's radio capacity. There is an update coming, likely in late twenty twenty-six, called High Data Throughput, or H-D-T. They are looking to quadruple Bluetooth data rates to eight megabits per second. But that is a hardware change. You cannot firmware-update your way to that. You will need new chips and new radios.
So for an editor sitting at their desk, if they are using Bluetooth headphones to monitor a mix, they are still basically at the mercy of their current hardware's P-H-Y and whatever handshake their Mac or P-C decides to make with their headphones.
And this is why I always tell editors: do not mix on Bluetooth if you can avoid it. Not just because of the quality, but because of the re-encoding process.
Explain that, because I think a lot of people miss this. If I have a high-quality lossless file on my computer and I play it over Bluetooth, it is not just sending that file to my ears.
No, your computer has to decode that lossless file, then re-encode it in real-time using whatever Bluetooth codec you are using, like A-A-C or S-B-C, and then send it. Every time you re-encode a lossy format, you are losing data. It is like taking a photo of a photo.
It is the digital sandwich problem we talked about in episode twelve eighteen. You are just adding layers of compression on top of each other. By the time it hits your eardrums, it has been through the wringer.
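You can hear the photo-of-a-photo effect for yourself with a quick generation-loss test: decode and re-encode the same clip through lossy A-A-C a few times, then compare the first and last generations. A sketch assuming FFmpeg is on your path; the file name is a placeholder.

```python
import subprocess

def reencode_generations(source_wav: str, generations: int = 5, bitrate: str = "128k") -> None:
    """Round-trip a file through lossy AAC several times to expose generation loss."""
    current = source_wav
    for gen in range(1, generations + 1):
        out = f"gen_{gen}.m4a"
        # Each pass decodes the previous generation and re-encodes it,
        # stacking a fresh layer of lossy compression on top.
        subprocess.run(
            ["ffmpeg", "-y", "-i", current, "-c:a", "aac", "-b:a", bitrate, out],
            check=True, capture_output=True,
        )
        current = out
    print(f"Compare {source_wav} against gen_{generations}.m4a, especially the highs.")

reencode_generations("mix_master.wav")  # hypothetical file name
```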
It is even worse for video editors because they are dealing with massive bitrates and complex temporal compression. Let's shift over to the video side of Daniel's prompt. This is where the distinction between codec and wrapper becomes a matter of life and death for a project.
Or at least a matter of whether your computer starts smoking.
Pretty much. Think about H-dot-two-six-four versus ProRes. These are both codecs, but they serve completely different purposes. H-dot-two-six-four is a delivery codec. It is designed to make the file as small as possible so it can be streamed over the internet. To do that, it uses inter-frame compression.
I love this part. It is basically the codec being lazy, right?
In a very smart way. Instead of saving every single frame of video as a full picture, it saves one full frame, called an I-frame, and then for the next few frames, it only saves the parts of the image that changed. If you are filming a person talking in front of a static wall, the codec only updates the pixels around the person's mouth and eyes. The wall stays the same in the data.
That is great for watching Netflix, but it is a nightmare for an editor. If I try to scrub through a timeline with H-dot-two-six-four footage, my computer has to work overtime just to reconstruct what frame number forty-two looks like because forty-two is not a full picture. It is just a list of changes from frame thirty.
That is why we use mezzanine codecs, or intermediate codecs, like ProRes or D-N-x-H-D. These use intra-frame compression. Every single frame is a complete picture. The files are ten times larger, but they are incredibly easy for your computer to read because it does not have to do any math to figure out what a frame looks like. It just opens the picture and moves on.
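Here is a toy numpy sketch of that trade-off: the inter-frame scheme stores one full keyframe plus sparse per-frame deltas, so rebuilding a late frame means replaying every delta since the keyframe, while the intra-frame scheme simply stores every frame whole. Grossly simplified next to real motion-compensated codecs, but the decode-cost asymmetry is the same one that makes scrubbing H-dot-two-six-four painful.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, (4, 4), dtype=np.uint8)]
for _ in range(9):  # talking head: only one pixel changes per frame
    nxt = frames[-1].copy()
    nxt[rng.integers(0, 4), rng.integers(0, 4)] = rng.integers(0, 256)
    frames.append(nxt)

# Inter-frame ("delivery") storage: one keyframe plus per-frame deltas,
# each delta holding only the changed pixel positions and their new values.
keyframe = frames[0]
deltas = []
for i in range(1, len(frames)):
    changed = np.argwhere(frames[i] != frames[i - 1])
    values = frames[i][changed[:, 0], changed[:, 1]]
    deltas.append((changed, values))

def decode_inter(n: int) -> np.ndarray:
    """Rebuilding frame n means replaying every delta since the keyframe."""
    img = keyframe.copy()
    for changed, values in deltas[:n]:
        img[changed[:, 0], changed[:, 1]] = values
    return img

# Intra-frame ("mezzanine") storage: every frame is complete, so decoding
# frame n is just frames[n] -- no reconstruction work at all.
assert np.array_equal(decode_inter(9), frames[9])
```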
So when an editor sees a dot M-O-V file, they might assume it is a high-quality ProRes file, but it could actually be a highly compressed H-dot-two-six-four stream hidden inside that M-O-V wrapper.
Precisely. And that is where people get burned. They see the wrapper and think they are safe, then they wonder why their playback is stuttering. This is why tools like MediaInfo are so essential. You can drop any file into it and it will tell you exactly what is living inside that container. It will tell you the bit depth, the chroma subsampling, and the specific codec profile.
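If you would rather script that check than use the G-U-I, the pymediainfo wrapper exposes the same fields. A minimal sketch, assuming pip install pymediainfo plus the underlying MediaInfo library are available, with a placeholder file name.

```python
from pymediainfo import MediaInfo  # pip install pymediainfo

def what_is_really_inside(path: str) -> None:
    """Report the codec, bit depth, and chroma subsampling hiding in a wrapper."""
    media_info = MediaInfo.parse(path)
    for track in media_info.tracks:
        if track.track_type == "Video":
            print("Video codec:", track.format)  # e.g. ProRes vs AVC in the same .mov
            print("Bit depth:", track.bit_depth)
            print("Chroma subsampling:", track.chroma_subsampling)
        elif track.track_type == "Audio":
            print("Audio codec:", track.format, "at", track.bit_rate, "b/s")

what_is_really_inside("mystery_clip.mov")  # hypothetical file name
```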
We should talk about bit depth for a second. Because I keep seeing ten-bit versus eight-bit being thrown around in marketing, and I think a lot of people think it is just about color, but it affects the codec's efficiency too, doesn't it?
It affects everything. Eight-bit video can display about sixteen million colors. Ten-bit can display over a billion. For an editor, ten-bit is the difference between a smooth blue sky and a sky that has those ugly digital bands across it. But ten-bit also requires more sophisticated codecs like H-dot-two-six-five, also known as H-E-V-C.
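The arithmetic behind those two numbers is worth seeing once: each of the three color channels gets two-to-the-N levels, and the total palette is the cube of that.

```python
for bits in (8, 10):
    per_channel = 2 ** bits          # levels per R, G, or B channel
    total = per_channel ** 3         # every combination of the three channels
    print(f"{bits}-bit: {per_channel:,} levels per channel, {total:,} total colors")
# 8-bit:  256 levels per channel,   16,777,216 total colors
# 10-bit: 1,024 levels per channel, 1,073,741,824 total colors
```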
And H-E-V-C is the one that really pushed the need for hardware acceleration. I remember when it first came out, even high-end P-Cs would choke on it.
It was a huge jump in complexity over H-dot-two-six-four. But now, as of March twenty twenty-six, we are moving into the era of V-V-C, or Versatile Video Coding. This is also known as H-dot-two-six-six.
H-dot-two-six-six. We are just running out of numbers at this point.
The efficiency is staggering. V-V-C is delivering forty to fifty percent better compression than H-E-V-C at the same visual quality. We are talking about streaming eight-K video with the same bandwidth we currently use for four-K. But again, the hardware dependency is the bottleneck. Hardware decoding for V-V-C is only just now starting to show up in flagship smart TVs and high-end mobile chips.
So if I am an editor and I decide to render my final project in V-V-C because I want that forty percent file size saving, I might be sending my client a file they literally cannot play.
That is the risk. You have to know your target hardware. This is why H-dot-two-six-four is still the king of the internet despite being technically inferior to almost everything that came after it. It is the S-B-C of video. It works on everything.
What about A-V-one? That was supposed to be the great royalty-free hope.
A-V-one is doing very well. It was created by the Alliance for Open Media, which includes giants like Google, Amazon, and Netflix. They wanted a codec that was as good as H-E-V-C but did not require paying royalties to a patent pool. It offers about thirty percent better compression than H-E-V-C. The big shift recently has been N-P-U integration.
The Neural Processing Unit. We talked about this in episode fifteen forty-one. These chips are designed for A-I tasks, but they are being used for video now too.
They are game changers for real-time encoding. Instead of a fixed mathematical formula, these N-P-U-accelerated encoders can use machine learning to analyze a frame and decide exactly where to spend the bit budget. They can identify that a face is more important than a blurry background and allocate more data to the features that matter to the human eye.
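The allocation idea is easy to sketch even without an actual N-P-U: score each region of the frame for visual importance, then split a fixed per-frame bit budget in proportion. The region names and saliency scores below are hand-invented stand-ins for what a real machine-learning model would output.

```python
# Sketch of saliency-weighted rate control: a fixed bit budget is split across
# regions in proportion to an importance score. The scores are invented
# stand-ins for a real ML saliency model's output.
frame_budget_bits = 500_000

regions = {
    "face": 0.9,               # viewers fixate here, so spend bits on it
    "hands": 0.5,
    "static wall": 0.05,       # nobody notices softness here
    "blurry background": 0.1,
}

total_score = sum(regions.values())
for name, score in regions.items():
    bits = round(frame_budget_bits * score / total_score)
    print(f"{name:>18}: {bits:>7,} bits ({score / total_score:.0%} of budget)")
```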
That sounds like a more advanced version of the psychoacoustic masking we talked about for audio.
It is exactly that, but for your eyes. And because it is happening on the N-P-U, it is incredibly fast. This is how we are getting high-quality live streaming from mobile phones now. The phone's N-P-U is doing a level of analysis that used to require a massive server farm.
So, looking at Daniel's question about whether new codecs can be developed on static protocols. For video, the protocol is basically just the internet or the physical cable. We can always invent better math. The limitation is always going to be: can the device on the other end do that math fast enough to show the picture?
That is the heart of it. We are moving from a world of fixed standards to a world of intelligent streams. In the future, the codec might actually adapt to your specific screen or even your specific eyesight in real-time.
That sounds a little sci-fi, but I can see it. If the N-P-U knows I am watching on a five-inch screen, it does not need to send data for details I literally cannot see.
It is already happening with variable bitrate streaming, but it is going to get much more granular. But let's bring this back to practical advice for the editors listening. If you are sitting in front of your N-L-E right now, what should you actually do differently?
First off, stop picking high quality presets blindly. Look at what is actually happening. If you are exporting for a client review and you want it to look good but stay small, H-dot-two-six-five is usually your best bet now, provided they have a device from the last three or four years. But if you are sending something to a colorist or an A-E, you have to stay in that mezzanine world. ProRes four-two-two or four-four-four-four. Do not let a wrapper fool you.
And check your audio settings. Most video exports default to A-A-C at one hundred twenty-eight or one hundred ninety-two kilobits per second. If you have spent hours on a sound mix, that is a travesty. Push it up to three hundred twenty or, better yet, use uncompressed P-C-M audio if the wrapper supports it. An M-O-V wrapper will happily hold uncompressed audio alongside your video.
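For anyone scripting deliveries, both pieces of advice map directly onto FFmpeg arguments. A sketch using FFmpeg's prores_ks encoder (profile three is four-two-two H-Q) and libx265; the file names and the C-R-F value are placeholders you would tune per project.

```python
import subprocess

# Mezzanine handoff for a colorist: ProRes 422 HQ video with uncompressed
# 24-bit PCM audio inside a .mov wrapper.
subprocess.run([
    "ffmpeg", "-i", "timeline_export.mov",
    "-c:v", "prores_ks", "-profile:v", "3",   # profile 3 = ProRes 422 HQ
    "-c:a", "pcm_s24le",                      # uncompressed PCM, no AAC travesty
    "colorist_handoff.mov",
], check=True)

# Client review copy: HEVC for size, but do not starve the mix --
# bump AAC to 320 kb/s instead of the usual 128 kb/s default.
subprocess.run([
    "ffmpeg", "-i", "timeline_export.mov",
    "-c:v", "libx265", "-crf", "20", "-tag:v", "hvc1",  # hvc1 tag for Apple players
    "-c:a", "aac", "-b:a", "320k",
    "client_review.mp4",
], check=True)
```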
And for the love of all that is holy, use MediaInfo. It is a free tool. If you are ever unsure why a file is behaving badly, drop it in there. It will tell you if you have a variable frame rate issue, which is a common nightmare with footage recorded on phones.
Variable frame rate is the hidden killer of sync. Most editors assume twenty-four frames per second means exactly twenty-four frames every single second. But phones will often dip to twenty-two or twenty-three frames if the processor gets too hot or the lighting changes. A good codec will handle that for playback, but an N-L-E will lose its mind trying to keep the audio synced up.
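One quick scripted check for that: ffprobe reports both a nominal rate (r_frame_rate) and a measured average (avg_frame_rate) for a video stream, and when the two disagree you are very likely looking at variable frame rate footage. A sketch assuming FFmpeg is installed and the stream reports valid rates; the file name is a placeholder.

```python
import subprocess
from fractions import Fraction

def looks_like_vfr(path: str) -> bool:
    """Flag probable variable-frame-rate footage by comparing ffprobe's
    nominal (r_frame_rate) and measured (avg_frame_rate) rates."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-select_streams", "v:0",
         "-show_entries", "stream=r_frame_rate,avg_frame_rate",
         "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    nominal_s, average_s = out.split(",")
    # Rates come back as fractions like 30000/1001; assumes neither is 0/0.
    nominal, average = Fraction(nominal_s), Fraction(average_s)
    return abs(float(nominal) - float(average)) > 0.01

print(looks_like_vfr("phone_clip.mp4"))  # hypothetical file name
```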
We covered some of those hardware secrets back in episode eleven zero four. It is wild how much of our digital life is just a series of clever tricks to hide the fact that our hardware is constantly struggling to keep up.
It is all smoke and mirrors, Corn. But the more you understand how the mirrors are angled, the better your work is going to look.
I think we have given Daniel enough to chew on for a while. It is a lot of information, but the takeaway is clear: the codec is the gatekeeper of your quality. The wrapper is just the envelope.
We are moving into a very cool era where those gatekeepers are getting much smarter and, hopefully, much cheaper.
I am looking forward to that lossless Bluetooth future. I want to finally hear those cymbals without feeling like I am listening to a radio station in a thunderstorm.
It is coming. Faster than you think.
Well, I think that is a good place to wrap this one up. We could talk about sub-pixel motion estimation for another three hours, but I think I see Herman's eyes starting to glow with a dangerous level of nerd energy.
I was just about to bring up the discrete cosine transform!
No! Save it for the next one, Herman Poppleberry. We have to keep some mystery alive.
Fine, fine. But we are definitely doing a deep dive on transform coding soon.
We will see about that. Thanks as always to our producer, Hilbert Flumingtop, for keeping the gears turning behind the scenes. And a huge thanks to Modal for providing the G-P-U credits that power the infrastructure of this show. They make it possible for us to dive into these technical rabbit holes every week.
If you found this useful, or if you are now questioning every export setting you have ever used, we would love to hear from you. You can find us at myweirdprompts dot com for the full archive of over fifteen hundred episodes, including the ones we mentioned today about silicon physics and the N-P-U revolution.
You can also search for My Weird Prompts on Telegram to get notified the second a new episode drops. We know your time is valuable, so we try to make sure every minute counts.
This has been My Weird Prompts. Thanks for listening, and we will catch you in the next one.
Stay curious. And maybe check those export settings one more time. Goodbye!
Goodbye!