So Daniel sent us this one as a follow-up to the Voynich manuscript episode — and honestly, the through-line is elegant. We spent that episode sitting with this humbling idea: that even brilliant cryptanalysts, working at the absolute edge of what their era could conceive, can stare at a cipher for centuries and get nowhere. And Daniel wants to use that as a springboard into modern cryptography. Specifically: quantum computers and the threat they pose to RSA and elliptic-curve encryption, NIST's finalization of post-quantum cryptography standards last year, and what all of this means for the field right now. Lattice-based cryptography, homomorphic encryption, zero-knowledge proofs — the whole frontier.
And it's a genuinely interesting moment to be asking this. The Voynich thing is a good frame because the failure mode there was conceptual — the right tools didn't exist yet. What's happening now is almost the inverse. We have tools that are about to exist, and we're scrambling to restructure the entire cryptographic foundation of the internet before they arrive.
Before they arrive. That's the part that keeps me awake. Well, not really — I sleep constantly — but hypothetically.
Right, so let me back up slightly on the threat model, because I think a lot of coverage oversimplifies it. The thing people say is "quantum computers will break RSA." That's true but incomplete. The mechanism is Shor's algorithm, which Peter Shor published in 1994. It runs on a quantum computer and can factor large integers in polynomial time. RSA's entire security assumption rests on the fact that factoring a two-thousand-and-forty-eight-bit number is computationally intractable for classical machines — we're talking longer than the age of the universe. Shor's algorithm, on a sufficiently powerful quantum computer, collapses that to hours or less.
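To make the trapdoor concrete, here's a toy RSA in Python with textbook-sized primes — illustration only, nothing here is remotely secure. The point is that everything an attacker needs falls out of factoring the public modulus, which is exactly the step Shor's algorithm accelerates:

```python
# Toy RSA with tiny primes: once n is factored, the private key falls out.
# Real RSA uses a 2048-bit n; Shor's algorithm attacks exactly this factoring step.

def egcd(a, b):
    # extended Euclid: returns (g, x, y) with a*x + b*y = g
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

p, q = 61, 53              # secret primes (tiny, for illustration only)
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # Euler's totient -- computable only if you know p and q
e = 17                     # public exponent
d = egcd(e, phi)[1] % phi  # private exponent: e*d == 1 (mod phi)

m = 42
c = pow(m, e, n)           # encrypt with the public key
assert pow(c, d, n) == m   # decrypt with the private key

# An attacker who factors n (what Shor's algorithm does efficiently)
# recomputes phi and d and reads every message.
phi_attacker = (61 - 1) * (53 - 1)
d_attacker = egcd(e, phi_attacker)[1] % phi_attacker
assert pow(c, d_attacker, n) == m
```

The entire security gap between the legitimate receiver and the attacker is knowledge of the factors p and q.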
And elliptic-curve encryption has the same problem?
Same problem, different mathematical trapdoor. Elliptic-curve cryptography relies on the discrete logarithm problem over elliptic curves — also solvable by Shor's algorithm on a quantum machine. So both RSA and elliptic-curve, which together underpin basically all public-key infrastructure on the internet — TLS, HTTPS, SSH, certificate authorities — are vulnerable to the same class of attack.
So the question is just: when does a quantum computer powerful enough to run Shor's algorithm actually exist?
That's the crux of it, and there's genuine uncertainty here. Right now, the most advanced quantum systems — IBM's Heron processors, Google's Willow chip announced late last year — are operating in ranges of hundreds to low thousands of physical qubits. But to run Shor's algorithm against a two-thousand-and-forty-eight-bit RSA key, you need something in the range of four thousand logical qubits. And logical qubits are not the same as physical qubits. Because of error rates, you need somewhere between a thousand and ten thousand physical qubits per logical qubit depending on the error correction scheme. So we're talking millions of high-fidelity physical qubits. That's not a near-term problem. Most serious estimates put cryptographically relevant quantum computing at ten to twenty years out.
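The qubit arithmetic above is worth seeing on paper. A rough back-of-envelope sketch using the ballpark figures just quoted — these are published estimates, not hardware specs:

```python
# Back-of-envelope for the physical-qubit requirement discussed above.
# All figures are rough published estimates, not hardware specifications.

logical_qubits_needed = 4_000            # ballpark for Shor against RSA-2048
physical_per_logical = (1_000, 10_000)   # error-correction overhead range

low = logical_qubits_needed * physical_per_logical[0]   # 4 million
high = logical_qubits_needed * physical_per_logical[1]  # 40 million

current_machines = 1_000  # order of magnitude for today's largest chips
print(f"Need roughly {low:,} to {high:,} physical qubits")
print(f"Gap versus today: {low // current_machines:,}x to {high // current_machines:,}x")
```

Even at the optimistic end of the error-correction range, the gap between today's hardware and a cryptographically relevant machine is three to four orders of magnitude.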
Which sounds like a lot of runway until you think about the harvest-now-decrypt-later problem.
Which is the thing that makes this urgent right now. The attack model isn't "wait until quantum computers exist, then break encryption." It's "record encrypted traffic today, store it, decrypt it when the hardware catches up." Nation-state adversaries are almost certainly doing this already. Anything classified, anything sensitive that's transmitted over the internet today could potentially be decrypted in fifteen years. So the urgency isn't about the quantum computer existing — it's about the data that's being collected right now.
And that's why NIST didn't wait. They started the post-quantum standardization process back in 2016.
2016, right. They put out a call for candidate algorithms, received eighty-two submissions from research teams around the world, and spent the next eight years in evaluation rounds — mathematical analysis, cryptanalysis attempts, performance benchmarking. Then in August 2024 they finalized the first three post-quantum cryptography standards: FIPS two-oh-three, two-oh-four, and two-oh-five. The headline ones are ML-KEM, a key encapsulation mechanism based on the CRYSTALS-Kyber scheme, and ML-DSA, a digital signature algorithm based on CRYSTALS-Dilithium. Both are lattice-based.
Okay, so lattice-based cryptography. This is the heart of where the field has moved. What actually is a lattice in this context?
So a lattice is a regular grid of points in high-dimensional space — think of it like an infinitely extending checkerboard, but in five hundred dimensions instead of two. The hard problem at the core of lattice cryptography is called the Learning With Errors problem, or LWE. The basic idea: you take a secret vector, you multiply it by a public matrix, and you add a small random error term. Given the result and the public matrix, recovering the secret vector is believed to be computationally hard — even for quantum computers. There's no known quantum algorithm that gives a meaningful speedup on LWE. That's the key property.
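A minimal sketch of LWE sample generation in Python — toy dimensions and modulus, nothing like the structured-ring parameters real schemes such as ML-KEM use, but it shows where the error term enters:

```python
import random

# Toy Learning With Errors sample generation: b = A*s + e (mod q).
# Dimensions and modulus here are far too small to be secure --
# real schemes like ML-KEM work over structured rings in high dimension.

q = 97   # small prime modulus
n = 4    # secret dimension (toy)
m = 8    # number of published samples

random.seed(1)
s = [random.randrange(q) for _ in range(n)]                       # secret vector
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]   # public matrix
e = [random.choice([-1, 0, 1]) for _ in range(m)]                 # small errors

# b_i = <A_i, s> + e_i (mod q): the public LWE samples
b = [(sum(a * x for a, x in zip(row, s)) + err) % q
     for row, err in zip(A, e)]

# Without the errors, s is recoverable from (A, b) by Gaussian elimination.
# With them, every linear relation is slightly wrong, and recovering s
# is believed hard even for quantum algorithms.
exact = [sum(a * x for a, x in zip(row, s)) % q for row in A]
noisy_count = sum(1 for x, y in zip(b, exact) if x != y)
print(f"{noisy_count} of {m} samples perturbed by noise")
```

The tiny error terms are the whole trick: they are small enough to decode away if you hold the secret, but large enough to wreck linear algebra for everyone else.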
And "believed to be hard" is doing a lot of work in that sentence.
It is, and I want to be honest about that. We don't have a proof that LWE is hard. We have strong evidence — decades of cryptanalysis, connections to worst-case lattice problems that are well-studied — but it's a computational assumption, not a theorem. The same is technically true of RSA, though. We don't have a proof that factoring is hard. We just have a few centuries of people failing to do it efficiently.
The Voynich problem again. Absence of a solution isn't a proof of impossibility.
Which is why the NIST process included multiple rounds of cryptanalysis, and why they standardized algorithms from different mathematical families as backups. The third of those finalized standards, FIPS two-oh-five, is SLH-DSA, a signature scheme derived from SPHINCS-plus that's based on hash functions rather than lattices. If lattice-based cryptography turns out to have a weakness we haven't found yet, hash-based signatures are a fallback.
How painful is the migration going to be? Because "the entire public-key infrastructure of the internet needs to be replaced" is not a small sentence.
It's a genuinely large undertaking. The comparison people reach for is Y2K, but I think that undersells it. Y2K was a finite bug with a known deadline. This is a cryptographic infrastructure replacement where the timeline is probabilistic and the systems involved are extraordinarily heterogeneous. You've got web browsers, certificate authorities, VPN systems, embedded devices in industrial control systems, hardware security modules with ten-year lifespans. Some of those can be updated with a software patch. Some of them require physical hardware replacement. And there are interoperability issues — during the transition period, systems need to support both classical and post-quantum algorithms simultaneously, which is what's called a hybrid approach.
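The hybrid approach is easy to sketch. Here's a hedged Python illustration of the combiner idea: derive the session key from both a classical and a post-quantum shared secret, so the session stays safe as long as either assumption survives. The secrets below are placeholder bytes — in a real handshake they'd come from, say, X25519 and ML-KEM exchanges — and the HKDF-style derivation is a simplification, not any protocol's exact construction:

```python
import hashlib
import hmac

# Sketch of the hybrid-combiner idea: the session key is derived from BOTH
# a classical shared secret and a post-quantum one, so breaking only one of
# the two exchanges is not enough. The inputs are placeholders -- in practice
# they would come from X25519 and ML-KEM key exchanges.

classical_secret = b"\x01" * 32   # stand-in for an X25519 shared secret
pq_secret = b"\x02" * 32          # stand-in for an ML-KEM shared secret

def hybrid_kdf(secret_classical, secret_pq, context):
    # HKDF-style extract-then-expand over the concatenated secrets
    prk = hmac.new(b"hybrid-salt", secret_classical + secret_pq,
                   hashlib.sha256).digest()
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()

session_key = hybrid_kdf(classical_secret, pq_secret, b"tls-handshake-demo")
assert len(session_key) == 32

# Changing either input secret changes the entire session key.
other = hybrid_kdf(b"\x01" * 31 + b"\x00", pq_secret, b"tls-handshake-demo")
assert session_key != other
```

The design choice is deliberately conservative: an attacker has to break both the classical and the post-quantum exchange to recover the session key.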
How are the major players actually moving on this? Is it happening?
It's starting. Google has been running experiments with post-quantum key exchange in Chrome since around 2023 — they were using a hybrid of X25519 and CRYSTALS-Kyber before the standards were even finalized. Cloudflare has been testing similar hybrid schemes. Apple announced post-quantum protections in iMessage with their PQ3 protocol. The US government issued a directive requiring federal agencies to inventory their cryptographic assets and begin migration planning. But we're still in the early stages. Most enterprise systems haven't started. Most legacy infrastructure hasn't been touched.
And there's a particular irony here that I keep thinking about. The Voynich manuscript stumped cryptanalysts because they lacked the conceptual tools. Modern cryptography is built on the assumption that certain mathematical problems are hard. And the thing that might break it isn't a cleverer algorithm — it's a different kind of machine.
That's a nice way to put it. The threat isn't that someone got smarter. It's that the computational substrate changed. And that's a different kind of vulnerability — one that purely mathematical analysis can't catch.
Alright, let's pivot because I want to get into homomorphic encryption and zero-knowledge proofs, because these feel like a different category of thing. Less "defend what we have" and more "enable things that were previously impossible."
Very different category. So homomorphic encryption — the core idea is that you can perform computations on encrypted data without ever decrypting it. The server doing the computation sees only ciphertext. It never sees the underlying values. And the result, when you decrypt it, is the correct answer as if you'd done the computation in the clear.
Which sounds like magic. And probably is magic, so what's the catch?
The catch is performance. Fully homomorphic encryption, which allows arbitrary computations on encrypted data, is still extremely slow. We're talking overhead factors of ten thousand to one hundred thousand compared to plaintext computation, depending on the operation and the scheme. IBM researchers maintain the HElib library, Microsoft has SEAL, there are others — but the practical applications are still limited to scenarios where you can tolerate that overhead or where the privacy guarantee is worth the cost.
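The "compute on ciphertext" idea is easiest to see in the additive special case. Here's a toy Paillier cryptosystem in Python — tiny, insecure parameters, and Paillier is only additively homomorphic rather than fully homomorphic — but multiplying ciphertexts really does add plaintexts:

```python
from math import gcd

# Toy Paillier cryptosystem with tiny primes -- additively homomorphic:
# multiplying two ciphertexts yields an encryption of the SUM of the
# plaintexts. Fully homomorphic schemes generalize this to arbitrary
# computation; these parameters are illustration-only, not secure.

p, q = 11, 13
n = p * q          # 143
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m, r):
    assert gcd(r, n) == 1             # r must be a unit mod n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1 = encrypt(20, r=7)
c2 = encrypt(15, r=9)

# The server-side step: combine ciphertexts without ever decrypting.
c_sum = (c1 * c2) % n2
assert decrypt(c_sum) == 35   # 20 + 15, recoverable only by the key holder
```

The party doing the multiplication never sees 20, 15, or 35 — which is the core of the medical and financial scenarios described next, just generalized from addition to arbitrary computation.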
What are the actual use cases that are compelling enough to absorb that cost?
Medical data is the canonical one. Imagine a hospital wants to run a machine learning model on patient records to predict disease risk. They could send encrypted patient data to an external compute provider, the provider runs the model on the ciphertext, sends back an encrypted result, and the hospital decrypts it. The compute provider never sees a single patient record. That's genuinely transformative for healthcare data sharing. Financial fraud detection across institutions is another — banks could jointly compute on encrypted transaction data without exposing their customers' information to each other. Genomics is a big one. Genome data is simultaneously extraordinarily sensitive and extraordinarily useful for research.
And the performance problem is improving?
Meaningfully. The field has advanced dramatically since Craig Gentry's original fully homomorphic encryption construction in 2009, which was a proof of concept that was completely impractical. The CKKS scheme, developed around 2017, handles approximate arithmetic — which is what you need for machine learning — much more efficiently. There are hardware accelerators being designed specifically for homomorphic encryption operations. I'm not going to claim it's around the corner for general use, but the trajectory is real.
Zero-knowledge proofs feel philosophically weirder to me. Like, the concept itself is strange.
The concept is genuinely strange. A zero-knowledge proof lets you convince someone that a statement is true without revealing any information about why it's true. The classic illustration is proving you know the solution to a Sudoku puzzle without showing them the solution. In a cryptographic context — proving you know a password without transmitting the password. Proving your age is over eighteen without revealing your actual birthdate. Proving you have sufficient funds for a transaction without revealing your account balance.
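The password example has a classic concrete form: Schnorr identification, where you prove knowledge of a discrete logarithm without revealing it. A toy Python version with a deliberately tiny group — real deployments use elliptic-curve groups and the Fiat-Shamir transform to remove the interaction:

```python
import random

# Toy Schnorr identification: prove knowledge of x with y = g^x mod p
# without revealing x. Real deployments use elliptic-curve groups and
# the Fiat-Shamir transform to make the proof non-interactive.

p = 467    # small prime (toy); real groups are ~256-bit
q = 233    # prime order of the subgroup: p = 2q + 1
g = 4      # generator of the order-q subgroup

x = 157              # prover's secret ("the password")
y = pow(g, x, p)     # public value everyone can see

random.seed(0)

def prove_once():
    # Prover commits, verifier challenges, prover responds.
    k = random.randrange(q)           # fresh randomness per proof
    commitment = pow(g, k, p)
    challenge = random.randrange(q)   # verifier's coin flip
    response = (k + challenge * x) % q
    # Verifier checks: g^response == commitment * y^challenge (mod p)
    return pow(g, response, p) == (commitment * pow(y, challenge, p)) % p

# Each transcript (commitment, challenge, response) reveals nothing about x
# on its own, yet only someone who knows x can answer every challenge.
assert all(prove_once() for _ in range(10))
```

The check works because g^(k + c·x) = g^k · (g^x)^c, so a correct response proves the prover could have computed k + c·x — which requires knowing x — while the fresh randomness k masks x in every transcript.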
And these are actually deployed in real systems?
They are. The most mature deployment is in privacy-preserving cryptocurrencies — Zcash uses zk-SNARKs, which stands for zero-knowledge Succinct Non-interactive Arguments of Knowledge. But the applications are expanding well beyond crypto. Ethereum has been building out rollup systems that use zero-knowledge proofs to compress transaction verification — you can verify thousands of transactions with a single compact proof, which is a scalability solution as much as a privacy one. And there's real momentum around using ZK proofs for identity and credential verification. The idea that you could prove to a website that you're a licensed driver in a given jurisdiction without handing over your actual driver's license data — that's technically achievable now.
The privacy implications of that are enormous. Like, the entire data-broker economy assumes that you have to reveal information to prove things about yourself.
And ZK proofs crack that assumption open. The question is whether the infrastructure and the user experience can get to the point where it's actually adopted. The cryptography works. The deployment problem is the hard part. And there's a quantum consideration here too — most current ZK proof systems rely on elliptic-curve cryptography under the hood, which means they have the same post-quantum vulnerability as everything else. There's active research on post-quantum ZK proofs, but they're less mature.
By the way — today's script is courtesy of Claude Sonnet 4.6, which I find appropriate given that we're discussing cryptographic infrastructure. An AI writing about the systems that keep AI communications secure.
There's a recursive quality to that, yes. Although I'm not sure Claude has strong opinions about lattice problems.
Probably just learning with errors of its own. Okay, I want to zoom out for a second because I think there's a bigger picture here about what this inflection point means for the field of cryptography as a profession and as a discipline.
It's actually a remarkable moment. Cryptography for most of its history was a niche specialty — mathematicians and intelligence agencies. The public-key revolution in the late seventies, Diffie-Hellman and RSA, democratized it somewhat, but it was still largely invisible infrastructure. What's happening now is that cryptography is becoming a policy concern, a geopolitical concern, a boardroom concern. The NIST standardization process involved governments and companies from around the world. The migration decisions being made in the next five years will shape the security posture of the global internet for decades.
And there's a geopolitical dimension that doesn't get discussed enough. The countries that achieve cryptographically relevant quantum computing first — or that have already harvested encrypted data in anticipation of it — have a significant intelligence advantage.
Significant is an understatement. If a nation-state can decrypt fifteen years of diplomatic communications, financial transactions, military intelligence traffic — that's a transformative intelligence capability. And the harvest-now-decrypt-later problem means this isn't hypothetical. The data is being collected. The only question is whether the decryption capability arrives.
Which is also why the NIST process being open and international matters. The alternative is fragmented standards — different post-quantum schemes adopted by different countries or blocs, which creates interoperability problems and potentially introduces backdoors.
The backdoor concern is real and has history. The Dual EC DRBG situation — the random number generator that NIST standardized in 2006 that turned out to have what appeared to be an NSA-inserted backdoor — that's a scar on the standardization process. The post-quantum process was designed with much more transparency, multiple evaluation rounds, public comment periods, international participation. Whether that's sufficient reassurance is a judgment call.
One thing I find genuinely underappreciated in the public conversation is the distinction between the algorithms being quantum-resistant and the implementations being secure. You can have a mathematically sound post-quantum algorithm and still have a catastrophically vulnerable system if the implementation has side-channel leaks.
This is so important. Side-channel attacks — timing attacks, power analysis, cache-timing — these exploit the physical implementation of a cryptographic algorithm rather than the mathematics. There were papers during the NIST evaluation process showing that early implementations of CRYSTALS-Kyber were vulnerable to timing side-channels. The algorithm itself was fine; the code was leaking information through execution timing. Post-quantum migration isn't just "swap in the new algorithm." It requires careful, audited implementations, hardware support, constant-time coding practices.
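The simplest instance of a timing side-channel fits in a few lines. A Python sketch: an early-exit byte comparison leaks, through its running time, how many leading bytes of a secret matched, while an accumulate-everything comparison does not:

```python
import hmac

# The simplest form of the timing leak described above: an early-exit
# comparison takes longer the more leading bytes match, letting an
# attacker recover a secret MAC or token one byte at a time.

def leaky_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:        # bails out at the first mismatch --
            return False  # running time now depends on the secret
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Accumulate differences over ALL bytes; running time is independent
    # of where a mismatch occurs. The stdlib primitive to prefer is
    # hmac.compare_digest.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

secret = b"correct-mac-value"
assert constant_time_equal(secret, b"correct-mac-value")
assert not constant_time_equal(secret, b"correct-mac-valuX")
assert hmac.compare_digest(secret, secret)  # use the library version in practice
```

Both functions return the same answers; the entire vulnerability lives in how long they take to return them — which is exactly why "the algorithm is fine but the code leaks" keeps happening.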
And this is where the field of cryptographic engineering becomes as important as the mathematics. It's not enough to prove the algorithm is hard to break. You have to build things that are hard to break in practice.
The gap between theoretical security and deployed security is where most real-world cryptographic failures live. Not Shor's algorithm breaking RSA in the abstract — it's a misconfigured TLS certificate, or a key stored in plaintext in a config file, or a timing leak in a library that nobody audited. The post-quantum transition is an opportunity to revisit a lot of that accumulated technical debt, but it's also an opportunity to introduce new mistakes at scale.
Let's get practical for a minute. For someone listening who's in a technical role — developer, security engineer, IT decision-maker — what does "paying attention to this" actually look like?
A few concrete things. First, cryptographic inventory. If you're responsible for a system that uses encryption, you need to know where RSA and elliptic-curve keys are being used. Certificate lifetimes, key exchange mechanisms, signature schemes — map it out. Most organizations don't have this visibility, and you can't migrate what you haven't inventoried.
That sounds tedious and important.
Extremely both. Second, watch the library updates. OpenSSL, BoringSSL, LibreSSL — these are the cryptographic libraries that underpin most software. OpenSSL has been adding post-quantum algorithm support. When your dependencies update to include these, that's the implementation layer starting to move. Third, for anything with a long operational lifespan — embedded systems, hardware security modules, infrastructure with a ten-year replacement cycle — the procurement decisions being made now need to account for post-quantum requirements. If you're buying a hardware security module today that will still be in service in 2035, it needs to support post-quantum algorithms.
And for the more exotic stuff — homomorphic encryption, ZK proofs — is there anything actionable yet?
Zero-knowledge proofs, yes, in specific domains. If you're building identity verification systems, credential systems, or working in the blockchain space, ZK proof libraries are mature enough to be worth serious evaluation. The zkSync and StarkNet ecosystems have production ZK proof infrastructure. For homomorphic encryption, the honest answer is that unless you're in a domain with specific high-value privacy requirements — genomics, healthcare data sharing, multi-party financial computation — you're probably watching and waiting. The tooling is improving fast enough that checking back in eighteen months is reasonable.
There's a version of this story that's kind of hopeful. Like — the field saw the threat coming, ran an open international process to develop defenses, standardized them, and is now in the early stages of deploying them. That's actually a pretty functional response compared to how these things usually go.
Compared to, say, the Y2K analogy — where the deadline was fixed and the response was mostly reactive — the post-quantum transition has had a decade of proactive lead time. That's genuinely unusual. The concern is whether the deployment pace will match the threat timeline. Ten to twenty years sounds comfortable until you consider the inertia of global infrastructure. The internet still has systems running cryptographic standards from the nineties. Not because the standards are good, but because nobody updated them.
The internet is basically the Voynich manuscript of infrastructure. Layers of things nobody fully understands, accumulated over decades, that somehow still function.
I'm going to use that. The difference is we actually need the internet to keep working while we translate it.
And unlike the Voynich manuscript, we at least know what language it's supposed to be in. Alright — what's the open question you're sitting with on this?
The one I keep coming back to is the assumption that lattice problems are hard. We've built the post-quantum future on that assumption. NIST has built it. The cryptographic community has largely converged on it. And I think it's probably right — the evidence is strong. But I remember that we also thought RSA was going to be fine forever, and the threat to it came from a direction nobody expected: not a better classical algorithm, but a different model of computation entirely. Is there a different model of computation we're not thinking about that makes lattice problems tractable? I genuinely don't know. Nobody does. And that uncertainty is the thing I can't fully resolve.
That's the honest version of "we're probably fine." Which is the best version of it, actually.
It's all we have. Cryptography is ultimately a bet on the limits of adversarial ingenuity. You're saying: here's a problem, we believe it's hard, we're staking security on that belief. The Voynich manuscript reminds us that beliefs about what's solvable can be wrong in both directions.
Well, that's a suitably unsettling place to leave it. Thanks to Hilbert Flumingtop for producing — as always, the show exists because he makes it happen. And Modal is keeping our pipeline running, which at this point feels like its own form of infrastructure we'd rather not think too hard about. If you want to dig into the full back catalog — two thousand one hundred and fifty episodes now, which is either impressive or alarming depending on your perspective — everything's at myweirdprompts.com. This has been My Weird Prompts. Leave us a review if you're enjoying the show — it genuinely helps.
See you next time.