Imagine a world where you don't actually write code, but you describe exactly what you want in plain, conversational English, and the computer just builds it. Perfectly. No syntax errors, no missing semicolons, no cryptic memory leaks. We keep hearing that we are closer than ever to this reality, but there is a massive paradox at the heart of it. Even as we use natural language to generate the code, the actual output—the stuff the machine actually runs—is often more alien and harder for a human to read than the code we used to write by hand.
It is the ultimate "black box" problem, Corn. I am Herman Poppleberry, and today's prompt from Daniel is about that exact gap—the narrowing distance between natural language and code. Daniel is looking at how we have shifted from typing logic to describing intent through AI agents, but he is asking a much deeper question: can we use these agents to make the code itself more intelligible in its raw format?
It is a great question because, let’s be honest, some of the stuff coming out of modern LLMs looks like it was written by a caffeinated spider. It works, but if you have to go in and debug it, you’re in trouble. By the way, fun fact for the listeners—Google Gemini Three Flash is actually writing our script today, which is fitting given we are talking about the intersection of human language and machine logic.
It is very meta. And what I love about Daniel's prompt is that he points out this is not a new obsession. This quest to make computers understand "Human" predates the current AI revolution by literal decades. We like to think we are the pioneers because we have GitHub Copilot and Cursor, but the giants of the nineteen sixties and seventies were already trying to crack this nut.
Right, back when a "computer" took up an entire room and had less processing power than my toaster. So, before we get into the modern AI agents, we have to talk about how we got here. Because "intelligible code" is a moving target, isn't it? What a developer in nineteen seventy thought was "readable" would probably look like a punch card nightmare to a twenty-something dev today.
Precisely. Well, not precisely—I mean, you are on the right track. The definition of "readability" has evolved as our abstractions have moved higher. In the early days, "natural language programming" was the holy grail. Think about IBM in the mid-sixties. They had this project called the Mathematical Formula Compiler. The goal was to take English-like input and translate it into FORTRAN.
Wait, so they were trying to write English to get FORTRAN? That feels like trying to use a poem to generate a blueprint.
That is actually a very good way to put it. The mechanism they used back then was essentially a constrained grammar. They weren't using probabilistic models or neural networks; they were using strictly defined rules. If you said "Add A to B and store in C," the system could parse that because it fit a specific template. But the moment you deviated from the template—if you said "Hey, could you grab A and B and maybe put the sum over there in C?"—the system would just blink at you. It had no concept of semantics, only syntax.
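A toy sketch of the mechanism Herman is describing, a constrained grammar that only accepts sentences matching rigid templates. The templates and names here are illustrative, not the actual IBM system:

```python
import re

# One rigid template per supported sentence shape. Anything that
# doesn't fit a template is simply rejected; there is no semantics,
# only pattern matching.
TEMPLATES = [
    # "Add A to B and store in C"  ->  C = A + B
    (re.compile(r"^Add (\w+) to (\w+) and store in (\w+)$", re.IGNORECASE),
     lambda a, b, c: f"{c} = {a} + {b}"),
    # "Multiply A by B and store in C"  ->  C = A * B
    (re.compile(r"^Multiply (\w+) by (\w+) and store in (\w+)$", re.IGNORECASE),
     lambda a, b, c: f"{c} = {a} * {b}"),
]

def translate(sentence: str) -> str:
    """Translate one English sentence into an assignment, or fail loudly."""
    for pattern, emit in TEMPLATES:
        match = pattern.match(sentence.strip())
        if match:
            return emit(*match.groups())
    raise ValueError(f"Sentence does not fit any known template: {sentence!r}")

print(translate("Add A to B and store in C"))  # C = A + B
# translate("Could you grab A and B and put the sum in C?")  # raises ValueError
```

The paraphrased sentence carries the same intent but raises an error, which is the "blink at you" failure mode: the system matches syntax, not meaning.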
So it was "natural language" in the same way that a specialized remote control is "natural." You still had to memorize the buttons; the buttons just happened to have English words on them. I imagine the tradeoff there was between expressiveness and precision. You can be very precise in FORTRAN, but it’s hard to be expressive. In English, you can be incredibly expressive, but you’re usually about as precise as a weather forecast.
That was the wall they hit. The "Expressiveness-Precision Gap." Early systems were intelligible only if you stayed within a very small box. If you look at the nineteen seventies, you have the "Programming by Example" paradigm. Smalltalk is the huge name here. Alan Kay and the team at Xerox PARC wanted a system that was so "self-documenting" that the code and the explanation were almost the same thing. They leaned into object-oriented design not just for the technical benefits, but because it mapped better to how humans describe the world. "The bucket pours the water" is easier to understand than a series of memory address shifts.
I remember reading about Smalltalk. It felt very "live." You could poke the objects and see what they did. But even then, it didn't really go mainstream in the way they hoped. Why do you think that is? If we had the blueprint for readable, human-centric coding in the seventies, why did we spend the next forty years arguing over where to put curly braces in C plus plus?
I think it comes down to the "Biological Bottleneck." Human language is messy, redundant, and context-dependent, and computers, until very recently, couldn't handle context. So we compensated with formal languages like C, Java, and Python, which acted as a filter. We had to squeeze our thoughts through the narrow pipe of strict syntax so the machine didn't get confused. "Intelligibility" was sacrificed for "executability."
And now we have the opposite problem. We have LLMs that can handle the messiness. I can tell a model, "Make the button look kind of like a sunset but also professional," and it will spit out three hundred lines of CSS. It’s executable, but is it intelligible? If I look at that CSS, I don't see "professional sunset," I see hex codes and flexbox properties that make my head spin.
And that is the core of Daniel's prompt. Can we use the AI not just as a translator that turns English into "Spider Web Code," but as a mentor that structures the code to be inherently more understandable to us? Because right now, tools like Copilot are basically high-speed autocomplete. They are biased toward what exists in their training data, which is often messy, legacy code.
It’s like the AI learned to speak by listening to people mumble in a dive bar. It can communicate, but it’s not exactly Shakespeare. What I find interesting is this idea of "prompt rot." If I use an AI to generate a thousand lines of code, and I don't really understand the nuances of how it's structured, I'm just deferring the technical debt. Six months from now, when I need to change how the "sunset" looks, I'm staring at a wall of machine-generated logic that has no human soul in it.
You mentioned a "human soul," which sounds poetic, but technically, what you're talking about is "Intent Preservation." When a human writes code, they leave breadcrumbs. Variable names, comments, the way functions are decomposed—these are all signals of intent. AI-generated code often misses the "Why." It gives you the "What" perfectly, but the "Why" is buried in the prompt, which might not even be saved in the codebase.
So, how do we close that? If we’re looking at the history, we went from "English to FORTRAN" in the sixties, to "Smalltalk objects" in the seventies, to "CASE tools" in the nineties—which were those "Computer-Aided Software Engineering" things that promised you could just draw diagrams and the code would appear. Those were a disaster, by the way. They produced code that no human could ever maintain.
CASE tools are the perfect cautionary tale for where we are now. They were "intelligible" at the top level—you had your nice little flowcharts—but the "raw format" they generated was garbage. It was machine-generated boilerplate that was never meant to be read by human eyes. But here is the difference in 2026: we now have models that actually understand the concept of "Clean Code."
Do they, though? Or do they just know that "Clean Code" is a phrase that often appears near certain patterns in GitHub?
It is a bit of both, but the result is the same. We are seeing a shift where we can prompt for "Intelligibility." Instead of just saying "Write a function that sorts this list," you can say "Write a function that sorts this list using a strategy pattern, with descriptive variable names that a junior developer could understand, and include a docstring that explains the time complexity trade-offs." You are effectively using natural language to enforce a higher standard of "raw format" readability.
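Here is a sketch of what that prompt might plausibly produce, a small strategy-pattern sort where the ordering rule is a named, swappable function rather than an anonymous one-liner. The customer fields and strategy names are invented for illustration:

```python
from typing import Callable, List

# The "strategy" is just a named key function a junior developer can
# read, swap, or add to, instead of an inline lambda buried in a call.
SortKey = Callable[[dict], object]

def by_last_name(customer: dict) -> object:
    """Order customers alphabetically by last name."""
    return customer["last_name"]

def by_account_balance(customer: dict) -> object:
    """Order customers from smallest to largest balance."""
    return customer["balance"]

def sort_customers(customers: List[dict], strategy: SortKey) -> List[dict]:
    """Return a new list sorted by the given strategy.

    Uses Python's built-in Timsort: O(n log n) worst case, roughly O(n)
    on already-sorted input, at the cost of O(n) auxiliary memory.
    """
    return sorted(customers, key=strategy)

customers = [
    {"last_name": "Ng", "balance": 250},
    {"last_name": "Abara", "balance": 900},
]
print([c["last_name"] for c in sort_customers(customers, by_last_name)])
# ['Abara', 'Ng']
```

Nothing here is cleverer than a plain `sorted` call; the point is that the prompt forced the names, the decomposition, and the docstring into the raw format itself.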
That feels like the "AI Rulebook" concept we've touched on before. But it still requires the human to know what "good" looks like. If "everyone is a programmer," as Jensen Huang says, but nobody knows what a "strategy pattern" is, are we just going to end up with a world full of working but completely opaque software?
That is the risk. There was a study in Science earlier this year that looked at the global diffusion of generative AI in coding. It found that while productivity is way up, the "skill gap" is actually widening in a strange way. People who understand the underlying logic are using AI to write incredibly elegant, readable code. People who don't are using it to build "Frankenstein Apps" that work until they don't.
It reminds me of the transition from hand-drawn blueprints to CAD software. The software makes you faster, but if you don't understand structural engineering, you're just drawing a building that’s going to fall down faster. But Daniel’s point about making code "progressively more intelligible" makes me think about the code itself evolving. Like, why are we still using Python or Javascript as the target? Why isn't there a "Human-AI Intermediate Language" that is designed specifically to be read by both?
That is a fascinating thought. Think about something like "Mojo" or even some of the newer DSLs designed for AI agents. The idea is to strip away the "biological bottleneck" of legacy syntax. If the machine doesn't need curly braces or indentation to understand structure, why are we still forcing it to generate them? We could have a format that looks more like a structured logic tree in plain English, but has the mathematical rigour of a formal language.
See, that sounds like COBOL’s original promise. "It looks like English, so managers can read it!" And we all know how that turned out. We’re still trying to migrate off COBOL systems fifty years later because "looking like English" doesn't actually mean "being easy to understand." Sometimes a concise mathematical symbol is much more intelligible than a paragraph of "ADD THIS TO THAT GIVING THE OTHER."
You are hitting on a very important distinction: "Verbose" is not the same as "Intelligible." COBOL failed because it was verbose. It forced you to use English words for things that were fundamentally machine operations. True intelligibility comes from mapping the code to the problem domain. If I’m building a banking app, the code should talk about "Accounts," "Ledgers," and "Transactions," not "Arrays," "Pointers," and "Buffers."
And that is where the modern AI agent actually has a chance to succeed where the sixties failed. The LLM understands the "domain." It knows what a "Ledger" is. So when it generates the code, it’s not just translating syntax; it’s performing a conceptual mapping. It’s like having a senior architect sitting next to you who says, "Okay, I’ll write the boilerplate, but I’ll name everything so it actually makes sense in the context of a bank."
I mean—it is exactly that. Wait, I shouldn't say exactly. It is precisely... no, I am banned from that too. It is a very accurate description of the potential. What I find wild is how this changes the act of "Code Review." In the past, you reviewed code to find bugs. In the future, you might "review" the AI's logic by having a conversation with it. "Why did you choose an asynchronous approach here?" and the AI responds by refactoring the code to be more "intelligible" based on your feedback.
"Make it more intelligible" is a bit of a "vibe" prompt, isn't it? We’ve talked about "vibecoding" before—where you just keep poking the AI until it works. But if we want to move past that, we need concrete metrics for intelligibility. How do you measure if code is "human-readable" in a way that an AI can optimize for?
There are actually some interesting research papers on this. They look at things like "Cognitive Load" metrics. How many "entities" does a human have to keep in their head to understand a single function? You can actually train a reward model for an AI agent that penalizes high cognitive load. So the AI purposefully breaks long functions into smaller, named pieces, not because the computer needs it, but because the human brain does.
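One crude way to operationalize that "entities in your head" idea is to count the distinct names a reader must track inside a function. This is a toy proxy, not a metric from any particular paper:

```python
import ast

def cognitive_entity_count(source: str) -> int:
    """Rough cognitive-load proxy: how many distinct names a reader
    must track in a snippet (variables, arguments, referenced calls)."""
    tree = ast.parse(source)
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            names.add(node.id)       # variables and called functions
        elif isinstance(node, ast.arg):
            names.add(node.arg)      # function parameters
    return len(names)

dense = "def f(a, b, c, d):\n    return (a*b + c*d) / (a - d + b*c)"
simple = "def area(width, height):\n    return width * height"
print(cognitive_entity_count(dense), cognitive_entity_count(simple))  # 4 2
```

A reward model could penalize high counts, nudging the agent to split long functions into smaller, named pieces, for the human brain rather than the machine.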
So the AI is basically acting as a "Human-to-Machine Buffer." It understands the machine’s need for efficiency, but it also understands the human’s need for simplicity. It’s a bit like a translator who doesn't just give you the literal words, but gives you the "gist" and the "nuance" so you actually get it.
And if you look at the history, this is what the pioneers were dreaming of. When you look at the "Programming by Example" stuff from the seventies, the goal was to remove the "tax" of syntax. They wanted the computer to be a collaborator. We’ve had this weird detour for forty years where the computer was a "servant" that only spoke a very difficult foreign language. Now, we’re finally getting back to that "collaborator" model.
But let’s get practical for a second. If I’m a developer today, or someone just starting out using these tools, how do I actually use this to my advantage? Because I’ve seen the "Frankenstein Code" first-hand. It’s easy to get lazy and just let the AI spit out whatever it wants. How do we drive toward this "intelligible raw format" Daniel is talking about?
I think the first actionable takeaway is to stop treating the AI as an "Oracle" and start treating it as a "Junior Dev with an infinite library." When you prompt, you should be explicit about the "Code Style" and "Architectural Patterns." Don't just ask for a feature. Ask for a feature implemented with "Clean Architecture" principles. Use the AI to teach you what good code looks like by forcing it to explain its choices in the code itself.
I’ve actually tried that. I’ll tell it, "Write this, but explain every line like I’m five years old in the comments." It’s actually a great way to learn. But the problem is, the comments often get out of sync with the code. If the AI changes the logic in the next prompt, but leaves the old comments, now you have "Intelligible Lies." That is almost worse than opaque truth.
That is where the "Agentic" part comes in. A true AI agent isn't just generating text; it’s running tests, it’s linting the code, it’s verifying that the comments match the execution. We are moving toward "Self-Healing Codebases" where the intelligibility is a constant, monitored state. If the "readability score" drops below a certain threshold, the agent automatically refactors.
Can you imagine a "Readability Linter"? It just blocks your pull request because "This function is technically correct but it’s a total mess for a human to read. Go sit in the corner and think about what you’ve done." That would be a nightmare for some, but probably a godsend for long-term maintenance.
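A minimal sketch of that readability gate, under two invented rules (line length and nesting depth). Real linters score far more than this; the thresholds and the "block the merge" policy are illustrative:

```python
# Toy readability gate: refuse a change when a simple readability
# score fails. The scoring rules here are placeholders, not a real
# linter's rule set.
MAX_LINE_LENGTH = 88
MAX_INDENT_LEVELS = 3

def readability_violations(source: str, indent_width: int = 4) -> list:
    """Return human-readable complaints about a block of code."""
    complaints = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LINE_LENGTH:
            complaints.append(f"line {lineno}: longer than {MAX_LINE_LENGTH} chars")
        depth = (len(line) - len(line.lstrip(" "))) // indent_width
        if depth > MAX_INDENT_LEVELS:
            complaints.append(f"line {lineno}: nested {depth} levels deep")
    return complaints

def gate_pull_request(source: str) -> bool:
    """True if the change may merge; False sends it to the corner."""
    return not readability_violations(source)

tangled = "def f():\n" + " " * 16 + "return 1  # four levels deep\n"
print(gate_pull_request("def area(w, h):\n    return w * h\n"))  # True
print(gate_pull_request(tangled))                                # False
```

The agentic version would not just block the merge but feed the complaint list back into a refactoring loop until the gate passes.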
It would change the culture of engineering. Right now, we value "cleverness." The person who can write a one-liner that does the work of fifty lines is seen as a wizard. But in a world of AI-generated code, "cleverness" is a liability. We want "obviousness." We want code that is so boring and clear that an AI agent six months from now can't possibly misinterpret it when it’s asked to add a new feature.
"Boring is better." I like that. It’s very "Sloth-like" of me to appreciate that. But what about the "Everyone is a programmer" future? If I'm a small business owner and I use an agent to build my website, I don't care about "strategy patterns." I just want the "Buy Now" button to work. Does the "intelligibility" of the code even matter if the "natural language description" is the new source of truth?
This is a huge debate in the industry right now. Is the "Prompt" the new "Source Code"? If it is, then the generated Javascript is just an "artifact," like the binary files that come out of a compiler. You don't read the binary, so why would you read the Javascript? But the problem is that we don't have a "Debugger" for prompts yet. When the website breaks, you can't just yell at the prompt to "Fix it" if you don't know why it broke. You have to go into the "artifact" to find the bug.
Oops, I almost said it. You're right. Until we have "Perfect AI" that never makes a logic error, we still need to be able to peer under the hood. And if the hood is welded shut with machine-generated spaghetti, we’re in trouble. It’s like owning a car where the engine is a solid block of plastic. Great until it makes a weird noise, and then you have to throw the whole car away.
That is why Daniel’s focus on the "raw format" is so smart. Even if we spend ninety percent of our time in the "Natural Language" layer, that bottom layer needs to be "Human-Serviceable." We need "Right to Repair" for our AI-generated software. And "Right to Repair" requires "Ability to Understand."
I love that. "Right to Repair" for code. So, looking back at the history again—we had the "Mathematical Formula Compiler" in sixty-five, Smalltalk in the seventies, the rise of "High-Level Languages" like Python. Every step was about making it more "Human." But we’ve always been limited by the computer’s inability to grasp context. Now that the "Context Barrier" has been broken, what’s the next logical step?
I think the next step is "Bidirectional Translation." Right now, it’s mostly one-way: "English to Code." But the real "Aha!" moment comes when the AI can take a legacy codebase—maybe some of those COBOL systems we talked about—and "De-compile" them back into an intelligible, modern format that preserves the original intent. It’s like a "Restoration Artist" for code.
"Code Archaeology." I can see it now. You find some ancient script that’s been running a power grid since nineteen eighty-four, and the AI "cleans it up" not just so it runs faster, but so a human can finally understand how it works. That would be a massive unlock for the economy. Billions of dollars are stuck in "Legacy Debt" simply because the people who wrote the code are either retired or... well, not with us anymore.
And those people were often very "intelligent" in their coding, but they were working under the constraints of their time. They had to use cryptic variable names because they only had eight characters to work with! The AI can "re-hydrate" those names based on the context of the logic. It can see that VAR_A is being used as an "Interest Rate" and just rename it throughout the whole system.
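The mechanical half of that re-hydration step can be sketched as a consistent rename pass. The hard part, inferring that `VAR_A` means an interest rate, is assumed to come from the LLM and is not shown; the legacy snippet and the guessed names are invented:

```python
import re

def rehydrate(source: str, renames: dict) -> str:
    """Rename cryptic identifiers everywhere they appear, using word
    boundaries so BAL1 doesn't clobber part of BAL12."""
    for old, new in renames.items():
        source = re.sub(rf"\b{re.escape(old)}\b", new, source)
    return source

legacy = "BAL2 = BAL1 * (1 + VAR_A)\nPRINT VAR_A"
print(rehydrate(legacy, {
    "VAR_A": "interest_rate",      # guessed from how it's used in the math
    "BAL1": "opening_balance",
    "BAL2": "closing_balance",
}))
```

A production version would rename via the language's syntax tree rather than regex, but the principle is the same: the logic is untouched, only the names regain their meaning.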
That feels like the "Progressively more intelligible" part Daniel mentioned. It’s not a one-time thing. It’s a constant process of refinement. The code "evolves" toward clarity. It’s the opposite of "Entropy." Usually, codebases get messier over time. With AI agents, they could actually get cleaner.
It turns the "Second Law of Thermodynamics" on its head for software. But it requires us to change how we think about "Ownership." If the AI is constantly "cleaning" the code, who "owns" the final version? And how do we ensure the AI doesn't "clean" away a subtle but necessary edge case? This is where the "Human-in-the-loop" isn't just a safety feature; it’s a design requirement.
It’s like having a very enthusiastic housekeeper who occasionally throws away your "important" pile of junk because it "looked messy." You have to be able to say, "No, no, that specific messiness is actually a very important bug fix for a nineteen ninety-five browser. Put it back."
That is the "Chesterton's Fence" of coding. Don't tear down a fence until you know why it was put up in the first place. AI agents are great at tearing down fences to make things "intelligible," but they need the historical context to know which fences are actually keeping the bulls in.
So, we’ve covered a lot of ground here. We’ve gone from the room-sized computers of the sixties trying to parse "Add A to B," through the object-oriented dreams of the seventies, to the current "Agentic" era where we’re using LLMs to "vibecode" our way into the future. It’s a wild arc. What’s the big takeaway for someone listening who’s maybe feeling a bit overwhelmed by how fast this is moving?
I think the takeaway is that "Natural Language" isn't just a new way to "Type." It’s a new way to "Think" about problems. We are moving from being "Syntacticians"—people who care about where the commas go—to being "Architects of Intent." The most important skill you can develop in 2026 isn't learning a specific language like Python or Rust; it’s learning how to describe complex logic clearly, precisely, and with enough context that an AI can generate something "intelligible."
And don't be afraid to poke the machine. If the code it gives you looks like gibberish, don't just accept it because it works. Tell it to try again. Tell it to be "boring." Tell it to explain itself. You are the boss of the agent, not the other way around.
And remember that the "Raw Format" still matters. We are not at the point where we can completely ignore the underlying code. Think of it like a "Safety Net." You want that net to be made of strong, visible rope, not invisible fishing line that you’re going to get tangled in when things go wrong.
I like that. "Strong, visible rope." It’s much easier to climb. Well, this has been a fascinating dive into Daniel’s prompt. It’s amazing how much of our "modern" problems were actually identified sixty years ago. We’re just finally getting the tools to actually solve them.
It is a great reminder that progress is often a "Spiral," not a "Line." We come back to the same ideas, but each time we have more power to make them real. I'm genuinely excited to see where this "Intelligible Code" movement goes. Imagine a world where anyone can read a codebase and understand it as easily as reading a well-written book. That is a world with a lot less frustration.
And a lot more "Sloth-approved" naps, because we won't be spending all night debugging machine-generated spaghetti.
I can't argue with that. Before we wrap up, I want to say thanks to our producer Hilbert Flumingtop for keeping the gears turning behind the scenes.
And big thanks to Modal for providing the GPU credits that power this show and help us explore these weird prompts every week.
If you’re enjoying the show, we’d love it if you could leave us a review on Apple Podcasts or Spotify. It’s the best way to help other curious humans—and maybe some curious AI—find us.
You can also find us at myweirdprompts dot com for all the show notes and ways to subscribe. We’re also on Telegram if you want to get notified the second a new episode drops—just search for My Weird Prompts.
This has been My Weird Prompts. I'm Herman Poppleberry.
And I'm Corn. See you next time, and keep those prompts coming!
Goodbye.
Bye.