You know Herman, I was looking at some code the other day, and by looking at it, I mean I was watching an agent spin up a full stack application in about forty-five seconds while I sipped my coffee. It felt amazing, like I had finally achieved the dream of being a digital architect who never has to touch a single brick. But then I realized something terrifying. If the power went out and the agent disappeared, I probably couldn't tell you why the database schema was structured the way it was. I was just vibing. It is that classic vibecoding paradox. We are building faster than ever, but we are understanding less of what we build. Today's prompt from Daniel is about exactly this shift, moving from assistive artificial intelligence that just does the work for us to pedagogical artificial intelligence that actually teaches us while it works.
It is a massive shift, Corn, and honestly, it is the most important conversation in engineering right now. For the record, I am Herman Poppleberry, and I have been diving into the research on this all morning. Daniel is really hitting on the nerve of the industry. We have seen this explosion of what people are calling agentic engineering. It is not just the old school autocomplete where the artificial intelligence suggests the next three words of your function. We are talking about autonomous agents that manage multi-step tasks across the entire delivery lifecycle. But there is a hidden cost. Anthropic released a study in January twenty-six that found a seventeen percent decrease in skill mastery among developers who use these unguided artificial intelligence tools daily. We are offloading the thinking to the machine, and our own mental muscles are starting to atrophy.
Seventeen percent is a pretty steep drop for a profession that prides itself on being the smartest guys in the room. It is funny because the markets are rewarding this trend like crazy. I mean, Cursor reached a twenty-nine point three billion dollar valuation this month. People clearly want the speed. They want to vibe their way to a minimum viable product. But I guess the question is, are we building software that we actually understand, or are we just prompting black boxes and hoping the load balancer holds up? That brings us to the core of the pedagogical divide. How do we actually learn while the machine does the heavy lifting?
That is exactly what we need to define. Pedagogical artificial intelligence agents are the necessary counterbalance to vibecoding. Instead of just outputting a finished block of code, these tools are being designed to create what researchers call cognitive scaffolding. They want to turn the agent into a mentor rather than just a high-speed contractor. We are moving from a world of code completion to a world of cognitive support. If you look at the landscape as of March twenty-five, twenty-six, eighty-four percent of developers are using these tools daily. If we don't fix the learning loop, we are going to have a generation of engineers who can't debug a system without an agent holding their hand.
I like the sound of a mentor, mostly because it implies I still have to be the boss. But how does that actually work in practice? If I am in a hurry to ship a feature, the last thing I want is my editor giving me a lecture on the history of asynchronous programming. How do these pedagogical agents avoid being the digital version of that one professor we had who would never just give you the answer?
It comes down to something called the worked example effect. This is a concept from cognitive load theory that suggests beginners learn better by studying worked-out solutions rather than trying to solve problems from scratch with no guidance. Pedagogical agents use artificial intelligence generated snippets as a roadmap. A great example of this is Microsoft Agent Lightning, which just launched this month. It uses an observational learning engine. Instead of just applying a fix, it shows you the reasoning path. It says, I am choosing this specific design pattern because of these three constraints in your current architecture. It forces you to see the logic before you hit accept. It is not a lecture; it is a transparent decision log.
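To make the transparent decision log concrete, here is a minimal sketch in Python. This is a toy illustration, not Agent Lightning's actual output format; the Decision class and the repository-pattern example are invented for the sake of the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One entry in an agent's transparent decision log."""
    choice: str
    constraints: list                              # project constraints that drove the choice
    rejected: dict = field(default_factory=dict)   # alternative -> reason it was rejected

    def explain(self) -> str:
        # Render the reasoning path the developer sees before hitting accept
        lines = [f"Chose: {self.choice}"]
        lines += [f"  because: {c}" for c in self.constraints]
        lines += [f"  rejected {alt}: {why}" for alt, why in self.rejected.items()]
        return "\n".join(lines)

log = Decision(
    choice="repository pattern for data access",
    constraints=[
        "two storage backends (Postgres and SQLite) must stay swappable",
        "unit tests need an in-memory fake",
        "domain logic should not import the ORM directly",
    ],
    rejected={"active record": "couples domain objects to the database schema"},
)
print(log.explain())
```

The point is the shape, not the class: the developer reviews the choice, the constraints, and the rejected alternatives before accepting any code.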
So it is essentially showing its work. Like when we were in school and the math teacher would take off points if you didn't show the long division, even if the answer was right. I used to hate that, but I suppose if the goal is to actually know how to do division, it makes sense. I saw something similar with Google Antigravity. They are replacing those cryptic raw logs with what they call visual artifacts. Instead of scrolling through thousands of lines of terminal output, you see a visual representation of the agent's decision making process. It feels less like reading a transcript and more like watching a documentary of how your app was built.
The visual aspect of Antigravity is huge because it helps with state space search. When an agent is trying to solve a coding problem, it explores thousands of possible paths. Normally, you only see the final path it chose. But Antigravity shows you the branches it considered and why it rejected them. That is where the real learning happens. You see the edge cases the agent identified, and suddenly you are thinking about those edge cases too. You are learning the boundaries of the problem space by watching the agent navigate it. It turns the black box into a glass box.
But let's pivot from the what to the how. Specifically, how these agents are designed to teach us. I am curious about this idea of productive difficulty. I saw that platforms like AlgoCademy are intentionally making things harder for users. That feels like a bold move in an era where everyone is obsessed with frictionless experiences. Why would I want my tools to be difficult?
Because friction is where the learning happens, Corn. If everything is too easy, your brain doesn't bother to encode the information into long term memory. It is the difference between using a global positioning system to get somewhere versus actually learning the map. If the global positioning system tells you every turn, you will never know the neighborhood. The Healthy Choice research project that came out late last year used complexity management agents. These agents analyze your skill level and intentionally introduce roadblocks or ask you to explain a specific part of the code before it proceeds. They call it productive difficulty because the struggle is what forces the cognitive engagement. It prevents you from just clicking through.
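The explain-before-you-proceed roadblock Herman describes could look something like this toy gate. It is a hypothetical sketch, not the Healthy Choice implementation; a real complexity-management agent would judge the explanation with a language model rather than keyword matching.

```python
def productive_difficulty_gate(snippet, explanation, keywords=None, min_keywords=2):
    """Toy roadblock: the agent withholds its next suggestion until the
    developer's explanation touches enough of the concepts the snippet uses."""
    keywords = keywords or ["closure", "scope", "late binding"]
    hits = sum(1 for k in keywords if k in explanation.lower())
    return hits >= min_keywords

# The snippet the agent pauses on before proceeding
snippet = "fns = [lambda: i for i in range(3)]  # every lambda sees i == 2"

ok = productive_difficulty_gate(
    snippet,
    "each lambda is a closure over the same variable, so late binding makes them all return 2",
)
print(ok)  # True: the explanation mentions closure and late binding
```

A vague answer like "it makes a list of functions" would fail the gate, which is exactly the friction that forces cognitive engagement.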
I can see the marketing pitch now. Buy our software, it is intentionally annoying and makes your job harder. But I get it. It is like a gym for your brain. You don't go to the gym to find the lightest weights; you go for the resistance. I wonder if this changes how we think about computer science education. If eighty-four percent of developers are using artificial intelligence daily, the way we teach coding in universities has to be completely broken right now. We are still teaching people how to write syntax that an agent can write better and faster.
You are hitting on a major debate. Some people are calling for artificial intelligence native computer science degrees. Instead of focusing on syntax mastery in the first year, you would focus on system design and agentic workflows. Andrew Ng has been a big proponent of this. He talks about how the workflow is shifting. It is not about writing a prompt and getting a result. It is an iterative process where the agent builds something, you critique it, the agent refines its own output, and you move through these loops together. The skill becomes knowing how to guide the agent's reasoning.
That sounds a lot like what Yann LeCun was talking about with his tripartite cognitive architecture. He wanted to move artificial intelligence from just being a passive observer that predicts the next word to something that is active and action based. If the artificial intelligence is acting based on a model of the world, and we are interacting with that model, the feedback loop becomes much tighter. It is not just a chat box anymore. It is a shared mental space.
The tripartite architecture is fascinating because it includes a world model, a cost function, and a perception module. When you apply that to coding, the agent isn't just guessing code; it is trying to minimize the cost function of bugs, technical debt, and performance issues. As a developer, you can actually query the agent's cost function. You can ask, why did you think this approach had a lower cost than the one I suggested? When the agent explains its internal trade-offs, that is a masterclass in engineering. You are getting the benefit of the agent having scanned every open source library on the planet, but it is being distilled into a specific lesson for your specific project.
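A toy version of that cost-function query: score each candidate approach as a weighted sum of estimated bug risk, technical debt, and performance cost, then compare. The weights and scores below are made up for illustration; a real agent's cost function would be far richer than three numbers.

```python
def total_cost(candidate, weights):
    """Weighted sum over estimated bug risk, technical debt,
    and performance cost (each scored 0 to 1, lower is better)."""
    return sum(weights[k] * candidate[k] for k in weights)

# Hypothetical weights: this project cares most about correctness
weights = {"bug_risk": 0.5, "tech_debt": 0.3, "perf_cost": 0.2}

candidates = {
    "agent's approach": {"bug_risk": 0.2, "tech_debt": 0.3, "perf_cost": 0.4},
    "your approach":    {"bug_risk": 0.4, "tech_debt": 0.2, "perf_cost": 0.1},
}

for name, scores in candidates.items():
    print(name, round(total_cost(scores, weights), 2))
```

With these numbers the two approaches land within a point of each other, which is exactly the kind of near-tie where asking the agent to justify its weighting teaches you the trade-off.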
I have to admit, I have used the agent to explain things to me before, and it is usually better than any documentation I have found. Documentation is often written by someone who already knows the answer and has forgotten what it is like to be confused. The artificial intelligence, on the other hand, can meet you exactly where your confusion is. It is like having a personalized tutor that has infinite patience and never gets tired of you asking what a pointer is for the tenth time.
That is the observational learning piece that Daniel mentioned. You are watching the agent work, and you are internalizing the patterns. Over time, you start to recognize the architectural smells or the elegant solutions before the agent even suggests them. It is a form of apprenticeship. In the old days, you would sit next to a senior developer and watch them code. Now, the artificial intelligence is the senior developer, but it is one that can explain every single keystroke in real time.
It is funny you call it a senior developer. I have seen some artificial intelligence code that looks more like a junior developer who stayed up all night on too much caffeine. It is fast, but it is messy. That is where I worry about the black box problem. Addy Osmani was warning about this in February. He called it the risk of black box software. If no human understands the underlying logic because an agent generated the whole thing based on a high level intent, then the long term maintainability becomes a nightmare. If I don't know how the database works, I won't realize the agent just hallucinated a connection string that is going to leak all our data.
That is why the pedagogical part is so critical. We have to move away from the idea of the artificial intelligence as a black box and toward the idea of the artificial intelligence as a glass box. You should be able to see through it. Tools like Antigravity that use visual artifacts are a step in that direction. But we also need to change our own habits. We have to stop hitting the accept button the second the code pops up. We need to adopt a habit of asking why. Why this library? Why this loop structure? Why this security protocol?
I think a lot of people are afraid that if they don't hit accept immediately, they will fall behind. If my colleague is vibecoding and shipping five features a day, and I am sitting here having a deep philosophical discussion with my agent about the merits of functional programming, am I going to get fired? There is a real pressure to prioritize speed over mastery, especially when you see those twenty-nine billion dollar valuations.
It is a short term versus long term play. The person shipping five features a day via pure vibecoding is building a mountain of technical debt that will eventually collapse. In a year, they won't be able to fix a single bug because they don't understand how the system fits together. The person using pedagogical artificial intelligence is building their own expertise alongside the codebase. They are becoming more valuable, not less. The goal of the agent should be to make the human redundant in the specific task but essential in the understanding.
That is a great way to put it. Redundant in the task, essential in the understanding. It reminds me of the transition from manual calculators to computers. We didn't stop learning how math works just because we got spreadsheets. We just started doing more complex math. Maybe that is the future here. We aren't going to stop being engineers; we are just going to be engineering at a much higher level of abstraction.
And to do that, we need a new vocabulary. We are already seeing terms like observational learning and the worked example effect. But I think we will see more terms around agentic pedagogy. Maybe something like cognitive offloading audits, where you look at your workflow and measure how much of the thinking you are actually doing versus just delegating. Aishwarya Srinivasan has been writing about this recently, emphasizing that the human needs to remain the moral and logical anchor of the development process.
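A cognitive offloading audit could be as simple as counting how often you accepted the agent's code without asking why. The metric and the event format below are purely hypothetical, invented just to make the idea concrete.

```python
def offloading_ratio(events):
    """Toy 'cognitive offloading audit' metric: the fraction of accepted
    agent suggestions that the developer accepted without asking why."""
    accepted = [e for e in events if e["action"] == "accept"]
    blind = [e for e in accepted if not e.get("asked_why")]
    return len(blind) / len(accepted) if accepted else 0.0

# One hypothetical day of interactions with the agent
day = [
    {"action": "accept", "asked_why": True},
    {"action": "accept", "asked_why": False},
    {"action": "accept", "asked_why": False},
    {"action": "reject"},
]
print(round(offloading_ratio(day), 2))  # two of three accepts were blind
```

A ratio creeping toward one would be the signal Corn jokes about next: time to turn up the productive difficulty.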
I like the idea of a cognitive audit. It is like checking your screen time, but for your brain. If I spent eight hours today and didn't solve a single logic puzzle myself, maybe I need to turn up the productive difficulty on my agent. I want to talk about some practical takeaways for people listening who are probably feeling a bit of that vibecoding guilt right now. How do we actually start using these tools as educational resources instead of just crutches?
I have a concrete checklist for this. The first thing is to change your default interaction. Instead of asking the agent to write the code, ask it to explain the approach first. Say, I want to implement a real time notification system, what are the three best ways to do this and what are the trade-offs of each? Force it to give you the high level architecture before a single line of code is written. This engages your brain in the design phase, which is where the most important learning happens.
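That approach-first habit can be captured as a reusable prompt template. The wording and the example constraints below are just one possible phrasing, not a prescribed format.

```python
# A reusable approach-first prompt: architecture discussion before any code
APPROACH_FIRST = """I want to implement {feature}.
Before writing any code:
1. List the three strongest approaches.
2. For each, give the trade-offs (complexity, scaling, operational cost).
3. Recommend one for my constraints: {constraints}.
Do not produce code until I pick an approach."""

prompt = APPROACH_FIRST.format(
    feature="a real time notification system",
    constraints="small team, roughly ten thousand concurrent users, existing Postgres",
)
print(prompt)
```

The last line of the template is the important one: it blocks the agent from skipping the design conversation and jumping straight to a finished block of code.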
That is a great first step. I started doing that recently, and it is actually really helpful. It turns the agent into a sounding board. What is the second thing on your list?
Prioritize tools that implement productive difficulty. If your editor is just a giant autocomplete that hides all the complexity, you are going to lose your edge. Look for tools like Antigravity or Microsoft Agent Lightning that expose the reasoning path. If a tool makes it too easy to ignore the details, it is probably not helping you grow in the long run. You want a tool that challenges you, not just one that pampers you.
And what about after the code is written? I feel like I often just move on to the next task once the tests pass.
That is where retrospective prompting comes in. After the agent has built something and it works, ask it to explain the most complex part of what it just did. Say, explain this recursive function to me like I am an intermediate developer who understands the basics but struggles with this specific pattern. Use the agent's ability to summarize and distill information to fill in your own knowledge gaps. You have the working code right there, which is the perfect worked example. Don't let that learning opportunity go to waste.
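Retrospective prompting in practice: take the working code as the worked example and feed it back with an explain-it-to-me request. The flatten function and prompt wording below are invented for illustration; any freshly shipped function works the same way.

```python
def flatten(nested):
    """The 'worked example': a recursive function the agent might have shipped."""
    out = []
    for item in nested:
        if isinstance(item, list):
            out.extend(flatten(item))  # recurse into sublists
        else:
            out.append(item)
    return out

# The retrospective prompt, targeted at your actual knowledge gap
RETRO_PROMPT = (
    "Explain this recursive function to me like I am an intermediate developer "
    "who understands the basics but struggles with this specific pattern:\n\n{code}"
)
prompt = RETRO_PROMPT.format(code="def flatten(nested): ...")  # paste the real source here

print(flatten([1, [2, [3, 4]], 5]))  # [1, 2, 3, 4, 5]
```

The code already works and the tests already pass, which is what makes it a perfect worked example: the five-minute explanation costs nothing and fills in the gap.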
That is huge. It is like having a post mortem for every feature you ship, but it only takes five minutes. I also think we need to be honest about our own limitations. If you are using an agent to build something in a language you don't know at all, you aren't an engineer; you are a tourist. There is nothing wrong with being a tourist if you just want to build a quick side project, but if you want to be a professional, you have to eventually learn the language. Use the agent as a translator until you can speak for yourself.
That is a great analogy, Corn. Don't be a permanent tourist. Use the agent to get your foot in the door, but then do the work to move into the house. I think we are going to see a lot more of this in the coming years. As the agents get more powerful, the value of human mastery will actually go up, not down. Because when things go wrong, and they always do, you need someone who knows how to go beneath the surface.
It is the difference between a pilot who only knows how to use the autopilot and a pilot who actually knows how to fly the plane. When the sensors freeze up at thirty thousand feet, you want the person who spent time in the simulator practicing the hard stuff. I think we should wrap this up, but I am feeling a lot better about my coffee sipping while the agent works. As long as I am asking it why it is doing what it is doing, I am still technically working out.
You are definitely working out, Corn. Just keep those mental muscles engaged. This shift from assistive to pedagogical artificial intelligence is probably the most important transition in software development since the move to high level languages. It is about keeping the human in the loop, not just as a rubber stamp, but as a genuine partner in the creative process. We aren't anti-artificial intelligence; we are pro-mastery.
Well, I am going to go see if I can convince my agent to teach me something about quantum encryption, or maybe just how to make a better sandwich. Either way, it is going to be a learning experience. Thanks for diving into the weeds with me, Herman. You really are a font of knowledge, even if you are a donkey.
I take that as a compliment, Corn. Donkeys are known for their stamina and their intelligence, and I have both in spades when it comes to technical research. This has been a great exploration of the pedagogical divide.
Big thanks to Daniel for the prompt today. It really got us thinking about the future of how we all do our jobs. If you want to dive deeper into some of the stuff we talked about today, check out episode fourteen sixty-four where we talked about Claude Code and the agentic harness. It gives a lot of the background on how we got to this point.
And if you are interested in the rulebook side of things, episode fourteen forty-seven on programming agents in plain English is another good one. It connects really well to this idea of human readable logic and how we can maintain control over autonomous systems.
Thanks as always to our producer Hilbert Flumingtop for keeping the gears turning behind the scenes. And a huge thank you to Modal for providing the graphics processing unit credits that power this whole operation. We literally couldn't do this without them.
This has been My Weird Prompts. If you are enjoying the show, a quick review on your podcast app really helps us out and helps other people find these deep dives into the future of engineering.
You can also find us at myweirdprompts dot com for the full archive and all the ways to subscribe. We will be back soon with another prompt.
Goodbye everyone.
See ya.