The stack you choose might not even be your choice anymore. It is increasingly becoming the one your AI agent knows best.
Herman Poppleberry here. And that is a provocative way to start, Corn, but for 2026 it lands dead center. We’ve moved past the "helper" phase of AI. We are now in the "architectural coercion" phase.
It feels like we have hit this weird tipping point. You used to pick a stack because you liked the syntax, or the documentation was good, or maybe your lead developer was a fanatic about a specific database. Now? I feel like people are just looking at their IDE and saying, hey, whatever Copilot can write without hallucinating, that is what we are shipping. We have got a great prompt from Daniel today delving into exactly this. He is looking at how the fundamentals of web design—your frontends, your ORMs, your APIs, and your databases—are being reshaped by agentic development.
It is a massive shift. And just to put it out there, today’s episode is actually powered by Google Gemini 3 Flash. It is writing our script, which is fitting, considering we are talking about AI-driven defaults. We're essentially letting the subject matter dictate the medium today.
Meta. I like it. So, let me read what Daniel sent over because he frames this really well. He says: "The fundamentals of web design can be broken down into frontend libraries, ORMs, API engines, and databases. While you can theoretically mix and match as you see fit, certain combinations are much more prevalent than others. These are called development stacks. Let's look at some of the classic stacks used in web development today and how frameworks like Astro fit in. Agentic development tends to direct developers towards whatever stack the model is most proficient in. AI agents also make using unfamiliar stacks much less intimidating because the agent will end up doing most of the actual coding. What effects are we seeing in how AI tools are changing developer preferences?"
Daniel is hitting on the provider bias issue that is defining 2026. We are moving away from the era where a human chooses a tool based on technical merit and into an era where the tool chooses the human based on training data density. Think about it: if you're an LLM, you've "read" the entire internet. But you haven't read it equally. You've read ten million lines of React for every one line of a niche framework like Qwik or Solid.
It is the path of least resistance, right? If I am a developer and I want to get a feature out by Friday, am I going to fight the AI to help me write a niche framework it barely understands, or am I going to just pivot to the one where it can generate a thousand lines of perfect code in ten seconds? It’s like choosing between a master craftsman who speaks a different language and a robot that speaks your language perfectly but only knows how to build one type of house.
Most people choose the robot house. We have seen this with the consolidation of the frontend. For years, we had the big three—React, Vue, and Angular—and then a million smaller ones like Svelte or Solid. But if you look at the landscape now, React has this gravity that is almost impossible to escape, not because it is necessarily the "best" anymore, but because the LLMs have seen so much of it that their error rates are significantly lower when generating React components compared to almost anything else.
But wait, Herman, isn't that a bit of a self-fulfilling prophecy? If I use React because the AI is good at it, I'm just contributing more React code to the training set for the next model. Are we ever going to see a new frontend framework break through again, or are we stuck in a "React Loop" forever?
It’s a real concern, and there is data on it. In the January 2026 update for GitHub Copilot, they actually measured this: accuracy for React and Next.js patterns was about forty percent higher than for Vue.js. Forty percent! That is not a marginal difference; that is the difference between an assistant and a burden. If you're using Vue, you're spending noticeably more of your time correcting the "assistant." That’s a massive tax on your productivity just for choosing a different syntax.
And that creates a feedback loop. If the AI is better at React, more people use React. If more people use React, there is more React code on GitHub. If there is more code on GitHub, the next model is even better at React. It is a winner-take-all scenario driven by the weights of the model rather than the elegance of the code. It’s like the QWERTY keyboard—it’s not the most efficient, but it’s what everything is built for.
But let's talk about the actual components Daniel mentioned. He broke it down into the four pillars. Frontend, ORMs, APIs, and Databases. If we look at the classic stacks, you had things like MERN—MongoDB, Express, React, Node. Or the old-school LAMP stack with Linux, Apache, MySQL, and PHP. How are those holding up in this agentic world?
MERN is still hanging on, but the "E" and the "M" are being swapped out for more AI-friendly versions. Express is being replaced by things like Hono or just native Cloudflare Workers because agents are very good at writing those small, functional entry points. And MongoDB is facing stiff competition from PostgreSQL, specifically through providers like Supabase or Neon.
Why Postgres? Is it just because it is relational and models are better at logic?
Part of it is the schema. AI agents love schemas. If you give an agent a well-defined SQL schema, it can reason about your entire data layer with incredible precision. NoSQL is a bit more "vibes-based" in terms of structure, which can lead to agents making assumptions that break your app. Like, an agent might assume a field exists in a document because it saw it in a similar app, but in MongoDB, there’s no strict enforcement to stop that hallucination from crashing the query.
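To make that schema point concrete, here is a toy sketch in plain TypeScript. This is not any real database driver, just an illustration of the guardrail: a declared schema lets you reject a malformed document up front, which is exactly the check a schemaless store never forces.

```typescript
// Toy illustration (not a real database driver): with a declared schema,
// a document missing a field, or carrying a surprise field the agent
// "assumed" exists, is rejected before it ever reaches a query.

type FieldType = "string" | "number";

// A SQL-style schema: every column declared up front.
const userSchema: Record<string, FieldType> = {
  id: "number",
  email: "string",
};

function validate(
  doc: Record<string, unknown>,
  schema: Record<string, FieldType>
): boolean {
  const declared = Object.keys(schema);
  // Every declared field must be present with the right primitive type...
  const allPresent = declared.every((k) => typeof doc[k] === schema[k]);
  // ...and no undeclared fields may sneak in.
  const noExtras = Object.keys(doc).every((k) => declared.includes(k));
  return allPresent && noExtras;
}

console.log(validate({ id: 1, email: "a@b.com" }, userSchema)); // true
console.log(validate({ id: 1, nickname: "corn" }, userSchema)); // false: email missing, extra field
```

In a schemaless store, that second document would be inserted happily and only blow up later, at query time, which is precisely the class of agent hallucination the hosts are describing.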
But the real kicker is the ORM—the Object-Relational Mapper. Prisma and Drizzle have become the "AI favorites" of 2026.
I have noticed that. Every time I ask an agent to set up a database layer, it immediately reaches for Prisma. Is it just the popularity, or is there a technical reason the AI prefers it?
Because Prisma is type-safe and incredibly verbose in its schema definition. For a human, writing a giant Prisma schema might feel like a lot of boilerplate. For an AI, that boilerplate is a map. It tells the agent exactly what the types are, what the relations are, and what queries are valid. It reduces the "search space" for the model. If the model knows that user.posts is a one-to-many relationship with a specific type, it won't try to write a query that treats it like a string.
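As a sketch of why that helps, here is some hand-written TypeScript that mimics the shape of the types a schema-first ORM like Prisma generates. This is illustrative only, not actual Prisma output: the point is that once the relation lives in the type, the compiler rules out whole categories of wrong programs.

```typescript
// Hand-written analogue of ORM-generated types (illustrative, not Prisma
// output). Because the one-to-many relation is encoded in the type,
// "user.posts" can only ever be an array of Post: the compiler rejects
// code that treats it like a string, shrinking the agent's search space.

interface Post {
  id: number;
  title: string;
}

interface User {
  id: number;
  name: string;
  posts: Post[]; // the one-to-many relation, explicit in the type
}

function postTitles(user: User): string[] {
  // user.posts.toUpperCase() would be a compile error here,
  // not a runtime surprise in production.
  return user.posts.map((p) => p.title);
}

const corn: User = {
  id: 1,
  name: "Corn",
  posts: [{ id: 10, title: "The React Loop" }],
};

console.log(postTitles(corn)); // logs the array of post titles
```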
So we are essentially seeing the rise of "LLM-friendly" documentation and frameworks. It is like we are designing software not for humans to read, but for models to ingest.
That is exactly what is happening. We are seeing a shift toward explicit, verbose type definitions over what we used to call "magical" abstractions. Remember when Ruby on Rails was the king because of "convention over configuration"? You didn't have to write much code because the framework just "knew" what you wanted.
Yeah, it was great until it wasn't. You’d hit a wall and have no idea what was happening under the hood.
Well, in an AI world, "magic" is a nightmare. If the framework is doing things behind the scenes that aren't explicitly written in the code, the AI has a harder time "seeing" the logic. Agents prefer "configuration over convention." They want everything spelled out in the types and the files. That is why TypeScript has become the absolute, non-negotiable standard. Writing plain JavaScript for an agent is like trying to give directions to someone in a fog. TypeScript provides the guardrails that allow the agent to run at full speed.
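A small example of what "spelling it out" buys you, in plain TypeScript: an explicit result type instead of a "null means failure" convention. Neither a human nor an agent can touch the value without handling the failure branch first, because the compiler refuses to let them.

```typescript
// Explicit over magical: a discriminated union instead of the convention
// that null or -1 means failure. The union forces every caller, human or
// agent, to check the branch before using the value.

type Result<T> =
  | { ok: true; value: T }
  | { ok: false; error: string };

function parsePort(raw: string): Result<number> {
  const n = Number(raw);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    return { ok: false, error: `invalid port: ${raw}` };
  }
  return { ok: true, value: n };
}

const r = parsePort("8080");
// Reading r.value right here would be a compile error:
// TypeScript makes us narrow on r.ok first.
if (r.ok) {
  console.log(r.value); // 8080
}
```

This is the "guardrails" idea in miniature: the invalid program is unrepresentable, so the agent can run fast without wandering into it.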
That leads us to Astro. Daniel specifically asked about how frameworks like Astro fit in. In 2026, Astro has gone from this cool static site generator to a major player, especially after Cloudflare picked them up. Why is Astro winning the AI era?
Astro is fascinating because it tackles the "performance paradox" of AI. One of the big problems with AI-generated code is that it tends to be bloated. If you ask an AI to build a page, it might give you five different React components, each with its own heavy dependencies, because it is just pulling from the most common patterns it knows. This leads to what people are calling "AI Slop"—websites that are technically functional but weigh five megabytes and run like garbage on a mobile phone.
I have seen those. It is like the AI just keeps adding div after div and import after import. It’s the digital equivalent of a hoarder’s house.
Astro’s "Islands Architecture" is the perfect antidote. It allows you to use those React components, or Vue components, or whatever the AI is good at writing, but it only ships the JavaScript for those specific "islands." The rest of the page is pure, fast HTML. It is a "Zero-JS by default" approach.
So it acts as a container that keeps the AI's messiness in check. But how does the agent handle the "Islands" concept? Does it understand where the boundaries are?
Generally, yes. From an agent's perspective, Astro is very modular. The structure of an Astro component, with its clear separation between the "frontmatter" script and the HTML template, is very easy for an LLM to parse. It can handle the data fetching in the top section and the UI in the bottom section without getting confused. It provides a natural "separation of concerns" that aligns with how these models' training data was structured.
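For reference, here is a minimal sketch of that two-part shape. The `TrafficChart` component, its import path, and the API endpoint are hypothetical; `client:visible` is a real Astro client directive that defers an island's JavaScript until it scrolls into view.

```astro
---
// Frontmatter: server-side, plain TypeScript. An agent can reason
// about this section in isolation from the markup below.
import TrafficChart from "../components/TrafficChart.jsx"; // hypothetical island
const res = await fetch("https://example.com/api/stats"); // hypothetical endpoint
const stats = await res.json();
---
<!-- Template: ships as static HTML by default, zero JS. -->
<h1>Traffic report</h1>
<p>{stats.total} visits</p>

<!-- Only this island hydrates in the browser, and only when visible. -->
<TrafficChart data={stats.series} client:visible />
```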
And with Astro v6, which I think dropped earlier this year, they unified the dev server with the production environment. How does that help an agent?
It eliminates "environment drift." One of the most common ways an AI agent fails is when it writes code that works on its "internal" understanding of a server but fails when it hits a specific edge case in production. By making the dev environment an exact mirror of the production edge, the agent can run its own tests and be much more confident that what it wrote will actually work when deployed to Cloudflare or Vercel.
It is like giving the agent a better laboratory to work in. But let's look at the second-order effects here. Daniel mentioned that AI makes using unfamiliar stacks less intimidating. I want to poke at that. Does that mean we are going to see more diversity in stacks, or less? Because on one hand, you say the AI has a bias toward React. But on the other hand, it can help me write Rust even if I don't know Rust. Which force is stronger?
It is a tug-of-war. In the short term, the bias toward established stacks is winning because people want speed. But the "Death of the Junior Hurdle," as some researchers call it, is real. If you are a senior developer who understands architecture and logic, but you have never touched Go, you can now spin up a Go backend in an afternoon with a high-quality agent. The agent handles the syntax, the memory management quirks, and the boilerplate. You just provide the architectural oversight.
So the "barrier to entry" for a new language has dropped from months of study to a few hours of prompting. But doesn't that make us more dependent on the AI? If I use an agent to write Rust, and I don't actually know Rust, am I really "using" Rust, or am I just a passenger?
You're a pilot, Corn. You don't need to know how to build the engine to fly the plane. We saw a case study recently of a startup that needed a high-performance data processing layer. Normally, they would have stuck with Node.js because that is what their team knew. Instead, they used a multi-agent system to scaffold the whole thing in Rust. The agents wrote the memory-safe code, and the human developers focused on the business logic and the API design. They basically bypassed the two-year learning curve of becoming a "Rustacean."
That sounds amazing, but there has to be a catch. If I am building a "Black Box Stack" where I don't really understand the underlying code because the agent wrote it, what happens when it breaks at three in the morning? If I don't know Rust, I can't fix Rust when the AI is offline or hallucinating a fix.
That is the "Maintenance Debt" problem. We are seeing the rise of developers who are great architects but can't debug a complex SQL join or a memory leak in a language they don't actually speak. If the agent wrote it, often only another agent can fix it. You end up in this loop where you are just prompting for fixes without ever truly understanding the root cause. This is what we call "Prompt-Driven Debugging," and it’s a dangerous game.
It is like being a pilot who only knows how to use the autopilot. As long as the weather is clear, you are a genius. The moment the sensors fail, you are in big trouble. And this isn't just a theoretical worry. We're seeing it in the job market. Companies are hiring "AI-Augmented Developers," but they're finding that when things go sideways, the junior and mid-level staff are completely lost.
And that is where the "Junior" problem gets even weirder. If the AI is doing seventy percent of the work, what are the juniors doing? They aren't getting those "reps" in where they learn the fundamentals by making mistakes. They are just becoming "prompt reviewers." They’re looking at code, seeing that it runs, and hitting "approve." But they haven't developed the "code smell" that tells you a piece of logic is technically correct but architecturally a disaster.
We are essentially skipping the apprenticeship phase of software engineering. We're going straight from "I don't know anything" to "I am supervising a machine that knows everything." It's like skipping medical school and going straight to being a hospital administrator.
We are. And that might be fine for high-level web apps, but for critical infrastructure? It is a bit terrifying. But to Daniel’s point about developer preferences, we are seeing a shift in what "skill" means. Skill in 2026 isn't about knowing the syntax of a library. It is about knowing how to evaluate the output of an agent. It is about becoming a "code auditor" rather than a "code writer."
I wonder if this changes the "prestige" of certain stacks. You know how people used to look down on PHP or look up to Haskell? If an AI can write both equally well, does the "cool factor" of a language vanish? Does the "Haskell elite" disappear when an LLM can generate monads all day long?
It definitely levels the playing field. PHP has actually had a bit of a resurgence because it is so well-documented and there are billions of lines of it in the training data. Agents are incredibly proficient at PHP. If you want a functional CRUD app in ten minutes, an agent can give you a more secure and performant PHP version than a human could write in ten hours. The "uncool" languages are becoming the "efficient" languages.
That is a wild thought. The "uncool" languages might be the biggest beneficiaries of the AI era because they have the most training data. It’s like the revenge of the legacy code.
It is the "Autocomplete Trap." The tech that was popular in 2015 to 2022 is the tech the AI knows best. So we are seeing a "fossilization" of the web stack. It is harder for a new, superior framework to gain traction because the AI doesn't know it yet, so developers don't want to use it because they lose their AI productivity boost. Imagine a new framework comes out tomorrow that is 10x faster than React. If Copilot hasn't been trained on it, most developers won't touch it because they don't want to go back to typing every line by hand.
So innovation actually slows down because we are all tethered to the training data of the past. We're basically driving a car that only wants to go to places that were on the map five years ago.
Unless the framework is built "agent-first" from day one. Like what we discussed with Astro or Drizzle. If you build a tool that explicitly caters to how LLMs reason—by having very strict types, clear documentation, and minimal "magic"—you can overcome that gravity. You have to market your framework to the AI as much as you market it to the human.
Let's pivot to the practical side of this. If I am a developer listening to this, or I am running a team, how do I navigate this? Do I lean into the AI's preferences and just use Next.js and Tailwind for everything, or do I fight for autonomy? Is there a middle ground where I can still be innovative without sacrificing the speed of AI?
The first thing you have to do is an "AI Audit" of your stack. You need to actually test your agents against your codebase. If you are using a niche framework and your developers are spending more time fixing AI hallucinations than writing code, you are losing money. In 2026, aligning your team's skills with your AI's strengths is just basic business efficiency. You have to ask: "Is my stack helping my agent help me?"
That feels like a surrender, Herman. "I will use this tool because my robot likes it better." It feels like we're letting the algorithm dictate our creative choices.
It is not a surrender; it is a force multiplier. If you can move five times faster by using a stack the AI understands, why wouldn't you? Save your "autonomy" for the parts of the product that actually matter—the user experience, the unique business logic, the actual problem you are solving. Don't waste your human brainpower on reinventing the wheel for a database connection. We don't complain that we have to use "standard" electrical outlets in a house, right? We just use them so we can focus on the interior design.
Valid. But I also think there is an opportunity here to use agents to explore. Like Daniel said—use the agent to try an unfamiliar stack. If you have been a React dev your whole life, use an agent to build your next project in Svelte or even something more "out there" like Gleam. The agent acts as a universal translator. You can say, "Here is my React component, rewrite it in Gleam and explain the differences."
I love that. Use the AI to break out of your own personal "stack silo." Even if the AI is slightly less proficient in a newer language, it is still more proficient than you are as a beginner. It lowers the cost of experimentation. You can run a "controlled experiment" where you build the same feature in three different stacks over a weekend and actually see which one performs better, rather than just guessing based on a blog post you read.
That is where the real power is. It is not just about following the AI's bias; it is about using the AI to overcome your own human bias. We are seeing startups pivot their entire backend in a week because the AI showed them a better way to do it in a language they didn't previously know. It’s about agility. If you can change your entire stack in a weekend, you’re no longer "married" to your technology. You’re just dating it.
We saw a Series A startup do exactly that recently. They were a Django shop, but their AI tools were much better at generating the modern Next.js/Server Actions pattern. They did a full migration in two weeks that would have taken six months in the pre-AI era. The speed of iteration is just on another level. They didn't do it because they hated Python; they did it because the AI was a better partner in the new ecosystem.
But what about the "monoculture" risk? If everyone is using the same AI-friendly stacks, doesn't the web just become one giant, beige, predictable block of code? If every site is Next.js, Tailwind, and Prisma, aren't we losing the soul of the web?
There is a risk of that. We’re seeing a lot of "AI-looking" websites—very clean, very functional, but soulless. But I think we will see a counter-movement. As the "boilerplate" becomes a solved problem, the value of "human-crafted" or "highly-optimized" code goes up. That is why Astro is so key. It allows for that "beige" AI-generated logic to sit inside a very high-performance, expert-architected container. The human sets the boundaries, the AI fills the boxes.
It is like the difference between a house built with pre-fab walls versus a custom-built artisan home. You use the pre-fab for the stuff that doesn't matter—the interior studs, the insulation—and you spend your time on the beautiful fireplace or the custom windows. You use the AI for the boring CRUD operations and you spend your human time on the "wow" factor.
And the "windows" in web dev are the interactions, the animations, the unique data visualizations—the things that an AI still struggles to get "just right." An AI can give you a chart, but it can't give you a chart that feels emotional or perfectly aligned with a brand's specific "vibe."
So, for the listeners, what are the big takeaways from Daniel’s prompt? First, I would say: acknowledge the bias. Your AI isn't a neutral tool; it is a product of its training data. If it is pushing you toward a certain stack, understand that it's doing so because it’s "comfortable" there, not necessarily because it’s the best choice for your specific project.
And second, lean into "agent-friendly" architectures. If you want to get the most out of your AI, don't fight it. Use TypeScript, use explicit schemas, use modular components. Even if you aren't using an agent today, you will be tomorrow, and your future self will thank you for having a codebase that a model can actually reason about. A "clean" codebase in 2026 is an "AI-readable" codebase.
And third, don't be afraid to be the architect of a language you don't fully speak yet. Use the agents to broaden your horizons. If you have a problem that is best solved by a graph database or a low-level systems language, don't let your lack of experience stop you. The agent is your senior engineer in a box. It’s the ultimate "fake it till you make it" tool, provided you have the architectural sense to know when it’s lying to you.
Just make sure you are auditing that senior engineer. Don't let the "Black Box" become a "Black Hole" for your technical debt. You still need to understand the why, even if the AI is doing the how. If you can't explain why the AI chose a specific library, you haven't finished the task.
Well said. I think we have covered a lot of ground here, from the rise of Astro to the "Autocomplete Trap." It is a wild time to be a developer. You are less of a bricklayer and more of a city planner now. You're moving pieces around on a map instead of laying every stone yourself.
It is a massive shift in identity. Many developers are struggling with it. They feel like they're losing their "craft." But I'd argue the craft is just moving up the stack. It’s moving from "how do I write this loop" to "how do I structure this entire system to be resilient and scalable." And it is only going to accelerate as models like Gemini and Claude get more context and better reasoning.
Speaking of which, we should probably wrap this up before the AI decides it doesn't need us to host the show anymore. I can already hear the synthetic voices warming up in the server room.
It’s getting close, Corn. It’s getting close. I checked the logs this morning and the "Herman-Bot" is already 90% as sarcastic as I am.
(laughs) Only 90%? We’ve still got a few months of job security then. Thanks as always to our producer, Hilbert Flumingtop, for keeping the gears turning behind the scenes and making sure the AI doesn't accidentally delete the podcast.
And a big thanks to Modal for providing the GPU credits that power the generation of this show. They make the agentic future possible, one inference at a time. Without that compute, we'd just be two guys talking to a wall.
This has been My Weird Prompts. If you are finding these deep dives helpful, or if you're currently fighting your AI to let you use a niche framework, do us a favor and leave a review on your podcast app. It really does help people find the show in this sea of AI-generated content.
We appreciate you listening. Keep prompting, keep building, and try not to get trapped in the React Loop.
Catch you in the next one.
See ya.