#2043: Python, TypeScript, Rust: The Agent Engineer's Stack

Skip no-code traps. Learn the real stack for building agentic AI: Python, TypeScript, and Rust.

Episode Details
Episode ID
MWP-2199
Published
Duration
26:51
Audio
Direct link
Pipeline
V5
TTS Engine
chatterbox-regular
Script Writing Agent
Gemini 3 Flash

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The era of simple chatbot wrappers is ending. As we move into 2026, the focus is shifting from prompt engineering to true agent engineering. Building production-grade agentic systems requires a fundamental change in mental model: moving from linear chains to complex state machines. This means understanding system architecture, concurrency, and the right language stack to support scalable, reliable AI.

The Foundation: State Machines and Distributed Systems
At the core of modern agentic frameworks like LangGraph is the concept of a state machine. Unlike a simple script that follows a fixed path (A leads to B), an agent operates within a graph. The AI acts as an engine, deciding which node to visit next based on the current state. Nodes are functions, and edges are transitions governed by conditional logic. This architecture allows for loops, enabling an agent to self-correct. For instance, a "validator" node can check an output and, if it fails, loop back to the original task node with a correction message, effectively coding a "Plan-Do-Check-Act" cycle.
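The validator loop described above can be sketched without any framework. The node names, the shape of the `state` dictionary, and the string transitions below are all invented for illustration (LangGraph's real API looks different), but the control flow is the same Plan-Do-Check-Act idea:

```python
# A framework-free sketch of a graph with a validator loop.
# Node names and the state shape are illustrative, not LangGraph's API.

def draft(state):
    # Stand-in for the task node (in a real agent, an LLM call).
    state["output"] = state["task"].upper()
    state["attempts"] += 1
    return "validate"

def validate(state):
    # Deterministic check; on failure, loop back with a correction.
    if state["attempts"] < 2:          # pretend the first draft fails
        state["correction"] = "try again"
        return "draft"
    return "done"

NODES = {"draft": draft, "validate": validate}

def run(task):
    state = {"task": task, "attempts": 0}
    node = "draft"
    while node != "done":
        node = NODES[node](state)     # each node returns the next edge
    return state

final = run("write a report")
print(final["attempts"])  # the agent needed two passes
```

The key property is that the loop lives in the edges, not in the nodes: the task node never knows it is being retried.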

This approach also embraces the Actor Model, a concept from concurrent computing. In a multi-agent system, each agent (like a researcher or a coder) operates as an independent actor with its own state. They communicate via messages and fail in isolation. If the coder agent crashes, it doesn't take down the researcher. This is managed through sub-graphs, where an agent can be a node within a larger graph, creating a hierarchy of independent, resilient microservices.

The Essential Language Stack for 2026
To master this domain, a developer needs a specific language stack. Python remains the king for logic and ecosystem, but it's not enough on its own.

  • Python: The primary language for agent logic. Key skills include mastering asynchronous programming with asyncio to run tool calls in parallel and stream real-time updates to users. Pydantic is also critical for defining structured output schemas, validating LLM responses, and enabling self-healing code by catching errors and feeding them back to the model.
  • TypeScript: The second essential language. Most production systems have a front-end component, and TypeScript is ideal for managing agent state in the browser. Its rigid type system is better suited for the complex, nested JSON structures that LLMs produce, offering compile-time validation that Python's type hints can't match. Frameworks like Next.js often place agent logic in edge functions written in TypeScript.
  • Rust: The third language for high-performance orchestration. As systems scale to thousands of concurrent sessions, Python's Global Interpreter Lock becomes a bottleneck. Rust's ownership model and zero-cost abstractions allow for safe, efficient concurrency and massive memory savings. It's also key for secure tool execution via WebAssembly (Wasm), letting agents run code in isolated sandboxes without raw server access.

This stack moves a developer from a prompt engineer to a systems architect, building the infrastructure for AI to function safely at scale. The open question remains how quickly the industry will adopt this multi-language approach, but the technical advantages for production systems are clear.


#2043: Python, TypeScript, Rust: The Agent Engineer's Stack

Corn
If you are trying to build serious agentic AI systems, the no-code tools are a trap. The real power, and the real learning, happens in the code.
Herman
Herman Poppleberry here, and I could not agree more. We are seeing a massive shift right now in early twenty twenty-six from what I call prompt engineering to actual agent engineering. The honeymoon phase with simple chatbots is over. People are tired of "wrappers" that just provide a pretty UI for a single API call.
Corn
It really is. People are realizing that a text box that just talks back isn't enough. We want systems that do things—systems that can interact with APIs, manage long-term memory, and recover from their own mistakes without a human holding their hand. But today's prompt from Daniel really hits the nail on the head regarding how you actually get there. He wrote to us saying: I want to do an episode for someone who wants to learn major agentic AI frameworks like LangGraph and skip all the no-code stuff to focus on mastering the code that makes agentic AI work. Beyond getting good at Python, what areas would you direct them towards in learning this? What would be a good second and third language? And within Python, what functions would you recommend spending time on?
Herman
That is such a high-value question because the barrier to entry for toy agents is dropping, but the complexity for production systems is skyrocketing. By the way, a quick fun fact before we dive into the deep end: today's episode is actually being powered by Google Gemini three Flash.
Corn
It is, and it is helping us map out this technical roadmap. So, Herman, Daniel wants to skip the "wrappers." He wants the raw mechanics. Where does a developer even start when they want to move past just sending a string to an API and waiting for a response?
Herman
You have to start by changing your mental model of what an agent is. In the old days, like twenty twenty-four, we talked about chains. Step A leads to Step B. It was very linear, very fragile. But true agentic AI, the stuff Daniel is asking about with LangGraph, is about state machines. You aren't writing a script; you are designing a graph where the AI is the engine that decides which node to visit next.
Corn
Right, and that implies a lot more than just knowing how to write a function. It sounds like you need to understand system architecture. If the LLM is the "engine," the developer is the one building the transmission and the steering rack.
Herman
Precisely. To master agentic AI, you have to look at agents as distributed systems. One of the first foundational areas I tell people to study is Finite State Machines, or FSMs. If you look at LangGraph, it is essentially a state machine implemented in Python. You have nodes, which are your functions, and edges, which are the transitions. But the magic is in the conditional logic, the routers. You need to understand how to define a state, how to update that state without breaking the history of the conversation, and how to handle loops.
Corn
Loops are the big one, right? Because a simple chain can't correct itself. If the LLM hallucinates in a chain, the whole thing falls over. In a graph, it can loop back and try again. But how does that work in practice? Does the developer write the retry logic, or does the LLM "know" it failed?
Herman
That’s the core of the engineering challenge. You usually build a "validator" node. The agent performs a task, moves to the validator node, and the validator—which might be another LLM or a deterministic code check—decides if the output is acceptable. If not, the edge points back to the original task node with a "correction" message in the state. You’re essentially coding a "Plan-Do-Check-Act" cycle into the graph.
Corn
And that brings us to the second pillar: the Actor Model. This is a concept from concurrent computing where "actors" are independent entities that communicate via messages. They have their own private state and they fail in isolation. When you start building multi-agent systems, where you have a researcher agent talking to a coder agent, you are essentially building an actor system. If the coder agent crashes because of a syntax error, it shouldn't take down the researcher who is still fetching documentation.
Herman
Think of it like a professional kitchen. The sous-chef doesn't stop chopping onions just because the pastry chef burnt a soufflé. They are independent actors. In LangGraph, this is managed through "sub-graphs." You can have an agent that is itself a node in a larger graph. Mastering this hierarchy is how you build systems that don't just "hallucinate into a corner" and stop working.
Corn
I love that. It makes the "agent" feel less like a magic brain and more like a microservice. But Daniel also asked about languages. Obviously, Python is the king of the hill for AI, but he asked for a second and third language to give a developer an edge. What is the "secret weapon" language for an agent engineer?
Herman
This might surprise some people, but the second language has to be TypeScript.
Corn
TypeScript? Really? I thought you'd say something like C plus plus for the heavy lifting or Julia for the math. Why TypeScript?
Herman
No, because of the full-stack reality. Most production agent systems aren't just running in a terminal. They have a front-end component. If you are building a web-based interface where a user is interacting with an agent in real-time, you are managing that agent's state in the browser. LangGraph and LangChain have first-class JavaScript and TypeScript support for a reason.
Corn
Is it just about the front end, though? Or is there something about the language itself that helps with agents?
Herman
It is both. TypeScript's type system is actually often better suited for the complex, nested JSON structures that LLMs spit out. When you are dealing with deeply nested tool calls and structured outputs, having a rigid type system that can validate those objects at compile time is a massive advantage over Python's more flexible type hinting. Plus, if you look at the industry trends in early twenty twenty-six, the most advanced agentic interfaces are being built with frameworks like Next dot J S, where the agent logic lives in an edge function written in TypeScript.
Corn
That makes sense. It's about where the code actually runs. If you want low latency and a snappy UI, you want that logic close to the user. So, TypeScript is the "production choice." What about the third language? You mentioned something earlier about high performance.
Herman
That is where Rust comes in. Rust is becoming the performance choice for agent orchestration. As these systems get more complex, you aren't just running one agent; you might be running ten thousand concurrent agent sessions. Python's Global Interpreter Lock becomes a massive bottleneck there.
Corn
I've heard some people say Rust is overkill for AI because the "intelligence" is in the model, not the code. If the model takes two seconds to respond, does a ten-millisecond savings in the orchestration layer really matter?
Herman
It matters when you scale. The intelligence is in the model, but the "plumbing" is the code. Think about a major AI startup. If they are handling ten thousand concurrent sessions where each session involves multiple tool calls, file I O, and network requests, they need a language that can handle that concurrency safely and efficiently. Rust's ownership model is perfect for this. There is actually a case study from a startup that moved their agent orchestration layer from Python to Rust and saw a ninety percent reduction in memory usage and a massive jump in throughput.
Corn
That is wild. Ten thousand concurrent sessions on a single orchestration layer? How are they even managing the state for that many agents?
Herman
They use highly optimized key-value stores, and Rust's ability to handle zero-cost abstractions means they can process those state updates without the garbage collection pauses you'd get in Java or the overhead of Python. And there is another angle here: WebAssembly, or Wasm. As agents become more autonomous, we want them to be able to execute code securely. We don't want to give an LLM raw access to a Python shell on our server. So, we compile small "tools" into Wasm modules using Rust.
Corn
Oh, that's clever. So the agent can "write" a solution, but it executes in a totally isolated sandbox?
Herman
Microsoft has this project called Wassette that allows agents to execute these modules in a sandbox that is incredibly fast and secure. If you know Rust, you can build the high-performance tools that the agents actually use to do their work. It’s about building a "secure execution environment" for the agent to play in.
Corn
So we have Python for the logic and the ecosystem, TypeScript for the interface and the full-stack state management, and Rust for the high-performance orchestration and secure tool execution. That sounds like a formidable stack for twenty twenty-six.
Herman
It really is. It moves you from being a "prompt engineer" to being a systems architect. You're building the infrastructure that allows the AI to function safely at scale.
Corn
Let's bring it back to Python for a second, because Daniel specifically asked which functions and patterns within Python are critical. If someone already knows basic Python—they can write a loop, they know what a dictionary is—what are the "pro" level features they need to master for frameworks like LangGraph?
Herman
Number one, without a doubt, is asynchronous programming. You must master the asyncio library.
Corn
I knew you were going to say that. Async is the bane of many beginner developers' existence. Why is it so non-negotiable for agents?
Herman
Because agentic workflows are incredibly slow if they are sequential. Imagine an agent that needs to search the web, call a database, and then summarize the results. If you do that synchronously, the agent is just sitting there idling while it waits for the network response from the search engine. In a production system, that is unacceptable. You need to be able to use asyncio.gather to run those tool calls in parallel.
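Herman's point about parallel tool calls is a few lines with `asyncio.gather`. The two "tools" here are stand-ins whose `sleep` calls simulate network latency; with `gather`, the total wall time is roughly one sleep, not two:

```python
import asyncio
import time

# Stand-in tool calls; the sleeps simulate network latency.
async def search_web(q):
    await asyncio.sleep(0.2)
    return f"results for {q}"

async def query_db(q):
    await asyncio.sleep(0.2)
    return f"rows for {q}"

async def agent_step(q):
    # Run both tools concurrently instead of one after the other.
    return await asyncio.gather(search_web(q), query_db(q))

start = time.perf_counter()
web, db = asyncio.run(agent_step("langgraph"))
elapsed = time.perf_counter() - start
print(web, db, round(elapsed, 2))
```

Run sequentially, the same two calls would take about 0.4 seconds; gathered, they finish in about 0.2.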
Corn
And it's not just about speed, right? It's about the "feel" of the application. If I'm a user, I don't want to wait thirty seconds for a "thinking" spinner.
Herman
Right. If you want to stream the agent's "thought process" to the user in real-time, you need async generators. You need to be able to yield tokens or state updates as they happen, rather than waiting for the entire process to finish. If you don't understand async/await and how the event loop works, you are going to struggle with every major agent framework out there. LangGraph is built on top of an async-first architecture. If you try to run it purely synchronously, you're fighting the framework’s design.
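The async generator pattern Herman mentions, reduced to its skeleton. Splitting a string stands in for tokens arriving from a model; in a real app each `yield` would push an update to the client instead of appending to a list:

```python
import asyncio

# A minimal async generator that yields "tokens" as they arrive,
# standing in for streamed LLM output.
async def stream_tokens(text):
    for token in text.split():
        await asyncio.sleep(0)   # give control back to the event loop
        yield token

async def main():
    received = []
    async for token in stream_tokens("agents stream state updates"):
        received.append(token)   # real app: push to the client here
    return received

tokens = asyncio.run(main())
print(tokens)
```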
Corn
Okay, so asyncio is the big one. What else? What about data handling? Because agents are basically just JSON-processing machines.
Herman
You have to master Pydantic. It has become the industry standard for structured output. When you're building agents, you're constantly telling the LLM: "Don't just give me text; give me a JSON object that matches this specific schema." Pydantic allows you to define those schemas as Python classes. It handles the validation, the error messaging, and it can even generate the JSON schema that you send to the LLM.
Corn
I've seen that in action. It's so much cleaner than trying to parse raw strings with regular expressions. But what happens when the LLM gets the JSON slightly wrong? Does Pydantic just throw an error?
Herman
That’s where the "agentic" part comes back in. You catch that Pydantic validation error, pass the error message back into the LLM, and say, "Hey, you missed a required field here, try again." This is called "self-healing code," and you can't build it reliably without a strong validation library like Pydantic. It's the difference between a toy and a tool.
Corn
Interesting. So Pydantic is your shield against bad data. What’s another "deep cut" for Python?
Herman
I'd point people toward the inspect module. This is how these frameworks work under the hood. When you give a Python function to an agent as a "tool," the framework uses inspect to look at the function's signature, its docstring, and its type hints to automatically build the JSON description that the LLM needs to see. If you understand how inspect works, you can build your own custom tool decorators and abstractions. You stop being a consumer of LangChain and start being a contributor to the ecosystem.
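A toy version of that trick: use `inspect` to turn a plain function into the JSON-style tool description an LLM would see. The schema layout here is our own simplification, not any framework's exact wire format:

```python
import inspect

def search_web(query: str, max_results: int = 5) -> list:
    """Search the web and return result snippets."""
    return []

def tool_schema(fn):
    # Read the signature, docstring, and type hints to build a
    # tool description, the way agent frameworks do under the hood.
    sig = inspect.signature(fn)
    params = {
        name: {
            "type": p.annotation.__name__,
            "required": p.default is inspect.Parameter.empty,
        }
        for name, p in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": params,
    }

schema = tool_schema(search_web)
print(schema["parameters"]["query"])
```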
Corn
It's about understanding the "magic" so it's not magic anymore. Speaking of magic, what about state management? LangGraph talks a lot about "state." How does that look in Python code? Is it just a big global variable?
Herman
God, no! Global variables are the enemy of agentic scaling. That is where typing.TypedDict and Generics come in. LangGraph uses TypedDict to define the "State" object that gets passed around the graph. It's a way of saying, "This dictionary will always have these specific keys with these specific types." But the "secret sauce" is how you update that state. You don't just overwrite it; you use "reducers."
Corn
Reducers? Like in Redux or React? That sounds very front-end-heavy for a Python backend.
Herman
You define how a new piece of information should be merged into the existing state. For example, if an agent returns a new message, you don't want to delete the whole conversation history; you want to append that message to a list. Understanding how to write these reducer functions—using things like operator.add or custom merging logic—is the most important part of mastering LangGraph. It ensures that your agent’s "memory" is consistent and predictable.
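The reducer idea in miniature: a `TypedDict` state where each key declares how updates merge into it, with `operator.add` appending to the message history instead of overwriting it. The `AgentState` shape and `apply_update` helper are invented here; LangGraph wires the same idea into its graph machinery:

```python
import operator
from typing import TypedDict

class AgentState(TypedDict):
    messages: list[str]
    step: int

# How each key merges: append for messages, overwrite for step.
REDUCERS = {"messages": operator.add, "step": lambda old, new: new}

def apply_update(state, update):
    # Merge a node's partial update into the state without
    # clobbering the conversation history.
    merged = dict(state)
    for key, value in update.items():
        merged[key] = REDUCERS[key](state[key], value)
    return merged

state: AgentState = {"messages": ["hi"], "step": 0}
state = apply_update(state, {"messages": ["hello back"], "step": 1})
print(state)
```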
Corn
It sounds like we're moving toward a world where "coding for AI" is actually just "very high-quality software engineering."
Herman
That's the epiphany. The better you are at traditional software engineering—concurrency, state management, data validation—the better you will be at building AI agents. The "AI" part is just one function call in a much larger, more complex system. If your system is buggy, the AI will just find new and creative ways to trigger those bugs.
Corn
I think one of the most interesting things Daniel mentioned in his notes was this idea of "The Framework is the State Machine." Can we dig into that? Why is LangGraph winning over the older "chain" based approaches like we saw in early LangChain?
Herman
It's because the "chain" model was a lie. Real human workflows aren't linear. If I ask you to write a report, you don't just write it and hand it to me. You might write a draft, realize you're missing data, go back to research, update the draft, check for typos, and then hand it over. That's a graph with loops. It’s iterative.
Corn
And the old frameworks hated loops. I remember trying to build a loop in early twenty twenty-four and it felt like I was trying to build a bicycle out of cooked spaghetti.
Herman
They were terrified of them! If you tried to put a loop in a linear chain, you'd end up with infinite recursion or a stack overflow. LangGraph treats the agent as a directed graph, one that can legitimately contain cycles, where the state is externalized. This means you can "checkpoint" the state. You can save the entire state of the agent to a database, turn off the server, and then come back a week later and resume the agent right where it left off. That is a game-changer for long-running autonomous tasks.
Corn
That "human-in-the-loop" aspect is huge too. If the agent is about to spend a thousand dollars on an ad campaign, you probably want it to stop and ask for permission. How does the graph handle that "pause"?
Herman
In a graph-based system, that's just another node. The agent reaches the "approval" node, the state is saved, the execution pauses, and it waits for an external event—a human clicking "OK" in a dashboard—to trigger the next transition. Because the state is persistent, the agent doesn't "forget" what it was doing. It’s like a video game save point.
Corn
It's funny, we're talking about all this complex code, but Daniel also mentioned something called the "Model Context Protocol" or MCP. He called it the "USB port" for AI agents. How does that fit into this coding roadmap? Is it something a developer needs to learn alongside Python?
Herman
MCP is a new standard from Anthropic that is trying to solve the "integration nightmare." Right now, if you want an agent to talk to Slack, you write Slack-specific code. If you want it to talk to Google Drive, you write Drive-specific code. MCP provides a unified protocol so that any tool can be exposed to any agent in a standardized way. It’s like how every mouse and keyboard uses USB now instead of those old round PS/2 ports.
Corn
So if I'm a developer learning this, should I be focusing on building MCP servers? Or is that too "low-level"?
Herman
Absolutely focus on it. If you can write an MCP server in Python or TypeScript, you've just made your tool available to every major agentic framework. It's the ultimate "force multiplier." Instead of building one-off integrations, you're building standardized components. It’s the future of how agents will interact with the real world.
Corn
This really changes the "learning path" for a developer. It's not about memorizing prompts anymore. It's about mastering the protocols, the state management, and the concurrency. It feels like we’re finally moving away from the "magic box" phase.
Herman
It's about building the "nervous system" of the agent. The LLM is just the brain, but the brain is useless without a nervous system to connect it to the world and a memory to keep track of what it's doing. If you only focus on the brain, you’re just building a philosopher in a jar. If you focus on the nervous system, you’re building an employee.
Corn
I'm curious about the "Small Models as Routers" idea too. That feels like a very "pro" move that moves away from just throwing the most expensive model at every problem. Does that require special code, or is it just another node in the graph?
Herman
This is where the engineering really shines. Using GPT-4o or Claude 3.5 Sonnet for every single step in a graph is like using a Ferrari to drive to the mailbox. It's expensive and slow. The "pro" approach is to use a tiny, lightning-fast model—like a Llama 3 8B or even a specialized 1B model—as a "router." Its only job is to look at the user's input and decide which specialized agent should handle it.
Corn
"You want a weather report? Go to the Weather Agent. You want a code review? Go to the Coder Agent." And the router model is essentially just a high-speed traffic cop.
Herman
And because that router model is so small, it can run with near-zero latency and near-zero cost. You save the "big guns" for the actual reasoning tasks. This kind of architectural thinking is what separates a senior agent engineer from someone who just finished a "Hello World" AI tutorial. It’s about cost-efficiency and performance, not just "making it work."
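The dispatch side of that architecture is ordinary code. Here the "router model" is replaced by a keyword check to keep the example runnable; in production that decision would come from a small, fast LLM, but the surrounding routing table and dispatch would look much the same:

```python
# A deterministic stand-in for the small router model. In production,
# route() would call a cheap, fast LLM; the dispatch code is the same.
AGENTS = {
    "weather": lambda q: f"forecast for: {q}",
    "coder":   lambda q: f"code review of: {q}",
}

def route(query):
    # The router's only job: pick a specialist, then get out of the way.
    if "review" in query or "bug" in query:
        return "coder"
    return "weather"

query = "please review this function"
answer = AGENTS[route(query)](query)
print(answer)
```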
Corn
It feels like we're finally seeing the "engineering" put back into "AI engineering." It's less about vibes and more about benchmarks, state transitions, and error handling. I think that’s why Daniel’s question is so timely.
Herman
I think that's why Daniel is asking this. He sees the "vibes" era ending. If you want to build something that a bank or a hospital or a major tech company actually trusts, you have to be able to explain exactly how the state is managed, how the tools are secured, and how the loops are controlled. You can't just say, "Well, the prompt is really good." You have to show the architecture.
Corn
So, if we're looking at a concrete path forward for someone listening... where do they start? You said master async first. What's the "Hello World" of production agents? Is it a weather app? A chatbot?
Herman
I would say: Build a simple LangGraph agent that uses an async generator to stream a response, but give it a "memory" that persists in a database. Don't just keep it in a Python variable. Use a SQLite database to save the state of the graph after every node.
Corn
That forces you to learn how the state is serialized, right? Because you can't just save a Python object directly to disk easily without some thought.
Herman
It forces you to learn everything. You have to learn how to define the schema with Pydantic, how to handle the async database calls, how to manage the graph transitions, and how to resume the state. Once you've done that, you've crossed the chasm from "scripting" to "system design." You’ve built a persistent, stateful agent.
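The persistence half of that exercise fits in a few lines of stdlib `sqlite3`: serialize the state after every node, keyed by a thread ID, so a killed process can resume. The table layout and function names here are our own, not LangGraph's checkpointer API:

```python
import json
import sqlite3

# Checkpoint the graph state to SQLite after every node so a killed
# process can resume where it left off. Schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE checkpoints (thread_id TEXT PRIMARY KEY, state TEXT)"
)

def save_state(thread_id, state):
    conn.execute(
        "INSERT OR REPLACE INTO checkpoints VALUES (?, ?)",
        (thread_id, json.dumps(state)),
    )
    conn.commit()

def load_state(thread_id):
    row = conn.execute(
        "SELECT state FROM checkpoints WHERE thread_id = ?", (thread_id,)
    ).fetchone()
    return json.loads(row[0]) if row else None

save_state("thread-1", {"messages": ["hi"], "node": "research"})
resumed = load_state("thread-1")
print(resumed["node"])
```

Going through JSON is exactly what forces you to think about serialization: anything in the state that `json.dumps` rejects needs an explicit schema.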
Corn
And then, if you really want to level up, try building a front-end for that agent in TypeScript. Don't just use a terminal.
Herman
Connect your Python backend to a Next dot J S frontend using a WebSocket. Now you're dealing with real-time streaming state updates across two different languages. You have to manage the "optimistic UI" on the frontend while the agent is "thinking" on the backend. That is the "Full Stack Agent Developer" blueprint for twenty twenty-six.
Corn
It sounds like a lot of work, but honestly, it sounds a lot more rewarding than just staring at a prompt and hoping the LLM does what you want. You actually have levers to pull when things go wrong.
Herman
It's the difference between being a passenger and being the pilot. When you understand the code, you can build guardrails. You can build observability. You can actually see why the agent made a certain decision because you can inspect the state at every step of the graph. You can even build a "time-travel debugger" for your agent.
Corn
A time-travel debugger? Like, "show me what the agent was thinking at step three of the loop"?
Herman
Precisely. Since the state is saved at every node, you can literally rewind the agent's history, change a single value in the state, and see how the agent would have reacted differently. That is impossible with a simple prompt-based system. That is pure engineering power.
Corn
I think that "observability" point is huge. We didn't talk much about it, but if you're building these systems, you need to know when they're failing. You can't just wait for a user to complain.
Herman
That's where decorators come in. You can write a custom decorator in Python that wraps every tool call the agent makes. It can log the input, the output, the latency, and the cost. Suddenly, your "black box" agent becomes a transparent system. You can see, "Oh, the researcher agent is spending sixty percent of our budget on redundant Google searches. Let's optimize that node." You’re debugging the process, not just the output.
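A minimal version of that decorator: wrap every tool call and record input, output, and latency. Appending to a list stands in for whatever tracing backend you actually use:

```python
import functools
import time

LOG = []  # in production, this would go to your tracing backend

def observe(fn):
    # Wrap a tool so every call records its input, output, and latency.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        LOG.append({
            "tool": fn.__name__,
            "args": args,
            "result": result,
            "seconds": time.perf_counter() - start,
        })
        return result
    return wrapper

@observe
def search(query):
    return f"3 results for {query}"

search("langgraph reducers")
print(LOG[0]["tool"], round(LOG[0]["seconds"], 4))
```

Because the decorator is applied at the tool boundary, the agent logic stays untouched while every call becomes inspectable.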
Corn
It's just like any other software optimization problem. You find the bottleneck, you instrument the code, and you fix it.
Herman
It really is. We're just applying twenty years of software engineering wisdom to this brand-new field of agentic AI. The tools are different, but the principles of reliability, scalability, and maintainability are exactly the same.
Corn
So, to recap the "Daniel Roadmap": First, don't just "know" Python—master asyncio, Pydantic, and state management via TypedDict. Understand that a framework like LangGraph is really just a sophisticated state machine. Second, pick up TypeScript for the front-end and full-stack state coordination. Third, keep an eye on Rust for high-performance orchestration and secure code execution via Wasm.
Herman
And most importantly, stop thinking about "prompts" and start thinking about "graphs." The prompt is just one part of the node. The graph is the application. If you master the graph, you master the agent.
Corn
I love that. "The prompt is just the metadata of a node." That's going to be the quote of the episode. It really puts the LLM in its place as a component rather than the whole system. It's a very grounded way to look at something that often feels like magic.
Herman
It's a powerful component, but a component nonetheless. It’s the engine, but you still need the car.
Corn
Well, this has been a deep dive. I feel like we've given Daniel—and anyone else listening who wants to get their hands dirty—a lot to chew on. Before we wrap up, I should probably mention that this whole technical roadmap was brought to you by the folks at Modal. They provide the serverless GPU infrastructure that actually powers the generation of this show.
Herman
And if you're building the kind of high-concurrency, async agent systems we've been talking about, you need the kind of infrastructure that can scale with you. Modal is a great place to run those workloads because it handles the containerization and the scaling automatically.
Corn
Thanks as always to our producer, Hilbert Flumingtop, for keeping the gears turning behind the scenes and making sure our own "agentic" workflows don't crash.
Herman
And thanks to Daniel for the prompt. This was a fun one to geek out on. It's nice to talk about the "how" and not just the "what" for a change.
Corn
If you're finding this roadmap useful, or if you've already started building with LangGraph and have your own "pro tips," we'd love to hear from you. Maybe you disagree with the TypeScript recommendation? Or you think Rust is too much? Tell us why. You can find us at myweirdprompts dot com. We've got the RSS feed there and all the ways to subscribe.
Herman
We're also on Spotify and Apple Podcasts, so if you're enjoying the show, a quick review really helps us out. It helps other developers find this technical content in a sea of "AI hype."
Corn
One last thought before we go... as we move deeper into twenty twenty-six, do you think we'll see a "standard" language for agents emerge, or will it always be this multi-language dance? Is there a "Swift" for agents on the horizon?
Herman
I think the dance is here to stay. Python for the research and the core logic, TypeScript for the user interface, and Rust for the infrastructure. It's the "holy trinity" of modern AI engineering. Each language does one thing perfectly, and agents need all three.
Corn
The holy trinity. I like that. Well, this has been My Weird Prompts. I'm Corn.
Herman
And I'm Herman Poppleberry.
Corn
We'll see you in the next one.
Herman
Goodbye.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.