Daniel sent us this one, and it's something he's been thinking about for a while. He's asking about the history of visual programming languages, where the paradigm actually started, and what the major tools in this space look like across different domains. He mentions that we've touched on ladder logic before in the context of PLCs, and now he's seeing the same basic idea resurface in agentic AI tools like n8n and its equivalents. The question he's really circling is whether visual programming can coexist with code-first development without sacrificing flexibility, especially for automations and workflows. And he's upfront about where he's landed personally: over time he's leaned toward the critical side, feeling like natural language is just a better interface than dragging nodes around a canvas.
That tension is real and it's not new, which is part of what makes this interesting. The frustration Daniel's describing, the power user hitting a wall with a visual tool, that's as old as visual programming itself.
Yet the tools keep coming. Every few years someone relaunches the idea with a shinier interface and calls it a revolution. By the way, today's episode is powered by Claude Sonnet four point six.
Our friendly AI down the road, doing the writing while we do the talking.
Somebody's got to. So the no-code and low-code wave has genuinely brought this debate back into sharp focus, and I think the reason it feels urgent right now is that the stakes are higher. We're not just talking about automating a factory floor or wiring up a scientific instrument. We're talking about people building AI workflows, production pipelines, integration layers between systems that handle real data at scale. The question of whether you can do that seriously with a node graph, that's not a beginner question anymore.
It really isn't. And what I find fascinating is that the core tradeoff has barely moved in fifty years. The accessibility argument versus the flexibility argument. Those two things have been in tension since the first time someone tried to make programming approachable to a non-programmer, and every generation of tooling sort of rediscovers the same friction.
Which is either evidence that the problem is hard, or evidence that nobody's actually solved it yet and we keep pretending they have.
Probably both, honestly.
Let's actually define the thing before we pick it apart. What are we talking about when we say visual programming?
At its core, it's any system where you construct programs by manipulating graphical elements rather than writing text. Nodes, wires, blocks, diagrams, flowcharts. The idea is that the structure of the program becomes visible as a spatial layout, and you reason about logic by looking at a picture rather than reading syntax.
Which sounds appealing until you've spent forty minutes untangling a spaghetti mess of wires that made perfect sense when you started.
The spaghetti problem has a name, actually. People in the field call it "wiring hell," and it's been documented going back to the earliest serious visual environments. But the origins of this are industrial. Ladder logic, which Daniel mentioned, came out of the programmable logic controller world in the early nineteen seventies. The whole point was that electrical engineers already understood relay circuit diagrams, so if you could make the programming language look like a relay diagram, you didn't need to retrain anyone. It was an interface decision, not a theoretical one.
Practical to the point of being almost accidental as a paradigm.
Nobody sat down and said "let's invent visual programming." They said "electricians understand this notation, so let's use it." And that pragmatism is actually baked into why the paradigm keeps resurfacing. Every time there's a new class of user who isn't a programmer but needs to build something, someone reaches for a visual metaphor.
Now that class of user is, apparently, everyone with an AI workflow to automate.
Which is where Node-RED and n8n come in. Node-RED was released by IBM in twenty thirteen, originally for wiring together Internet of Things devices. The node-and-wire metaphor made sense there for the same reason ladder logic made sense on factory floors. But it got picked up far outside IoT, and now you've got tools like n8n doing essentially the same thing for general workflow automation, business integrations, agentic pipelines.
The metaphor traveled further than the original problem.
That's where the history gets interesting, because the metaphor traveling is not the same as the metaphor scaling. Ladder logic worked because the domain was constrained. You had discrete inputs, discrete outputs, relay coil states, contact logic. The visual representation had a one-to-one correspondence with physical hardware. Every rung of the ladder was a real circuit you could trace with your finger.
The picture was actually the thing, not an abstraction of the thing.
And that fidelity is what made it stick. When you're programming a PLC to control a conveyor belt, you have maybe a few dozen I/O points, some timers, some counters. A ladder diagram fits on a screen. The complexity ceiling of the domain and the complexity ceiling of the notation were matched.
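For listeners who've never touched a PLC, a ladder rung really is just boolean logic evaluated once per scan cycle. Here's a minimal sketch of a classic start/stop seal-in rung in Python; the signal names are invented for illustration, not taken from any real PLC program:

```python
# Hypothetical sketch: one ladder rung as a boolean expression in a scan loop.
# Signal names (start_pb, stop_pb, motor) are illustrative only.

def scan_rung(inputs):
    """Evaluate a start/stop seal-in rung: the motor runs when start is
    pressed or the motor is already sealed in, and stop is not pressed."""
    start_pb = inputs["start_pb"]   # normally-open start pushbutton
    stop_pb = inputs["stop_pb"]     # normally-closed stop pushbutton (True = not pressed)
    motor = inputs["motor"]         # previous scan's coil state (the seal-in contact)
    return (start_pb or motor) and stop_pb
```

Every contact on the rung maps to one term of the expression, which is exactly why an electrician could trace it with a finger.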
The trouble being that most interesting problems don't have a matched complexity ceiling.
Which is exactly what LabVIEW ran into. LabVIEW came out of National Instruments in nineteen eighty-six, and the pitch was compelling. Scientists and engineers were running experiments, collecting data from instruments, doing analysis. They weren't software developers, but they needed to automate measurement workflows. So LabVIEW gave them a dataflow visual language where you wire together functional blocks. Data flows through the wires. The diagram is the program.
For a voltage measurement feeding into a Fourier transform feeding into a graph, that's actually a reasonable mental model.
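That measurement pipeline is worth making concrete, because the dataflow idea maps directly onto function composition: each LabVIEW node is a function, each wire is a return value feeding the next call. A rough stdlib-only sketch, with all signal parameters invented and a naive DFT standing in for the FFT node:

```python
import math

# Sketch of the dataflow a LabVIEW diagram expresses. Each "node" is a
# function; each "wire" is a return value. All parameters are invented.

def acquire_voltage(n_samples=200, fs=1000.0):
    """Simulated clean 50 Hz voltage reading (the 'acquisition node')."""
    return [math.sin(2 * math.pi * 50.0 * k / fs) for k in range(n_samples)]

def magnitude_spectrum(signal, fs=1000.0):
    """Naive DFT magnitudes (the 'FFT node'); returns (freq, magnitude) pairs."""
    n = len(signal)
    out = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        im = sum(-x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        out.append((k * fs / n, math.hypot(re, im) * 2 / n))
    return out

# Wire the nodes together: acquisition -> spectrum -> readout.
signal = acquire_voltage()
spectrum = magnitude_spectrum(signal)
peak_freq = max(spectrum, key=lambda p: p[1])[0]   # dominant frequency
```

Three nodes, two wires, one readable diagram. The model is genuinely good at this shape of problem.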
It's a great mental model for that. The problem is that LabVIEW became the tool for everything at a lot of labs, not just simple data pipelines. And once you're building complex state machines, handling error conditions, managing concurrent processes, the diagram stops being readable. You get what LabVIEW programmers call "spaghetti code" in the visual sense, which is arguably worse than textual spaghetti because you can't grep it.
You can't ctrl-F a wire.
And that's a non-trivial limitation. With text-based code, even badly structured code, you have tools. Search, version control diffs that mean something, refactoring support. With a complex LabVIEW diagram, your diff is two screenshots.
Which is either charming or horrifying depending on how much you care about code review.
I spent enough years in medicine reading handwritten charts to know that "visual" does not automatically mean "clear."

Fair point from the retired pediatrician.
The other thing that happened in the nineties that's worth naming is Scratch, though Scratch itself came later, MIT Media Lab released it in two thousand seven. But the conceptual groundwork, the idea of visual programming as a pedagogical tool, that was being laid through the nineties with environments like Logo and HyperCard. The insight was different from LabVIEW's. It wasn't "match the notation to the user's existing mental model." It was "remove syntax as a barrier entirely so beginners can focus on logic."
Which is a different goal. One is about domain experts who aren't programmers, the other is about future programmers who aren't yet.
Scratch succeeded at its goal. Studies have shown it dramatically lowers the barrier to computational thinking for kids. But the design decisions that make it excellent for a ten-year-old learning loops are the same decisions that make it useless for building anything real. There's no file I/O in the traditional sense, no network access, no package ecosystem. The constraints are features in the classroom and blockers everywhere else.
You have two early archetypes. LabVIEW, which targets domain experts and scales badly. Scratch, which targets beginners and doesn't try to scale at all. And both are honest about their tradeoffs in a way that a lot of modern tools aren't.
That's the part that bothers me about the no-code framing, honestly. LabVIEW never claimed you could build enterprise software with it. It claimed you could automate lab instruments. Scratch never claimed to be production-ready. But some of the modern platforms have marketing that implies the flexibility ceiling doesn't exist, or that it's much higher than it actually is.
Power users believe it long enough to build something substantial before they hit the wall.
At which point you're either rewriting in code or you're bolting on escape hatches, and either way the visual layer starts to feel like it was working against you the whole time. So what does hitting that wall actually look like in practice?
The abstract version of this conversation is easy to follow, but the lived experience is where the frustration really comes from.
n8n is a good place to start because Daniel specifically mentioned it and it's become popular for agentic AI work. The basic pitch is solid. You have a canvas, you drag in nodes for different services, Slack, Postgres, an HTTP endpoint, an OpenAI call, you wire them together, and you have a workflow. For a linear pipeline with maybe five or six nodes, it's faster to build visually than it would be to write the equivalent code from scratch.
That's the honeymoon phase.
The honeymoon is real. I don't want to dismiss it. For people who aren't developers, being able to see the data flow between steps, being able to inspect what came out of one node before it goes into the next, that's useful. The visual representation earns its keep at low complexity.
Automation problems rarely stay at low complexity.
They really don't. The moment you need conditional branching that depends on more than one variable, or you need to loop over a collection and do something different based on an item's properties, or you need to handle errors in a way that's specific to a particular node's failure mode, the canvas starts working against you. In n8n, complex branching means a sprawl of nodes that don't have an obvious reading order. You're scanning the canvas spatially trying to reconstruct the control flow in your head.
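To make that concrete: the kind of per-item routing that sprawls into a tangle of IF and Switch nodes plus a loop construct on a canvas stays compact in ordinary code. A hypothetical sketch, with all field names invented:

```python
# Hypothetical sketch: per-item routing on two fields at once, the sort of
# branching that sprawls on a canvas. Field names are invented.

def route_item(item):
    """Decide how to handle one record based on more than one variable."""
    if item.get("priority") == "high" and item.get("retries", 0) < 3:
        return "escalate"
    if item.get("status") == "failed":
        return "retry"
    return "archive"

def process(items):
    """Loop over a collection, grouping items by the action each needs."""
    buckets = {"escalate": [], "retry": [], "archive": []}
    for item in items:
        buckets[route_item(item)].append(item)
    return buckets
```

The reading order here is just top to bottom; on a canvas you'd be reconstructing it spatially.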
Which is the same problem LabVIEW had, just with a different coat of paint.
And what n8n does, to its credit, is offer escape hatches. There's a code node where you can drop into JavaScript. But once you're writing JavaScript inside a node inside a canvas, you have to ask what the canvas is actually doing for you anymore. You've got the overhead of the visual layer and the limitations of an embedded scripting environment that doesn't have access to a proper package ecosystem.
You're paying the cost of both paradigms without getting the full benefit of either.
That's exactly the trap. And Node-RED has the same dynamic, though its history is slightly different because it stayed closer to its original IoT use case for longer. IBM released it in twenty thirteen, originally for connecting hardware devices, sensors, actuators, message brokers. The MQTT integration was first-class, the hardware abstraction was well-designed. It was doing what ladder logic did, matching the notation to a constrained domain where the visual metaphor had real fidelity.
Then the general-purpose use cases crept in.
They always do. Node-RED now gets used for home automation, web scraping, data transformation, things that are technically possible but where the node graph is not the natural shape of the solution. And the community has responded by building enormous libraries of contributed nodes, which is impressive as an ecosystem but creates its own problems. You're now depending on third-party nodes with inconsistent quality, varying levels of maintenance, and no guarantee that two nodes handle errors the same way.
The composability that looked like a feature becomes a liability.
At scale, yes. The thing that makes visual tools feel powerful early on, the ability to snap together pre-built components without understanding their internals, is the same thing that bites you when one of those components behaves unexpectedly and you can't easily inspect what's happening inside it.
Daniel's position, that natural language is a better interface than the canvas, I find it compelling but I want to push on it a bit. Because natural language has its own failure modes in this space.
It does, and I think the honest version of the argument isn't "natural language is better" but "natural language has different tradeoffs." When you describe a workflow in natural language to an AI agent, you get something generated that you then have to verify. The generation step is fast, but the verification step is non-trivial, especially if you're not a developer who can read the output and judge whether it's correct.
You've traded visual ambiguity for textual ambiguity. The spaghetti is just in the generated code now instead of on the canvas.
It's arguably harder to inspect. A node graph, for all its problems, is at least a representation you can navigate spatially. Generated code that you don't fully understand is a black box with a natural language label on it.
Although the counterargument is that generated code can at least be diffed, searched, put into version control properly.
Which is real. And I think that's where the natural language case is strongest, not as a replacement for understanding, but as a way of producing something that lives in the same tooling ecosystem as everything else you build. If I describe a workflow and get Python back, I can lint it, test it, review it, commit it. The artifact lives in a world I know how to work with.
Versus an n8n workflow that lives in a JSON blob that you export and hope you never have to merge with someone else's changes.
Version controlling n8n workflows is painful. The JSON export is technically a diff-able format, but in practice the diffs are unreadable because the node positions and IDs change in ways that have nothing to do with the logic. It's one of those things that sounds like it should be solved and isn't.
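One partial workaround, for what it's worth, is normalizing the export before committing it. This sketch assumes the export is a JSON object with a `nodes` array where each node carries `position` and `id` fields, which is roughly the shape of an n8n export, but treat the schema as an assumption and adapt it to what your version actually emits:

```python
import json

def normalize_workflow(raw_json):
    """Strip layout-only fields so two exports of the same logic diff cleanly.
    Assumes an export shaped like {"nodes": [...], ...}; adapt to the real schema."""
    wf = json.loads(raw_json)
    for node in wf.get("nodes", []):
        node.pop("position", None)  # canvas coordinates: pure layout noise
        node.pop("id", None)        # regenerated ids: noise for diff purposes
    # Sort keys so key-ordering differences don't show up in the diff either.
    return json.dumps(wf, indent=2, sort_keys=True)
```

Run exports through something like this in a pre-commit hook and the diffs at least start to track the logic rather than the layout.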
The resurgence of visual programming in the no-code era has brought back all the old problems plus some new ones specific to the collaboration and maintenance side.
I think that's the underappreciated part of Daniel's critique. It's not just about what you can build, it's about what you can maintain, hand off, debug at two in the morning when something breaks in production. Code-first development has decades of tooling built around those problems. Visual tools are still largely solving the initial build experience and not the ongoing operational experience.
Which is why power users keep bouncing back to code even when they started with a visual tool.
The pattern I've seen is: start visual because it's fast, hit a complexity ceiling, add escape hatches, accumulate technical debt in the escape hatches, eventually rewrite the whole thing properly. And the visual layer becomes something you maintain out of inertia rather than conviction. So what's the alternative? What should builders actually do?
Yeah, that's exactly where I was going—what's the actual advice? Because someone listening to this who's currently building automations has to make a real decision about tooling right now.
The honest answer is that it depends on where you are in the complexity curve, and being realistic about where you're going to end up. If you're connecting three services in a linear pipeline and you don't expect that to grow, a visual tool is a completely defensible choice. The build time is faster, the maintenance burden is low because there's not much to maintain, and the visual representation helps non-technical stakeholders understand what the thing does.
Which is underrated, actually. Showing someone a node graph and saying "this is your workflow" lands differently than showing them a Python script.
Communication value is real. But the moment you're building something that needs to grow, or that other people need to own, or that has to survive contact with edge cases in production, you want to be honest with yourself early about whether the visual layer is serving you or just deferring the complexity.
The hybrid approach, which is what a lot of power users end up at anyway, there's a way to do that intentionally rather than accidentally.
Right, and the intentional version looks different from the accidental version. Accidental hybrid is what we described earlier: you start visual, you bolt on code nodes, you end up with something that's neither. Intentional hybrid means you use the visual layer for what it's good at, orchestration, routing, connecting discrete services, and you push any real logic into code that lives outside the canvas and gets called as a function. The visual tool becomes a thin coordination layer rather than the place where your business logic lives.
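The shape of that intentional hybrid can be sketched in a few lines. Everything here is invented for illustration; the point is only the separation, with the orchestration layer doing nothing but sequencing while each step is an ordinary, testable function:

```python
# Sketch of the "thin coordination layer" idea. The orchestrator only
# sequences steps; logic lives in plain functions. All names are invented.

def fetch_orders(source):
    """Business logic lives here, not inside a canvas node."""
    return [{"id": i, "total": t} for i, t in enumerate(source)]

def flag_large(orders, threshold=100):
    """More logic with a well-defined interface, unit-testable on its own."""
    return [o for o in orders if o["total"] > threshold]

def run_pipeline(source):
    """The coordination layer: nothing here but wiring and sequencing.
    This is the part a visual tool could own without hiding any logic."""
    orders = fetch_orders(source)
    flagged = flag_large(orders)
    return {"seen": len(orders), "flagged": len(flagged)}
```

Whether `run_pipeline` is Python, a Temporal workflow, or a canvas of nodes then matters much less, because nothing important lives in it.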
The canvas is the map, not the territory.
Some tools support this better than others. n8n's code node is a partial answer but it's still embedded inside the canvas. Something like Temporal, or even just a Python script with well-defined interfaces, gives you the logic in a place where you have proper tooling and the visual representation, if you want one, becomes a diagram you generate rather than a diagram you execute.
On the natural language side, I think Daniel's instinct is right directionally but the timing matters. The tools that generate code from natural language descriptions are improving fast, but the verification gap is still real.
The verification gap is the thing I keep coming back to. If you describe a workflow and get working code back, and you can read that code and confirm it does what you intended, natural language has won that round. But if you can't read the output, you've outsourced your understanding along with the implementation, and that debt comes due eventually.
The prerequisite for natural language as a viable interface is still some baseline ability to evaluate the output. Which means it's not yet the thing that replaces coding knowledge, it's the thing that accelerates people who already have it.
For now, yes. The trajectory suggests that verification tooling will improve, that AI-assisted review will get better at catching semantic errors and not just syntax errors. But we're not there yet in a way I'd stake a production system on.
Which is maybe the practical summary: visual tools for bounded, communicable, low-complexity workflows; code-first for anything that needs to scale or be maintained seriously; and natural language as an accelerant for people who already understand what they're building, not as a replacement for that understanding.
That's roughly where I land. And I'd add: be suspicious of any tool that promises to make the ceiling disappear. The ceiling moves, but it doesn't go away.
That's the thing no vendor will put in their marketing copy.
Definitely not on the landing page.
The question I want to leave open, because I don't think we have a clean answer, is whether the next generation of visual tools actually learns from this history or just repeats it with better aesthetics. Because every wave of visual programming has launched with the premise that this time the abstraction is right, this time the ceiling is high enough. Ladder logic was right for PLCs. LabVIEW was right for instrumentation. n8n is right for linear workflows. And each time, the ceiling turns out to be lower than advertised when the complexity arrives.
The optimistic read is that natural language interfaces change the equation, not because they eliminate complexity but because they shift where it lives. If the hard part of visual programming is that the notation doesn't scale, and natural language can generate notation that does scale, you've maybe broken the cycle. But only if the verification problem gets solved alongside it.
Which is a meaningful if.
I'm uncertain about the timeline on that. The generation side has moved faster than I expected. The verification side is harder because it requires the tool to have a model of your intent that's precise enough to check its own output against, and that's a different problem.
Daniel's framing, that natural language is a better interface than dragging nodes on a canvas, I think he's right about the direction. I'd just say: the interface is only as good as what you do with the output.
And I think that's where the history of visual programming is actually instructive, not as a cautionary tale exactly, but as evidence that the interface is never the whole answer. The people who built useful things with LabVIEW understood what they were building. The people who built useful things with ladder logic understood the hardware. The interface lowered the barrier, it didn't eliminate the need for understanding.
The tool meets you where you are. It doesn't take you further than you can go.
That's the honest pitch for all of them.
Big thanks to Hilbert Flumingtop for keeping the whole operation running, and to Modal for the serverless GPU infrastructure that powers the pipeline behind this show. This has been My Weird Prompts. If you've been enjoying the show, a review wherever you listen goes a long way. We'll see you next time.