#2012: Pixels vs Protocols: The Computer Use Showdown

Is visual AI a bridge or the future? We debate the efficiency and longevity of "Computer Use" agents versus API-first automation.

Episode Details
Episode ID
MWP-2168
Published
Duration
27:44
Audio
Direct link
Pipeline
V5
TTS Engine
chatterbox-regular
Script Writing Agent
Gemini 3 Flash

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The debate over how AI agents should interact with computers is heating up, centering on a fundamental architectural choice: should agents "see" the screen like a human, or communicate directly with software through structured protocols? This discussion highlights a tension between immediate visual demonstration and long-term efficiency.

The "Visual-First" approach, exemplified by Anthropic's Computer Use demo, involves an agent taking screenshots, analyzing them with a vision model, and simulating mouse clicks at specific coordinates. This method is human-centric and provides a visceral "wow" factor during keynotes, as a cursor visibly navigates a website to book a flight or fill a form. However, this approach comes with significant drawbacks. It is computationally expensive, requiring high token usage to process images repeatedly, and can be brittle; a shifted layout or an unexpected pop-up can cause the agent to fail. The process is an iterative loop of "See, Think, Act," which can be slow and resource-intensive, especially on pages with loading spinners or dynamic content.
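The "See, Think, Act" loop can be sketched in a few lines. Everything below is a stand-in: the screenshot, vision-model, and input functions are injected placeholders, not any vendor's actual API.

```python
# Minimal sketch of the iterative "See, Think, Act" loop. The three
# callables are hypothetical stand-ins: a real agent would wire in a
# screen-capture library, a vision-language model API, and an OS input API.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "click", "type", "done"
    x: int = 0         # screen coordinates guessed by the vision model
    y: int = 0
    text: str = ""

def run_visual_agent(take_screenshot, vision_model, execute, max_steps=20):
    """Iterate See -> Think -> Act until the model signals completion."""
    history = []
    for _ in range(max_steps):
        png = take_screenshot()              # See: capture the current screen
        action = vision_model(png, history)  # Think: model picks an action
        history.append(action)
        if action.kind == "done":
            return history
        execute(action)                      # Act: simulate mouse/keyboard
    return history                           # step budget exhausted
```

Every pass through the loop pays for another screenshot, which is exactly why slow-loading pages and loading spinners make this architecture expensive.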

In contrast, the "Protocol-First" approach utilizes direct API calls, DOM interaction, or tools like the Model Context Protocol (MCP). An agent using this method bypasses the visual layer entirely, sending structured data packets to a service or interacting directly with the underlying code of a webpage. This is far more efficient, stable, and cost-effective, as it doesn't rely on guessing coordinates from a picture. Agents like Claude Code or those using frameworks like Playwright can execute tasks with precision, unaffected by visual changes on the screen.
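A toy illustration of why structural access is more robust than coordinates, using only Python's standard-library HTML parser (a real protocol-first agent would use Playwright locators or a direct API; the markup here is invented for the example):

```python
# An element found by its id attribute is immune to layout shifts,
# which is the core advantage of DOM interaction over pixel guessing.
# Stdlib-only sketch; the page markup below is made up.
from html.parser import HTMLParser

class ElementFinder(HTMLParser):
    def __init__(self, target_id):
        super().__init__()
        self.target_id = target_id
        self.found = None  # tag name of the matching element, if any

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("id") == self.target_id:
            self.found = tag

def find_by_id(html, element_id):
    finder = ElementFinder(element_id)
    finder.feed(html)
    return finder.found

page = '<div style="margin-left: 250px"><button id="submit-button">Go</button></div>'
assert find_by_id(page, "submit-button") == "button"  # a CSS change can't break this
```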

A key argument for the visual approach is its utility for legacy systems. Much of the world's critical software—such as old SAP installations, government portals, or specialized industrial tools—lacks modern APIs. For these "black box" interfaces, a visual agent that can "see" the screen is the only way to automate interaction, acting as a universal bridge where no protocol exists. Privacy is another factor; on-device visual agents, like those Apple is exploring, can operate without requiring deep backend integration between third-party apps, using screen awareness to cross-reference information securely.

The conversation suggests a hybrid model is the most likely winner. Agents would start with the most efficient method—direct API calls or CLI commands—and only resort to visual interaction when faced with a true UI "black box." This mirrors the evolution of technologies like Optical Character Recognition (OCR), which transitioned from a standalone software category to a built-in capability across various applications. Similarly, "Computer Use" may eventually become just another tool in the agent's belt rather than a distinct product. Current market efforts, like Writer's Action Agent, focus on reliability and precision for enterprise tasks, moving away from pure "screenshot-and-click" models. Ultimately, while visual agents serve a purpose for testing, accessibility, and legacy integration, the future of efficient automation likely lies in a protocol-driven world, with visual interaction as a temporary bridge or a specialized fallback.
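The fallback ordering described above can be sketched as a simple router. The three channel functions are hypothetical placeholders for an API client, a DOM automation layer, and a visual agent; a channel signals "I can't handle this target" by raising, and the router moves on to the next, more expensive option.

```python
# Hedged sketch of the hybrid "protocol first, pixels last" routing.
# Channels are tried from cheapest/most reliable to most expensive;
# each is an injected callable, not a real client.
def run_task(task, api_call, dom_automation, visual_agent):
    """Try each channel in order until one can handle the task."""
    for channel in (api_call, dom_automation, visual_agent):
        try:
            return channel(task)
        except NotImplementedError:  # channel unavailable for this target
            continue
    raise RuntimeError(f"no channel could handle {task!r}")
```

Under this design, the vision model is only invoked for true UI "black boxes," so the expensive screenshot loop becomes the exception rather than the default.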


#2012: Pixels vs Protocols: The Computer Use Showdown

Corn
I was watching that Anthropic Computer Use demo again this morning, you know the one where the agent books a flight by clicking through the website like a person would, and it hit me. Why are we doing this? I mean, I have Claude Code sitting in my terminal, and with the right Model Context Protocol tools, it can basically do the same thing via API calls or a headless browser without ever needing to "see" a pixel.
Herman
It’s a classic architectural fork in the road, Corn. Herman Poppleberry here, and I have been diving deep into the white papers on this exact tension. Today's prompt from Daniel really nails the question: is "Computer Use" a distinct product category that’s here to stay, or is it just a high-latency, expensive bridge to a world where everything is an API? We’ve got this explosion of dedicated agents like OpenAI Operator, Google Mariner, and Microsoft UFO, but then you’ve got the "absorbers" like Claude Code and Open Interpreter that treat visual interaction as a last resort.
Corn
It feels a bit like watching someone use a robot arm to type on a keyboard instead of just plugging in a USB cable. It’s impressive to watch, but you have to wonder about the efficiency. By the way, for those keeping track at home, today’s episode is powered by Google Gemini Three Flash. It’s writing the script while we provide the brotherly wisdom and the occasional sloth-based snark.
Herman
The robot arm analogy is actually closer to the truth than most people realize. When you look at the "Visual-First" camp—think Anthropic Computer Use or Adept’s ACT-One—the model is literally looking at screenshots. It’s taking a PNG, running it through a vision encoder, identifying the X and Y coordinates of a button, and then simulating a mouse click. It’s human-centric by design. But then you have the "Protocol-First" camp, where the agent is looking at the DOM tree, the HTML, or hitting a REST API directly.
Corn
Right, and the cost difference has to be astronomical. Processing a high-resolution screenshot every few seconds just to find a "Submit" button? That’s a lot of tokens. I saw some data suggesting that running a task via a visual agent can be four to ten times more expensive than using a CLI-based agent. Has the "cool factor" of seeing a cursor move across the screen blinded us to the fact that it’s a total resource hog?
Herman
It’s definitely a factor. If you’re Anthropic or OpenAI, you want that "wow" moment in the keynote. Seeing an AI navigate a website like a human is visceral. It feels like the future. But from an engineering perspective, it’s incredibly brittle. If a pop-up ad appears or the CSS layout shifts by ten pixels, a coordinate-based visual agent can trip over its own shoelaces. A tool-equipped agent using something like Playwright or a dedicated MCP server for Google Sheets doesn’t care what the UI looks like. It’s talking to the data.
Corn
So why are we seeing so much investment in it? Google has Project Mariner, which is supposedly this Chrome-first agent for enterprise workflows. Microsoft has UFO, specifically for navigating the labyrinth of Windows and Office Three-Sixty-Five. Are they just hedging their bets, or is there a "Legacy Long Tail" that I’m underestimating?
Herman
That legacy tail is more like a legacy whale, Corn. Think about the software that actually runs the world. It’s not all pretty React apps with clean APIs. It’s SAP installations from the late nineties, proprietary CAD tools, government portals that haven't been updated since the Bush administration, and specialized industrial software. Those systems will never have a modern API. If you want to automate them, you either hire a thousand humans or you build an agent that can "see" the screen.
Corn
Okay, I’ll give you the legacy systems. But even then, isn’t "Computer Use" just a feature? Like, why is "OpenAI Operator" a whole separate thing? If I’m using a general-purpose assistant, I just want it to be able to use the computer when it needs to. I don’t want to switch to my "Computer Use Agent" like I’m switching from a screwdriver to a jackhammer.
Herman
That’s the "OCR" argument Daniel mentioned. Remember when Optical Character Recognition was a standalone software category? You’d buy a specific program just to turn a scanned image into text. Now, OCR is just a library. It’s built into your camera app, your PDF reader, your note-taking app. It stopped being a product and became a capability. I suspect we’re seeing the same thing with "Computer Use." Right now, it’s a standalone demo because the latency and the model requirements are so high it needs its own dedicated architecture. But in eighteen months? It’s just another tool in the belt.
Corn
Let’s get into the weeds on the architecture for a second, because I think that’s where the "Aha!" moment is. If I’m using Anthropic’s Computer Use, what is actually happening under the hood compared to, say, me using Claude Code with a browser tool?
Herman
The visual-first approach is an iterative feedback loop of "See, Think, Act." Step one: Take a screenshot. Step two: Feed that screenshot into a vision-language model. Step three: The model outputs a JSON object with a command like "move_mouse" to coordinates five-hundred, six-hundred. Step four: The system executes that click. Step five: Take another screenshot to see if the click worked. If you’re booking a flight, that might be fifteen or twenty cycles of screenshots.
Corn
And if it’s a slow-loading site, the agent is just sitting there burning tokens on screenshots of a loading spinner?
Herman
Precisely. Well, not "precisely," I’m not allowed to say that. But you’re on the money. Contrast that with an API-first agent using something like the Model Context Protocol. If the agent needs to add a row to a spreadsheet, it sends a structured data packet directly to the service. One call, one response. Zero pixels processed. Even when it uses a browser, an agent like OpenAI Operator—which is kind of a hybrid—can often bypass the visual rendering and just interact with the accessibility tree or the DOM. It’s much more stable because it’s not guessing where the button is based on a picture; it knows exactly where the button is in the code.
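The single structured call Herman describes can be shaped as an MCP request. MCP is JSON-RPC 2.0 under the hood; the "add_row" tool name and its arguments here are hypothetical, standing in for whatever a real spreadsheet MCP server would expose.

```python
# One structured call replaces a dozen screenshot cycles. This builds an
# MCP-style "tools/call" request (JSON-RPC 2.0); the tool name and
# arguments are illustrative, not a real server's schema.
import json

def build_tool_call(tool_name, arguments, request_id=1):
    """Serialize a tools/call request for an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

request = build_tool_call("add_row", {"sheet": "Q3 Expenses", "values": [42, "taxi"]})
```

One call, one response, zero pixels processed, as the hosts put it.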
Corn
It sounds like the visual-first agents are essentially "RPA plus." Robotic Process Automation has been around for years, but it was always so rigid. You’d record a macro, and if the screen resolution changed, the whole thing broke. Now, we’re just using LLMs to make those macros "smart" enough to handle a bit of visual noise. But it still feels like a workaround for a lack of connectivity.
Herman
It’s a bridge. But here’s where the visual side genuinely wins, and I think this is why Apple is leaning so hard into it with their on-device agents. Privacy. If you’re Apple, you have all these apps on a user’s iPhone that don’t necessarily want to share their full database via an API for security reasons. But the agent has "screen awareness." It can see what you’re looking at in messages, cross-reference it with a calendar app, and offer a suggestion without needing a deep backend integration between those two third-party developers.
Corn
That’s a fair point. If the "protocol" doesn’t exist because of a "moat" or a privacy wall, the "pixel" is the only universal interface we have left. It’s the lowest common denominator. But man, it’s a messy one. I was looking at some of the open-source projects like SkyPilot and Open Interpreter. They seem to be taking the "CLI-First" approach where they try to execute a Python script or a shell command to do the task, and they only "spawn" a visual window if they hit a wall. That feels like the more elegant path.
Herman
It’s the "Hybrid" model, and I think that’s the winner. You start with the most efficient, structured method. If you can use a curl command or a SQL query, you do that. If you need to interact with a web service, you use a headless browser and act on the HTML. Only when you’re faced with a literal "black box" UI—like a Flash app from two thousand and five or a heavy desktop client—do you fire up the vision model and start clicking coordinates.
Corn
So, looking at the market right now, who’s actually winning the "efficiency" battle? I saw that Writer’s Action Agent recently took the lead on the GAIA benchmark—the General AI Assistants benchmark. From what I understand, they’re focusing more on "doing" tasks in real-world environments rather than just being a chatty assistant.
Herman
Writer is an interesting case because they are very focused on the enterprise. In a corporate environment, reliability is everything. You can’t have an agent that "hallucinates" a button and clicks on a "Delete All" icon by mistake. Their approach, and the approach of companies like MultiOn, is to create a "programmable agent substrate." Instead of just a chatbot that can use a computer, the agent is the interface. You tell it the goal, and it navigates the browser layers with a level of precision that pure "screenshot-and-click" models struggle with.
Corn
It’s funny you mention hallucinating buttons. I saw a clip of an early computer use agent trying to fill out a form, and it got stuck in a loop because it kept clicking a "Cancel" button that looked like a "Submit" button to its vision encoder. It spent three minutes and probably ten dollars in tokens just canceling its own work. If it had been looking at the HTML, it would have seen the id="submit-button" tag and been done in a millisecond.
Herman
That’s the core of the "Pixel vs. Protocol" debate. Pixels are deceptive. Protocols are definitive. But the world is built for humans, and humans use pixels. I think we’re going to see a split in the "Computer Use" category. One side will be specialized for QA testing and accessibility. If you’re a developer at a company like Google or Meta, you want an agent that uses the pixels, because you need to know if the UI is actually usable for a human. You want to know if the button is too small or if the color contrast is off.
Corn
Right, so "Computer Use" as a testing tool makes perfect sense. It’s an automated "User Acceptance Tester" that never sleeps and doesn't complain about the coffee. But for "productivity" or "automation," it just feels like we’re taking the long way home. If I want to automate my expense reports, I don’t want an AI to open a browser, log in, find the "Upload" button, and wait for the file picker. I want it to hit the SAP API and be done with it.
Herman
And that’s where things like the Model Context Protocol, or MCP, come in. Anthropic actually released that alongside their computer use capability, which is a bit of a mixed signal. On one hand, they’re saying "look, we can click buttons!" and on the other hand, they’re giving us a standard protocol to bypass the buttons entirely. It’s like they’re providing the horse and the car at the same time.
Corn
Maybe they know the horse is temporary. It gets you through the muddy parts where there are no roads yet—the legacy software—but they’re building the infrastructure for the car. I wonder if we’ll look back on "OpenAI Operator" as the "Netscape" of agents. It’s the first big, flashy window into this world, but the real value ends up being the underlying protocols that let everything talk to each other without the visual overhead.
Herman
There’s also the "Agentic CLI" movement. We talked about this a bit in the context of projects like Ollama or the rise of "harnesses." If you have a powerful enough model running in a terminal, and it has "sudo" access to a sandboxed environment, it can theoretically build its own tools. If it encounters a website it needs to use, it can write a Playwright script on the fly to scrape it. That model doesn't need "Computer Use" as a built-in feature; it just needs the ability to write and execute code.
Corn
This brings us back to Claude Code. It’s a terminal-first agent. It’s fast, it’s lean, and it feels like it’s built for "work." When I use it, I don’t feel like I’m watching a demo; I feel like I’m using a tool. If I tell it to go find something on the web, it doesn't show me a video of it scrolling through Google; it just returns the data. I think there’s a psychological component here too. As users, do we want to watch the AI use the computer, or do we just want the result?
Herman
Early on, we want to watch. It builds trust. If you see the agent clicking the right things, you feel like it knows what it’s doing. It’s the same reason early self-driving car visualizations showed you every single car and pedestrian the sensors were tracking. It was a "trust me, I see everything" display. Now, as the tech matures, those visualizations are getting simpler. We just want to get to the destination. I think "Computer Use" is in that "High-Trust Visualization" phase.
Corn
"I see the button, Corn! Look at me click the button!" Yeah, I get it. It’s the "look ma, no hands" phase of AI. But once we’re over the novelty, the latency is going to start grating on people. If I can do a task in three seconds with an API-first agent versus thirty seconds with a visual agent, I’m picking the three-second option every time. Efficiency eventually wins.
Herman
It always does. But let’s play devil’s advocate for the "Visual-First" camp. What about "Cross-App Orchestration"? Let's say I want to take a chart from an old legacy Excel file, paste it into a specialized medical imaging software, and then drag the result into a proprietary secure messaging app. None of those have APIs. None of them talk to each other. A "Computer Use" agent is the only thing that can sit in the middle and act as the "universal glue."
Corn
That’s the "Digital Janitor" use case. It’s cleaning up the mess left behind by decades of incompatible software. And I agree, for that, you need the pixels. But is that a "category" or is it just "RPA for the LLM era"? We used to call this "screen scraping." Now we call it "Computer Use." The branding is definitely better, but the underlying problem is the same: the software we built doesn't talk to other software.
Herman
It’s definitely "RPA for the LLM era," but with one massive difference. Traditional RPA was brittle because it didn't understand intent. It just knew "Click pixel X, Y." If the UI changed, it failed. An LLM-based agent understands that it’s looking for a "Search" bar. It doesn't matter if the search bar moves from the top left to the top right; the vision model will still find it. That "semantic resilience" is what makes "Computer Use" viable in a way that old-school automation never was.
Corn
So it’s a "Better RPA," but is it a "General Purpose Agent"? That’s the pivot point. If I’m OpenAI, am I building "Operator" to be a specialized tool for enterprise automation, or am I building it to be the way everyone interacts with their computer in twenty twenty-seven? Because if it’s the latter, that’s a huge bet on the GUI remaining the primary interface for computing.
Herman
And that’s a risky bet. If agents become the primary way we interact with software, then software will start being built for agents. Why would a developer spend thousands of hours polishing a GUI for a human if ninety percent of the "users" are AI agents? You’d be better off exposing a clean, high-speed API and letting the agent handle the "UI" for the human in whatever way the human prefers—voice, text, or a custom dashboard.
Corn
"Headless software for a headless world." I like it. It’s almost recursive. We’re building "Computer Use" agents to solve the problem of software being hard for AIs to use, which will eventually lead to software being built so AIs don't have to use the computer like a human. It feels like the visual-first approach is an evolutionary dead end that’s necessary to get us to the next stage.
Herman
It’s the "Skeuomorphism" of AI. Remember when the first iPhones had leather-textured calendars and glossy buttons that looked like real physical objects? It helped people transition from the physical world to the digital world. "Computer Use" is skeuomorphic AI. It’s the AI "pretending" to be a human using a mouse and keyboard because that’s the world we’ve built. Once we get comfortable with agents, we’ll strip away the "human-like" interaction and move to pure data exchange.
Corn
That’s a great way to put it. It’s the "fake leather stitching" of the agentic era. So, let’s look at the players again with that lens. Anthropic is showing off the stitching. Google Mariner is showing off the stitching. But Claude Code is basically a command-line interface that says, "I don't need the leather, just give me the data."
Herman
And then you have Microsoft. They’re in a unique position because they own the OS. They don’t need to "see" the screen via a screenshot; they have direct access to the Windows accessibility tree. They can see every object, every label, and every state change in the OS without ever processing a pixel. Their "UFO" project—UI-Focused Agent—is actually very clever because it sits somewhere in the middle. It’s "seeing" the UI, but it’s doing it through the OS’s own internal metadata.
Corn
That feels like the "home field advantage." If Apple does the same with their on-device agents, they aren’t really "Computer Use" agents in the Anthropic sense; they’re "OS-Integrated Agents." They’re not looking at a picture of the screen; they’re looking at the actual code that’s rendering the screen. That’s infinitely more reliable and faster.
Herman
I mean... I didn't say it. I didn't say the word. But you’re right. The "third-party" agents like Anthropic and OpenAI are at a massive disadvantage because they are looking from the outside in. They have to use the "pixels" because they don’t have access to the "pipes." If you’re Apple or Microsoft, you just use the pipes.
Corn
This actually makes the "Computer Use" category look even more like a transitional phase for third-party developers. If you don't own the OS, you’re forced to use the GUI. But if the OS owners build their own agents, the third-party "pixel-pushers" are going to have a hard time competing on speed or reliability. Why would I use "OpenAI Operator" to manage my Windows settings when "Windows Intelligence" can do it natively without the overhead of screenshots?
Herman
It’s the "Sherlocking" of the agent world. For those who don't know the term, it's when Apple sees a popular third-party app and just builds that feature into the OS, effectively killing the third-party market. "Computer Use" as a standalone product category is ripe for Sherlocking. The only way it survives is if it becomes a "Universal Bridge" that works across every OS and every cloud, which is what I think OpenAI is aiming for with AgentKit.
Corn
"One agent to rule them all." It’s an ambitious goal, but man, the technical debt of trying to be a "universal pixel-pusher" across Windows, Mac, Linux, iOS, and Android? That sounds like a nightmare to maintain. Every time there’s a minor OS update that changes a button style, your whole agent fleet could go blind.
Herman
Which brings us back to the Model Context Protocol. If we can agree on a way for software to describe its capabilities to an agent—"Here is my 'Add Row' tool, here is my 'Search' tool"—then we don't need the agent to be a visual genius. We just need it to be a good coordinator. I think that’s the real long-term play. We’re moving from "Visual Interaction" to "Agent-to-Software Protocols."
Corn
So, to Daniel’s question: what is the actual remaining use case for pixel-level GUI interaction? We’ve got legacy software, we’ve got QA testing, and we’ve got "Digital Janitorial" work. Is that enough to sustain a multi-billion dollar product category? Or are we just watching a very expensive fireworks show before the real work moves into the background?
Herman
I think it’s a feature, not a product. We will look back at "Dedicated Computer Use Agents" the same way we look at dedicated GPS devices. Remember when you’d have a Garmin or a TomTom stuck to your windshield? It was a revolution! It was a separate category of electronics. Now? It’s just a chip in your phone and a map app. The capability of GPS is more important than ever, but the product category is dead.
Corn
That’s a perfect analogy. "Computer Use" is the Garmin phase. It’s useful, it’s impressive, but it’s inherently redundant once the "main device" learns how to do it. I can see a future where every LLM has a "vision-action" module as standard, but you’d never buy a standalone "Computer Use Agent" any more than you’d buy a standalone "Sentence Writing Agent."
Herman
And for the developers listening, the takeaway here is clear: don't build your automation on pixels if you can avoid it. If there’s an API, use it. If there’s an MCP server, use it. If you can use a headless browser with direct DOM access, do that. Reserve the "Computer Use" visual layer for the absolute last resort—the "break glass in case of legacy software" scenario.
Corn
It’s the "Visual Fallback" model. Start with the protocol, end with the pixel. I think that’s the most robust architecture for anyone building in this space right now. Don't get distracted by the flashy demos of cursors moving around. Look at the latency, look at the token cost, and look at the reliability.
Herman
What's also interesting is the "User in the Loop" aspect. One thing visual agents are good at is keeping the user informed. If the agent is clicking through a website, I can see what it's doing. If it's just hitting an API in the background, it’s a "black box" to me. For high-stakes tasks—like moving money or changing medical records—maybe that "visual audit trail" is actually a feature, not a bug?
Corn
That’s the "Psychological Safety" factor. I’d rather see the AI log into my bank and show me the screen than just have it say "Done! I moved ten thousand dollars to your cousin." But again, is that "Computer Use" or is that just a "Visual Log"? You could have an API-first agent that generates a visual representation of what it’s doing without actually needing to use the pixels to perform the action.
Herman
A "Synthetic Audit Trail." Now that’s an interesting concept. The agent does the work via high-speed API, but it "replays" the action for you in a GUI so you can verify it. That gives you the speed of the protocol and the trust of the pixel.
Corn
It’s the best of both worlds. And honestly, it’s probably where we’ll end up. The "Computer Use" agents of today are teaching us what humans find trustworthy and what tasks are actually possible to automate. Once we’ve mapped out the territory, we’ll build a much more efficient highway over it.
Herman
I’m curious to see how the "Open Source" side of this plays out. Projects like "Open Interpreter" are so much more flexible because they aren't tied to a specific vendor’s vision model. They can swap in a lean, fast model for the CLI stuff and only call the "Big Vision" model when they need to see something. That "dynamic routing" of tasks seems much more sustainable than a "vision-only" approach.
Corn
It’s about using the right tool for the job. You don't use a microscope to read a billboard, and you shouldn't use a multi-billion parameter vision model to find a "Save" button that’s always in the same place. I think the "Universal Operator" dream is a bit of a "General AI" fantasy—the idea that one model should do everything the way a human does. But computers aren't humans, and they shouldn't have to pretend to be to be useful.
Herman
That’s the "Anthropomorphic Trap." We keep trying to make AI act like us because that’s the only model of intelligence we have. But the "Computer Use" that actually matters is the one that's optimized for the machine's strengths—speed, precision, and parallel processing—not the one that mimics our weaknesses, like manual clicking and visual scanning.
Corn
Well, I for one am ready for the "Post-Pixel" era. I want my agent to be a silent, invisible force that just gets things done. If I never have to see another "Booking a Flight" demo again, it’ll be too soon. Let’s move to the "API-first" world and leave the "screenshot-and-click" to the legacy janitors.
Herman
I think that’s a solid landing spot. The category is transitional. The capability is permanent. We’re watching the birth of a new "standard library" for AI, but the "standalone product" will likely fade into the background.
Corn
So, practical takeaways for the folks at home. If you’re a developer, look into MCP—the Model Context Protocol. It’s the closest thing we have to a "universal translator" between agents and software right now. If you can build an MCP tool for your app, you’re future-proofing it for a world where agents are the primary users.
Herman
And if you’re a business leader looking at "Agentic Workforces," don't just buy the splashy "Computer Use" demo. Ask about the "API Fallback" strategy. Ask about the cost per task. A "vision-first" agent might look great in a pilot, but it’ll eat your budget alive once you scale it to a thousand seats. Look for the "Hybrid" providers who prioritize efficiency over optics.
Corn
And for everyone else, just enjoy the ride. It’s a wild time to be using a computer, whether you’re doing the clicking or your AI is. Just maybe don't give your agent your primary credit card without some very strict "human-in-the-loop" guardrails. I don't care how good the "Computer Use" is; I’m not letting a model "see" my bank account without a chaperone.
Herman
Wise words from the sloth. Speaking of chaperones, we should probably wrap this up before we spend our entire token budget on this one conversation.
Corn
Good call. This has been a deep one. I feel like we’ve poked enough holes in the "Visual-First" hype to see the light on the other side. It’s not that it’s "bad" tech; it’s just "transitional" tech. It’s the bridge, not the destination.
Herman
The bridge to the future! This has been Episode twenty-twelve of My Weird Prompts. I’m Herman Poppleberry.
Corn
And I’m Corn. Big thanks to our producer, Hilbert Flumingtop, for keeping the wheels on this operation. And of course, a massive thanks to Modal for providing the GPU credits that power the generation of this show. They’re the real MVPs in the background.
Herman
If you enjoyed this deep dive into the "Pixel vs. Protocol" debate, do us a favor and leave a review on your favorite podcast app. It really helps us reach more people who are obsessed with the future of AI.
Corn
You can find us at myweirdprompts dot com for the full archive and all the ways to subscribe. We’ll be back next time with whatever weirdness Daniel sends our way.
Herman
Until then, keep your APIs clean and your pixels sharp.
Corn
Or just use the terminal. It’s faster. See ya!

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.