#2173: Inside MiroFish's Agent Simulation Architecture

MiroFish generates thousands of AI agents with distinct personalities to predict social dynamics. But research reveals a critical flaw: LLM agents ...

Episode Details

Episode ID: MWP-2331
Published:
Duration: 26:15
Pipeline: V5
TTS Engine: chatterbox-regular
Script Writing Agent: claude-sonnet-4-6

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

How MiroFish Works: Five Stages of Simulated Reality

MiroFish has become one of GitHub's fastest-rising projects by tackling an ambitious problem: can you simulate thousands of AI agents interacting in realistic social environments to predict how real-world events will unfold? The system hit 54,000 stars and topped trending on March 7, driven by genuine technical innovation—but also by significant hype that obscures real limitations.

The architecture breaks into five distinct stages, each building on the last.

Stage One: Building the Knowledge Graph

Everything starts with seed material—a document, policy draft, news article, or even a historical novel. MiroFish uses GraphRAG to extract entities (people, organizations, events, concepts) and build a structured knowledge graph of relationships between them. Unlike standard retrieval-augmented generation, which just finds semantically similar text chunks, GraphRAG creates a queryable network. An agent can traverse paths: this person works for that organization, which lobbied for this policy, which affects this demographic. The graph gets stored as JSON and remains immutable throughout the simulation, grounding all agent behavior in a shared, structured reality.
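To make the traversal idea concrete, here is a minimal breadth-first walk over a toy relationship graph. The node and edge schema below is hypothetical, invented for illustration; it is not MiroFish's actual graph.json format.

```python
from collections import deque

# Toy knowledge graph: typed edges between extracted entities.
graph = {
    "edges": [
        ("alice", "works_for", "acme_corp"),
        ("acme_corp", "lobbied_for", "policy_x"),
        ("policy_x", "affects", "renters"),
    ],
}

def find_path(graph, start, goal):
    """BFS over typed edges, returning the chain of relationships."""
    adj = {}
    for src, rel, dst in graph["edges"]:
        adj.setdefault(src, []).append((rel, dst))
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, dst in adj.get(node, []):
            if dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [(node, rel, dst)]))
    return None

path = find_path(graph, "alice", "renters")
```

A standard RAG system could retrieve the three source sentences independently; only the graph structure lets an agent follow the full chain from person to affected demographic in one query.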

Stage Two: Generating Personas with Persistent Memory

Each agent receives a comprehensive profile: MBTI personality type, age, demographic background, professional expertise, behavioral tendencies. The system also injects two memory layers—individual memory (agent-specific experiences) and collective memory (shared cultural context from the knowledge graph). The environment agent defines interaction rules, spatial constraints, and temporal dynamics. Everything gets serialized to JSON before the simulation begins.
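A minimal sketch of that serialization step, with field names invented for illustration rather than taken from the project's actual config format:

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical persona schema -- field names are illustrative only.
@dataclass
class Persona:
    name: str
    mbti: str
    age: int
    background: str
    expertise: str
    individual_memory: list = field(default_factory=list)  # agent-specific experiences
    collective_memory: list = field(default_factory=list)  # shared context from the graph

agent = Persona(
    name="agent_0042", mbti="ENTP", age=34,
    background="urban renter", expertise="public health",
    collective_memory=["policy_x proposed on day 0"],
)

serialized = json.dumps(asdict(agent))  # frozen to JSON before the run begins
restored = json.loads(serialized)
```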

This is where a critical assumption enters: that LLM agents can reliably maintain distinct personalities across dozens or hundreds of interaction cycles. Research suggests they cannot.

Stage Three: The OASIS Simulation Engine

MiroFish runs on OASIS, a multi-agent social interaction framework from CAMEL-AI published in November 2024. The system has five core components: an environment server tracking all posts, profiles, and relationships; a recommendation system deciding what content each agent sees; an agent module where each AI user reasons and acts; a scalable inferencer handling computational load; and a time engine giving agents realistic 24-hour activity patterns.

MiroFish runs two environments simultaneously—a Twitter-like platform driven by follow relationships and recommendations, and a Reddit-like platform driven by upvotes, downvotes, and post age. Agents can take 23 distinct actions, including creating posts, commenting, following, muting, reporting, and crucially, doing nothing.
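The Reddit-like ranking can be illustrated with Reddit's classic hot-score formula, which trades log-scaled net votes against post age. Whether OASIS uses exactly this variant is an assumption; the general shape (log-votes plus an age term) is the point.

```python
from math import log10
from datetime import datetime, timezone

def hot(ups: int, downs: int, posted_epoch: float) -> float:
    """Reddit's classic hot ranking: log-scaled score plus an age bonus."""
    score = ups - downs
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = posted_epoch - 1134028003  # Reddit's fixed epoch offset
    return round(sign * order + seconds / 45000, 7)

now = datetime.now(timezone.utc).timestamp()
fresh_modest = hot(ups=20, downs=5, posted_epoch=now)
old_popular = hot(ups=2000, downs=100, posted_epoch=now - 86400 * 2)
```

Because the age term dominates, a fresh post with 15 net votes outranks a two-day-old post with 1,900. That decay dynamic, not follow graphs, is what shapes agent attention in the Reddit-like environment.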

Memory during simulation is managed by Zep Cloud, which maintains a temporal knowledge graph of each agent's interactions with sub-100-millisecond retrieval latency. This solves a tractability problem: you can't append every agent's full history to their context window. You need a managed memory layer that surfaces relevant past interactions without exploding token budgets.
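The tractability idea can be sketched without Zep's actual API (which is richer, using temporal graphs and embeddings). This toy store just ranks past interactions by keyword overlap with the current query and returns the top few instead of the full history.

```python
# Toy memory layer -- NOT Zep's API; illustrates top-k retrieval only.
class MemoryStore:
    def __init__(self):
        self.events = []  # (step, text) pairs

    def add(self, step: int, text: str):
        self.events.append((step, text))

    def retrieve(self, query: str, k: int = 3):
        q = set(query.lower().split())
        scored = [
            (len(q & set(text.lower().split())), -step, text)
            for step, text in self.events
        ]
        scored.sort(reverse=True)  # most overlap first, then most recent
        return [text for _, _, text in scored[:k]]

mem = MemoryStore()
mem.add(1, "agent_7 posted criticism of policy_x")
mem.add(2, "agent_7 was muted by agent_3")
mem.add(3, "weather discussion in thread 12")
context = mem.retrieve("what does agent_7 think of policy_x", k=2)
```

The agent's prompt then carries two relevant lines rather than its entire interaction history, which is what keeps thousands of agents within token budgets.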

Stage Four: The ReportAgent Analyzes Results

A dedicated agent uses the ReAct pattern (reasoning alternating with concrete actions) to analyze all simulation data and produce human-readable forecasts. It traces influence cascades through the social network, identifies critical junctures where small changes would have altered outcomes, and generates confidence scores and markdown reports.
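A minimal sketch of the ReAct loop. The `llm` callable and the tool name are scripted stand-ins, not MiroFish's actual ReportAgent interface; the structure (reason, act, observe, repeat) is the point.

```python
import re

def parse_action(step: str):
    """Extract tool name and argument from 'Action: tool[arg]'."""
    m = re.search(r"Action:\s*(\w+)\[(.*?)\]", step)
    return m.group(1), m.group(2)

def react_loop(question, llm, tools, max_steps=5):
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)  # e.g. "Thought: ...\nAction: tool[arg]"
        transcript += step + "\n"
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        if "Action:" in step:
            name, arg = parse_action(step)
            observation = tools[name](arg)
            transcript += f"Observation: {observation}\n"  # feeds next reasoning step
    return None

# Scripted demo standing in for a real model:
responses = iter([
    "Thought: trace who started the cascade\nAction: trace_cascade[agent_7]",
    "Final Answer: agent_7 seeded the cascade",
])
answer = react_loop(
    "Who drove adoption?",
    llm=lambda _: next(responses),
    tools={"trace_cascade": lambda a: f"{a} influenced 12 agents"},
)
```

Each observation is appended to the transcript before the next model call, so every reasoning step is conditioned on what the previous action actually returned.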

Stage Five: Exploring the Simulated World

Users can chat directly with individual agents, examine their memory logs, verify that stated motivations align with simulated actions, and continue dialogues with the ReportAgent. This creates an interesting epistemic loop: you audit the simulation using the simulation's own internal records.

The Critical Problem: Persona Collapse

Research from Lee et al. revealed something troubling: when an LLM agent was prompted to select extraversion as a personality trait, it consistently behaved as an introvert during actual conversations. The model's underlying tendencies bleed through the persona. More broadly, LLMs exhibit consistent values and moral preferences across different persona contexts—the persona is a surface layer, not a deep behavioral rewrite.

This means your thousand agents with supposedly diverse MBTI profiles may all converge on a narrow behavioral distribution. If you're trying to simulate heterodox or contrarian responses, you're systematically blind to them.

It gets worse: research from the OASIS paper itself shows that LLM agents are more susceptible to herd behavior than real humans. They're more likely to follow others' opinions. So if your simulated population is systematically more conformist than real populations, your predictions will systematically underestimate resistance and heterodoxy. A policy simulation might show high adoption rates that would never materialize in reality because real humans have a much wider distribution of contrarian behavior.
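The calibration risk is easy to illustrate with a toy adoption model. The conformity weights below are invented for illustration; the takeaway is that a more herd-prone population systematically inflates adoption estimates from otherwise identical starting conditions.

```python
import random

def simulate_adoption(n_agents, conformity, rounds=10, seed=0):
    """Toy opinion dynamics: private inclination blended with crowd pressure."""
    rng = random.Random(seed)
    adopted = [rng.random() < 0.1 for _ in range(n_agents)]  # 10% early adopters
    for _ in range(rounds):
        share = sum(adopted) / n_agents
        for i in range(n_agents):
            if not adopted[i]:
                # weight between a low private adoption rate and the majority share
                p = (1 - conformity) * 0.02 + conformity * share
                adopted[i] = rng.random() < p
    return sum(adopted) / n_agents

human_like = simulate_adoption(1000, conformity=0.3)
llm_like = simulate_adoption(1000, conformity=0.7)  # more herd-prone population
```

If LLM agents sit closer to the high-conformity regime than real people do, every adoption forecast out of the simulation carries an upward bias that no amount of persona diversity fixes.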

Where MiroFish Actually Adds Value

The honest answer: scenario exploration and stress-testing, not precise probability estimates.

Regulatory impact analysis is a strong use case. Feed a draft regulation into the system and simulate behavior across complex markets. You surface unintended consequences that standard impact assessments miss because they treat market participants as passive rather than adaptive. OASIS research on misinformation spread analyzed 730,000+ posts and found that misinformation consistently appeared more frequently than official news, and over time official news lost traction faster. That's an emergent dynamic you can't extract from a static model.

Catastrophe modeling is genuinely compelling. A wildfire evacuation isn't just a physical event—it involves thousands of people deciding when to leave, which routes to take, how to respond to official communications. Traditional models treat policyholders as passive. MiroFish could model the behavioral dynamics that actually affect loss exposure. For reinsurers and catastrophe bond markets, where pricing tail risk of correlated behavior is particularly difficult, this kind of simulation could reveal dynamics that standard models structurally cannot represent.

Counterfactual scenario testing via mid-simulation interventions is undersold. You can inject breaking news, remove key agents, modify environmental variables, and compare emergent dynamics. "What if the university apologized on day three versus staying silent?" You can run both and see the cascade effects. This is where simulation adds value that regression models cannot provide—exploring non-linear dynamics and cascade effects that statistical models have no structural way to represent.
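The workflow can be sketched as two runs with the same seed that differ only in one injected event. The "apology calms half the angry agents" rule below is invented for illustration; the comparison pattern is the point.

```python
import random

def run(seed, intervene_at=None, rounds=10, n=500):
    """Toy unrest cascade with an optional mid-simulation intervention."""
    rng = random.Random(seed)
    angry = [rng.random() < 0.2 for _ in range(n)]  # initial discontent
    history = []
    for t in range(rounds):
        if t == intervene_at:  # e.g. "the university apologizes on day 3"
            angry = [a and rng.random() < 0.5 for a in angry]
        share = sum(angry) / n
        for i in range(n):
            if not angry[i]:
                angry[i] = rng.random() < 0.4 * share  # anger spreads socially
        history.append(sum(angry) / n)
    return history

baseline = run(seed=1)
with_apology = run(seed=1, intervene_at=3)
```

Comparing the two trajectories answers a relative question ("does the apology dampen the cascade?") rather than an absolute one, which is the regime where unvalidated simulations are most defensible.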

The Honest Take

MiroFish is genuinely innovative architecture doing something that couldn't be done five years ago. But it's not a crystal ball. It's a tool for exploring dynamics and stress-testing assumptions. The persona collapse problem means you're simulating a world more conformist than reality. The herd behavior susceptibility means you're underestimating resistance. Use it to surface hidden consequences and test edge cases. Don't use it to predict precise outcomes. The difference between those two things determines whether MiroFish is a valuable research tool or sophisticated theater.



#2173: Inside MiroFish's Agent Simulation Architecture

Corn
Alright, so we've got a genuinely interesting one today. The topic is MiroFish - the open-source multi-agent simulation engine that's been tearing up GitHub. The core question: how does it actually work under the hood? We're talking the full five-stage pipeline, how it builds knowledge graphs from raw documents, how it generates thousands of agents with distinct personalities and persistent memory, and how the OASIS framework from CAMEL-AI drives the actual simulation. And then the harder question - where does this kind of simulation-based prediction genuinely add value, and where is it just very sophisticated theater? There's also a critical angle on the limitations of LLM-driven agent simulations that I think most of the coverage has glossed over.
Herman
This is a topic I've been tracking closely. Fifty-four thousand GitHub stars, over eight thousand forks, hit the number one trending spot on March seventh - and the underlying architecture is genuinely interesting. The hype is real, but so are some of the structural problems. I want to get into both.
Corn
By the way, today's episode is powered by Claude Sonnet 4.6 - our friendly AI neighbor doing the writing while we do the talking. Anyway, let's start with the pipeline itself, because I think a lot of people have seen the headline - "AI simulates thousands of agents to predict the future" - without understanding what's actually happening mechanically. Walk me through stage one.
Herman
So everything starts with what the project calls seed material. You feed the system a document - a PDF, a markdown file, plain text, whatever. It could be a breaking news article, a policy draft, a financial report, or - and this is the most unusual demo case - the first eighty chapters of an eighteenth-century Chinese novel whose ending was lost. The system doesn't care what the content is. It treats all of these as the same kind of problem: a collection of entities, relationships, and dynamics that need to be extracted and structured.
Corn
And that extraction process is the GraphRAG step.
Herman
Right. The entity extractor pulls out nodes - people, organizations, events, concepts - and builds edges between them representing relationships. But here's what distinguishes GraphRAG from standard retrieval-augmented generation. Standard RAG retrieves semantically similar text chunks. GraphRAG builds a structured graph where you can traverse chains of relationships. So an agent querying the knowledge graph can follow a path - this person is affiliated with this organization, which has this relationship to that policy, which affects this demographic. The graph is stored as JSON and is queryable throughout the entire simulation. The output artifacts are a graph dot json and a graph summary dot json sitting in an immutable artifact folder for each run.
Corn
So the graph is the foundation that grounds all the agents in a shared reality. They're not just generating text into a void - they're operating within a structured representation of the world described by the seed material.
Herman
That's the design intent. And there are actually two memory layers here that are worth distinguishing. Individual memory - personal experiences, relationships specific to a given agent - and collective memory, which is the shared cultural context drawn from the knowledge graph that all agents can access. Both get injected at persona generation time, which is stage two.
Corn
Tell me about the persona generation, because this is where it gets either impressive or concerning depending on how you look at it.
Herman
Each agent gets a comprehensive profile. MBTI personality type, age, demographic background, professional expertise, behavioral tendencies. The system is trying to construct not just a role but a consistent character who will make decisions in a predictable way over the course of the simulation. An environment agent handles the configuration injection - it defines interaction rules, spatial constraints, temporal dynamics. The simulation config and ontology are both serialized to JSON before the run begins.
Corn
Now I want to flag something here, because we'll come back to it. The assumption baked into this design is that you can reliably maintain those personality distinctions across dozens or hundreds of interaction cycles. And the research on that is... not encouraging.
Herman
No, it's not. There's a specific finding from Lee et al. from last year where they prompted an LLM agent to select a personality trait, it selected extraversion, and then during actual conversations it consistently behaved like an introvert. The model's underlying tendencies bleed through the persona. And even more structurally, research shows that LLMs exhibit consistent values and moral preferences across different persona contexts - the persona is a surface layer, not a deep behavioral rewrite.
Corn
Which means your thousand agents with supposedly diverse MBTI profiles may all be converging on a relatively narrow behavioral distribution. That's a significant problem if you're trying to simulate heterodox or contrarian responses.
Herman
It compounds with another finding from the OASIS paper itself, which is that LLM agents are more susceptible to herd behavior than real humans. They're more likely to follow others' opinions. So if your simulated population is systematically more conformist than real populations, your predictions will systematically underestimate resistance and heterodoxy. You might run a policy simulation and see high adoption rates that would never materialize in reality because real humans have a much wider distribution of contrarian behavior.
Corn
Okay, so we've got the graph built and the agents generated. Stage three is the actual simulation, which runs on OASIS from CAMEL-AI.
Herman
OASIS is a multi-agent social interaction framework published on arXiv in November of twenty twenty-four. It has twenty-three authors, has been revised five times, and the repository has over four thousand stars. The key innovation is the architecture: five components working together. You have an environment server that's essentially a massive database tracking everything - posts, user profiles, follow relationships, all interactions. A recommendation system that decides what content each agent sees, using either a Twitter-style feed or a Reddit-style hot score algorithm. An agent module where each AI user lives and reasons. A scalable inferencer that handles computational load across multiple GPUs. And a time engine that gives agents a twenty-four-hour activity pattern so actions are realistically sequenced.
Corn
And MiroFish runs this on dual platforms simultaneously - both Twitter-like and Reddit-like environments in parallel.
Herman
Which is interesting because the dynamics are genuinely different. The Twitter environment is driven by who you follow and what gets recommended. The Reddit environment is driven by hot scores - upvotes, downvotes, post age. The same seed material might produce quite different emergent dynamics depending on which platform model you're using. Agents can take twenty-three distinct actions including liking, disliking, creating posts, commenting, following, muting, reposting, reporting content, and - this is important - doing nothing. Agents can choose inaction, which is actually a meaningful behavioral signal.
Corn
How does memory work during the simulation itself? Because if you're running thousands of agents through dozens of interaction cycles, you've got a context window explosion problem.
Herman
This is where Zep Cloud comes in. Zep maintains a temporal knowledge graph of each agent's interactions. Information from conversations is stored as nodes and relations, capturing how facts change over time and linking related concepts. It combines graph structure with semantic embedding search and keyword matching. The retrieval latency is under one hundred milliseconds, so it doesn't slow down the agent loop. What Zep is solving is tractability - you can't just append every agent's full history to their context window. You need a managed, queryable memory layer that surfaces the relevant past interactions without blowing up the token budget.
Corn
I want to talk about the "God's eye view" intervention capability, because I think that's actually one of the more genuinely useful features that gets undersold.
Herman
The system lets you inject variables mid-simulation. A breaking news event, removal of a key agent, modification of environmental variables. The explicit design purpose is counterfactual scenario testing. "What if the university issued an apology on day three versus staying silent?" You can run both scenarios and compare the emergent dynamics. This is actually where simulation adds real value that statistical models structurally cannot provide - you can explore non-linear dynamics and cascade effects that a regression model has no way to represent.
Corn
Then stage four is the ReportAgent.
Herman
A dedicated agent that analyzes all the simulation data and produces a human-readable forecast using the ReAct pattern - reasoning and acting, where it alternates between explicit reasoning steps and concrete actions, with each action yielding observations that feed back into the next reasoning step. The toolset includes graph traversal utilities, text analysis, statistical functions, visualization generators, and the ability to query specific agents about their decision-making. Critically, it traces influence cascades through the social network and identifies what they call critical junctures - points where small changes would have produced different outcomes. The amadad fork produces verdict dot json with confidence scores, summary dot json, and a full report in markdown.
Corn
And stage five opens the simulated world for direct exploration - you can chat with any individual agent, examine their memory logs, continue dialogues with the ReportAgent.
Herman
The Node.js frontend on port three thousand, Flask API on port five thousand and one. You can verify that an agent's stated motivations actually align with their simulated actions by examining the memory logs. Which is an interesting epistemic move - you're using the simulation's own internal records to audit the simulation.
Corn
Let's talk about use cases, because I want to be genuinely critical about which ones hold up and which ones don't. The obvious applications are policy testing, PR crisis simulation, market forecasting. What actually works?
Herman
Scenario exploration and stress-testing is where I think this genuinely earns its keep. Not delivering precise probability estimates, but surfacing dynamics that might otherwise be missed. Regulatory impact analysis is a good example - feeding a draft regulation to simulate behavior across complex markets. You can identify unintended consequences that a standard impact assessment would miss because it's treating market participants as passive rather than adaptive. The OASIS research demonstrated this with misinformation spread - analyzing over seven hundred thirty thousand posts, they found that misinformation consistently appeared in more posts than official news, and over time official news lost traction faster while misinformation remained active longer. That's an emergent dynamic that you can't get from a static model.
Corn
The catastrophe modeling angle is one I find genuinely compelling. A wildfire evacuation scenario is not just a physical event - it involves thousands of people making decisions about when to leave, which routes to take, how to respond to official communications. Traditional catastrophe models treat policyholders as passive recipients of events. This kind of simulation could actually model the behavioral dynamics that affect loss exposure.
Herman
For reinsurers and catastrophe bond markets, where tail risk of correlated behavior is particularly hard to price, that's a real gap. The scenario where everyone decides to evacuate at the same time creates a different loss profile than staggered evacuation. Current models can't capture that.
Corn
Now let's get into where it's theater. Because I think the output looks authoritative in a way that actively obscures its limitations.
Herman
This is the core epistemological problem. The system produces a report with agent names, quoted reasoning, influence cascades, confidence scores. It looks like rigorous analysis. But the Larooij and Tornberg systematic review from last year - published in Artificial Intelligence Review - reviewed thirty-five papers on generative agent-based models and found something devastating. They identify three specific ways LLMs make validation harder. First, black-box opacity - LLMs are fundamentally stochastic, the same input can produce different outputs across runs. Second, representation failure - LLMs often misrepresent groups through exaggerated stereotypes rather than accurate representations. Third - and this is the one that keeps me up at night - data leakage.
Corn
Explain the data leakage problem, because I think this is underappreciated.
Herman
Since LLMs are trained on scientific literature, when OASIS "discovers" that misinformation spreads faster than official news, you have to ask: is that an emergent dynamic from the simulation, or is it just the model reproducing what it read in the social media research literature it was trained on? The model has seen thousands of papers documenting exactly this phenomenon. What appears as emergence may be the model performing a pattern it already learned. This is genuinely hard to distinguish from the outside, and the paper is explicit that almost none of the thirty-five studies they reviewed attempted to validate against empirical data with statistical rigor. Most rely on face validity - the simulation looks right to human observers.
Corn
And MiroFish has published zero benchmarks comparing predictions against historical outcomes.
Herman
None. The demos are compelling illustrations, not evidence of predictive accuracy. The Wuhan University public opinion simulation and the Dream of the Red Chamber lost ending - these are interesting demonstrations, but there's no ground truth to validate against. Which actually makes the Dream of the Red Chamber case the most honest use of the system - you're explicitly exploring a space where ground truth doesn't exist.
Corn
There's also the economic behavior divergence problem that I think is particularly damaging for the financial use cases.
Herman
Research from Ross et al. in twenty twenty-four found that LLMs show weaker loss aversion, similar risk aversion, and stronger time discounting compared to humans. Loss aversion is one of the most robust findings in behavioral economics - people feel losses roughly twice as strongly as equivalent gains. If your simulated agents don't replicate that asymmetry, any financial scenario you run will produce systematically distorted results. That's not a minor calibration issue, that's a structural problem with using LLM agents to model financial behavior.
Corn
And then there's the bias cascade. The gender bias finding is striking - LLMs are three to six times more likely to generate gender-stereotypical behavioral patterns than humans. That multiplier compounds across thousands of agents.
Herman
Cultural bias is similarly structural. Training data is predominantly from English-speaking and Western contexts. If you're trying to simulate public opinion dynamics in a non-Western context - which is ironically where MiroFish's Chinese origin might suggest it would be used - the underlying models have limited understanding of those cultural interaction patterns. You're getting a Western cultural overlay on whatever scenario you're simulating.
Corn
So where does that leave us on the question of genuine value versus theater?
Herman
The honest framing - and this comes from David Borish's analysis - is that the most valuable use is surfacing scenarios and dynamics that might otherwise be missed, not delivering precise probability estimates. Comparing scenarios against each other rather than against ground truth. Running a simulation to ask "is our proposed response to this crisis likely to make things better or worse compared to an alternative response" is a legitimate question this can help answer. Claiming "there is a seventy-three percent probability of outcome X" based on a simulation with no validation history is theater.
Corn
The documentation actually warns users to start with fewer than forty simulation rounds while testing. Which tells you something about the API costs involved in running this seriously.
Herman
The recommended model is Alibaba's Qwen-Plus via DashScope, which is cost-optimized. But a serious simulation with thousands of agents through dozens of interaction cycles - each agent decision requires an LLM call - generates substantial API bills. The free tier of Zep Cloud is described as sufficient for simple usage, but scaling to thousands of agents with rich interaction histories likely requires paid plans. The cost structure is a real barrier to the kind of rigorous, repeated validation that would actually tell you whether the predictions are any good.
Corn
I want to come back to the Chen Tianqiao investment angle, because I think it's actually revealing about what this technology might be genuinely useful for. He built Shanda Group through online gaming - building virtual worlds where millions of people interacted. His thirty million yuan investment in MiroFish is arguably a continuation of the same thesis.
Herman
The argument is that simulated social environments can generate valuable insights about human behavior. The gaming background is interesting because games are environments where you have ground truth - you can observe what players actually do and compare it to what simulated players would do. If the MiroFish team were to use gaming data as a validation environment, you'd actually have a path to calibration. You could run simulations of player behavior in a game environment where you know the outcomes and see how well the agents track reality.
Corn
That would be a genuinely interesting research direction. Use a closed system with observable ground truth to validate the simulation methodology, then extend to open-ended social prediction.
Herman
CAMEL-AI's blog actually frames OASIS in terms of OpenAI's AGI level taxonomy. They position it as infrastructure toward what OpenAI calls Level Five - AI agents capable of functioning as an entire organization. A massive multi-agent system with high-fidelity simulation. So in that framing, MiroFish is not just a prediction tool - it's an early prototype of what AGI-level organizational simulation might look like. Which is an ambitious claim, but the architectural pieces are genuinely interesting.
Corn
There's also the reinforcement learning extension that I think has underappreciated implications. OASIS supports using the simulation as an RL environment - training specific agents to achieve objectives by rewarding them for actions that produce desired outcomes in the simulated population.
Herman
This is a step beyond prediction into optimization. Not just "what will happen?" but "what actions should we take to produce the outcome we want?" If you train an agent to optimize influence in a simulated population, you're essentially building an automated influence campaign optimizer. The ethical implications of that are significant and not really discussed in the project's documentation.
Corn
Right, because the same tool that a policy team uses to test whether their public health messaging will be effective is structurally identical to a tool that optimizes disinformation campaigns. The dual-use concern is real.
Herman
The combination thesis is probably the most intellectually honest position on where this goes. Neither simulation alone nor statistical modeling alone is sufficient. Statistical models can't capture emergent dynamics. Simulations can't be validated without empirical grounding. The most interesting near-term development would be combining MiroFish-style simulation with conventional forecasting to cross-validate outputs. Use simulation to generate hypotheses about which dynamics might emerge, use statistical models to test whether historical data supports those dynamics, iterate.
Corn
What would you actually tell someone who wanted to use this for something consequential?
Herman
Be very clear about what question you're asking. If the question is "help me understand the space of possible outcomes and identify dynamics I might not have considered," this is a legitimate tool for that. If the question is "give me a probability estimate I can act on," you need empirical validation before you trust those numbers for anything high-stakes. Start with scenarios where you have historical analogues you can compare against. Run multiple seeds and multiple random configurations to understand how sensitive the outputs are to initial conditions. If the predictions are wildly sensitive to small changes in persona configuration, that's a signal that you're not in a regime where the simulation is tracking reality.
Corn
And be honest about the cost. Not just API costs, but the cost of doing this rigorously. Running enough simulations to get statistically meaningful outputs, validating against empirical data, iterating on the persona generation to reduce the known biases - that's a serious research program, not a quick deployment.
Herman
The vibe-coding origin story - built in ten days by a single developer - is genuinely impressive as a demonstration of how mature the underlying components have become. LLMs, GraphRAG, OASIS, Zep - the infrastructure has reached a point where a capable developer can assemble something functionally novel very quickly. But the maturity of the components doesn't transfer to the maturity of the validation methodology. That part is still unsolved, not just for MiroFish but for the entire field of generative agent-based modeling.
Corn
The systematic review finding that most of thirty-five papers rely on face validity is damning for the field, not just for this project. The simulation looks plausible, therefore it's valid - that's not a scientific standard.
Herman
The plausible theater framing is the right one. A system can produce plausible-sounding forecasts without being accurate, and the sophistication of the output - detailed reports with agent-level reasoning - can actually make it harder to evaluate whether the predictions track reality. The output looks authoritative precisely because it's so detailed. That's a trap.
Corn
Okay, practical takeaways. What should people actually do with this?
Herman
If you're a developer or researcher, the repository at six six six g h j slash MiroFish is worth exploring just to understand the architecture. The five-stage pipeline is a well-thought-out design pattern for this class of problem. The GraphRAG integration is genuinely interesting. Clone it, run the demo, understand how the pieces fit together. The amadad English fork strips out the frontend and adds Claude and Codex CLI support if you want a cleaner technical exploration.
Corn
For practitioners thinking about deploying something like this for real decisions - whether that's policy evaluation, market research, crisis planning - the key is to treat the output as a structured brainstorming tool rather than a forecasting tool. It's a way of systematically exploring scenario space, not a way of computing probabilities. If you hold it to the first standard, it can genuinely add value. If you hold it to the second, you'll be misled by authoritative-looking outputs that have no validated accuracy.
Herman
The field needs a benchmark. Some standardized set of historical scenarios where outcomes are known, against which you can evaluate whether any given simulation approach actually tracks reality. Until that exists, every claim about predictive accuracy is essentially unverifiable. The community around MiroFish could do something genuinely valuable by developing and publishing that benchmark.
Corn
And the data leakage problem needs to be taken seriously. If the emergent dynamics you're observing are just the model reproducing phenomena from its training data, you're not simulating reality - you're simulating the model's learned representation of reality. Those are not the same thing, and distinguishing them is a hard open problem.
Herman
The most honest use of MiroFish right now is probably the Dream of the Red Chamber case - exploring creative and narrative possibility spaces where there's no ground truth to be wrong about. The moment you start treating the outputs as predictions about the real world, you need a validation methodology that doesn't yet exist.
Corn
Alright, that's a solid place to land. Genuinely impressive engineering, genuinely unsolved validation problem, and a field that needs to be honest about which of those two things is dominant right now.
Herman
The infrastructure is ready. The epistemology isn't.
Corn
Thanks as always to our producer Hilbert Flumingtop for keeping this whole operation running. Big thanks to Modal for providing the GPU credits that power this show - if you're building anything that needs serverless GPU infrastructure, they're worth a look. This has been My Weird Prompts. If you're finding value in these episodes, a quick review on your podcast app genuinely helps us reach new listeners. Until next time.
Herman
See you then.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.