#2506: Squashing Database Migrations Without Breaking Production

How to safely squash old migrations, cut deploy times, and generate schema documentation at version boundaries.

Episode Details
Episode ID
MWP-2664
Published
Duration
29:57
Audio
Direct link
Pipeline
V5
TTS Engine
chatterbox-regular
Script Writing Agent
deepseek-v4-pro

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

If you've ever opened a migrations folder and felt like you're on an archaeological dig through years of schema decisions, you're not alone. The question at the heart of this episode is simple: why do we accept sixty migration files as normal, and how can we safely compact them?

The Core Technique

The mechanics of squashing are surprisingly straightforward and tool-agnostic. The five-step process works across Flyway, Liquibase, Prisma, and most other migration tools (a shell sketch of the first three steps follows the list):

  1. Apply all existing migrations to a fresh database instance (Testcontainers works well)
  2. Dump the resulting schema as raw SQL
  3. Create a single baseline migration file from that dump
  4. Update the migration history table to mark all prior migrations as already applied
  5. Delete the old migration files
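
A minimal shell sketch of steps 1–3, assuming PostgreSQL and the Flyway CLI; database names, credentials, and file paths are illustrative. Step 4 is tool-specific — with Flyway's B-prefix baselines it happens implicitly, as the next section describes:

```bash
# 1. Apply every existing migration to a fresh, throwaway database
docker run -d --name squash-db -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:16
flyway -url=jdbc:postgresql://localhost:5432/postgres \
       -user=postgres -password=secret \
       -locations=filesystem:./migrations migrate

# 2. Dump the resulting schema as raw SQL (schema only, no data,
#    and without the migration history table itself)
pg_dump --schema-only --no-owner \
        --exclude-table=flyway_schema_history \
        "postgresql://postgres:secret@localhost:5432/postgres" > schema_dump.sql

# 3. Turn the dump into a single baseline migration file
cp schema_dump.sql ./migrations/B250__squashed_baseline.sql

docker rm -f squash-db
```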

Why Production Is Safe

The key insight that makes this work is how baseline files behave differently on fresh vs. existing databases. Using Flyway's convention as an example: a baseline file (prefixed with B instead of V) only runs on empty databases. On an existing production database that already has a schema history table, Flyway sees it's already at or above that version and skips the baseline entirely.

This means the cardinal rule is simple: never squash migrations that have already been deployed. Only squash the earliest migrations — the ones so old that nobody needs to build a database at that intermediate version anymore. Leave recent migrations untouched.

Performance Gains Are Real

A benchmark from Mike Kowalski demonstrated concrete improvements: a project with 35 migration files saw a 64% speedup (about 500ms saved) after squashing. A project with 50 files saw roughly a 20% speedup (about 1,700ms saved). The gains aren't linear — they depend heavily on whether migrations are thrashing (creating and dropping tables) or purely additive.

The practical threshold where squashing becomes worthwhile is somewhere north of 50 files, closer to 100. Below that threshold, squashing is usually overkill; as the count climbs past it, the cognitive overhead of understanding the schema history becomes as big a problem as the performance cost.

The Documentation Gap

One area where most squashing guides fall short is generating diagrams and documentation at version boundaries. The tools exist — dbdocs, SchemaCrawler, pg_dump with schema-only flags — but there's no established workflow for creating a "schema changelog" that shows human-readable diffs between versions.

The ideal output would be something like: "We split the users table into users and user_profiles, and added a foreign key from orders to a new payment_methods table." Raw SQL diffs don't give you that. Getting semantic diffs requires schema refactoring detection, and the tooling just isn't mature yet.
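
A crude approximation of the missing workflow, assuming PostgreSQL: schema-only dumps at each version boundary, diffed as plain text. File names are illustrative, and the output is exactly the raw SQL diff lamented above, not a semantic one:

```bash
pg_dump --schema-only --no-owner "$DATABASE_URL" > schema_v2.0.sql
diff -u schema_v1.0.sql schema_v2.0.sql > schema_v1.0-to-v2.0.diff
```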

The Linear vs. Snapshot Tension

The reason migration tools default to linear history isn't that nobody thought of snapshots — it's that snapshots don't solve the data migration problem. When version two splits the users table into users and user_profiles, the migration doesn't just change the schema; it moves the data. A snapshot tells you the final shape but not how to transform your data from version one to version two.

Squashing is the compromise: keep the linear model for recent history where data transformations matter, and collapse ancient history into a snapshot where nobody needs to replay those transformations anyway.

Practical Advice

Squash old migrations aggressively, keep recent ones intact, and accept that the linear model serves a real purpose for anything still in play. And don't delete the old migrations — keep them in a separate folder or version control tag. If you ever need to reproduce a bug that existed against an old schema version, you'll be glad you did.


Transcript

Corn
Daniel sent us this one — he's been staring at a codebase with about sixty migration files and he's wondering why we all just accept that as normal. His core question is: why not treat database schema evolution the way we treat codebase versioning? Compact a batch of migrations up to a certain point, call that a version, generate diagrams and documentation at that boundary, and track big-picture structural changes instead of every tiny alter-table tweak. He wants to know how to actually do this without breaking things.
Herman
This is one of those things where the instinct is completely right but the tooling has been weirdly slow to catch up. By the way, today's episode script is being generated by DeepSeek V four Pro.
Corn
So where do we even start with this? Because I think Daniel's frustration is something a lot of developers feel but don't articulate — you open a migrations folder and it's just this archaeological dig of every schema decision anyone made over three years.
Herman
Most of those decisions are irrelevant now. The migration that added a column you later dropped in migration forty-two — why are we still carrying that around? It's not history, it's clutter. The actual history that matters is: what did the schema look like at version one-point-oh, what did it look like at version two-point-oh, and what changed between them.
Corn
Here's the tension I see immediately. Migration tools — Flyway, Liquibase, Prisma, whatever — they're fundamentally linear. They want to replay history step by step. Squashing is a bolt-on operation. It's not native to how these tools think about the world.
Herman
And that's exactly why this is a more interesting question than it sounds. You're fighting the grain of the tooling. GitLab is actually one of the few projects that built squashing into their workflow natively — they introduced automated migration squashing back in version sixteen-point-three in twenty twenty-three, and now it runs automatically at scheduled milestones. There's a Rake task, bundle exec rake gitlab:db:squash, that removes old migrations, updates the structure file, and cleans up specs and RuboCop TODOs. It's fully automated.
Corn
GitLab's been doing this for about three years now. What's their cadence? When do they actually trigger a squash?
Herman
They do it at specific milestones — versions ending in point-two, point-five, point-eight, and point-eleven. So it's a scheduled ritual, not an event-driven one. That's actually a pretty smart way to do it, because it removes the subjective "do we have too many migrations yet" debate. The answer is: it's on the calendar, we squash at these points, end of discussion.
Corn
I like that. It takes the decision off the table. But I want to dig into the mechanics, because I think that's where people get nervous. Daniel asked specifically how to do this without breaking things, and I think the fear is real — you squash fifty migrations into one baseline file, and suddenly you're worried that production is going to try to reapply something it already applied.
Herman
This is the most important thing to understand, and it's actually simpler than people think. The core technique is tool-agnostic. Step one: apply all your existing migrations to a fresh database instance — you can use something like Testcontainers for this. Step two: dump the resulting schema as raw SQL. Step three: create a single baseline migration file from that dump. Step four: update the migration history table to mark all those prior migrations as already applied. Step five: delete the old migration files.
Corn
The safety mechanism for production?
Herman
Flyway's baseline migration concept is the canonical example here, and it's elegant. A baseline migration file — prefixed with B instead of V, like B250__ — is a single script that builds a particular version of the database. On an empty database, Flyway applies the B file then all subsequent V files. But on an existing production database that already has a schema history table, Flyway skips the baseline entirely. It sees that it's already at or above that version. So the production database never touches the squashed file.
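
A sketch of the behavior Herman describes, with an illustrative directory layout and version numbers; the commands are standard Flyway CLI, but treat the specifics as assumptions:

```bash
# migrations/
#   B250__squashed_baseline.sql   <- full schema as of version 250 (B prefix)
#   V251__add_invoices.sql        <- normal versioned migrations continue here

# On an EMPTY database: Flyway applies B250, then V251 and anything later.
# On an EXISTING database at version >= 250: B250 is skipped entirely.
flyway -locations=filesystem:./migrations info     # inspect what would run
flyway -locations=filesystem:./migrations migrate  # apply it
```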
Corn
That's the key insight. The baseline only runs on fresh builds. Production doesn't care. It's already past that point.
Herman
And this is why the cardinal rule of squashing is: never squash migrations that have already been deployed. The safe pattern is to squash only the earliest migrations — the ones so old that nobody needs to build a database at that intermediate version anymore. Leave recent migrations untouched.
Corn
You're drawing a line in the sand. Everything before this point gets compacted into a baseline. Everything after stays as individual migrations. And you never, ever squash something that's currently running in production somewhere.
Herman
And if you think about it, this maps perfectly onto how we treat code. You don't go back and rewrite git history for commits that are already on the main branch and deployed. You might squash feature branches before merging, but once it's merged, that history is frozen. Same principle here.
Corn
I want to talk about the performance angle, because I think that's what actually motivates a lot of teams to do this. Sixty migration files doesn't just look messy — it's slow.
Herman
There was a really nice benchmark from Mike Kowalski a few years back. He tested squashing using Testcontainers. A project with thirty-five migration files saw a sixty-four percent speedup after squashing — about five hundred milliseconds saved. A project with fifty files saw about a twenty percent speedup, roughly seventeen hundred milliseconds saved.
Corn
The gains are real but they're not linear. Fifty files doesn't give you twice the benefit of thirty-five.
Herman
Right, because it depends on what the migrations are actually doing. If you have migrations that create tables and later migrations that drop those same tables, squashing eliminates all that wasted work. The new baseline just creates the final state. But if your migrations are mostly additive — just building up the schema without thrashing — the speedup is smaller.
Corn
Which actually raises a point about when squashing is worth doing at all. If you've got fifteen migrations and they're all cleanly additive, the overhead is negligible. You're solving a problem you don't have.
Herman
Kowalski's advice was pretty direct: when the number of migrations is low, this approach may be overkill. I think the threshold where people start feeling real pain is somewhere north of fifty files, maybe closer to a hundred. At that point, the cognitive overhead of understanding the schema history is as big a problem as the performance hit.
Corn
Let's talk about the documentation side, because Daniel specifically mentioned generating diagrams and documentation at version boundaries. This is the part where I think most squashing guides fall short. They'll tell you how to dump the SQL and create the baseline, but they say almost nothing about automatically generating an ER diagram or schema documentation at each version point.
Herman
This is genuinely an underexplored area. The tools exist — dbdocs, SchemaCrawler, even just pg_dump with the schema-only flag piped into a diff tool — but there's no established best practice for what I'd call a schema changelog as documentation. Nobody has really productized the idea of "here's what the schema looked like at version two-point-oh, here's the ER diagram, and here's a human-readable diff from version one-point-oh."
Corn
That's exactly what Daniel wants. He wants to look at a version and see the big-picture structure, not forty individual alter-table statements.
Herman
I think the closest thing to a workflow here would be: at each squash point, you generate a schema dump, you run a schema visualization tool against that dump, you generate a diff against the previous version's dump, and you commit all of that — the baseline SQL, the ER diagram, the diff — as a versioned artifact. It's basically treating your schema documentation the way you'd treat a changelog.
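
One possible shape for that per-version ritual — a sketch assuming PostgreSQL, with illustrative paths and version numbers rather than an established convention:

```bash
VERSION="2.0"
PREVIOUS="1.0"

# Schema dump at the version boundary
pg_dump --schema-only --no-owner "$DATABASE_URL" > "docs/schema/v${VERSION}.sql"

# Raw diff against the previous version's dump
diff -u "docs/schema/v${PREVIOUS}.sql" "docs/schema/v${VERSION}.sql" \
    > "docs/schema/v${PREVIOUS}-to-v${VERSION}.diff"

# Commit the artifacts together
git add docs/schema
git commit -m "Schema snapshot and diff for v${VERSION}"
```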
Corn
The diff is the interesting part to me, because a raw SQL diff between two schema dumps is not human-readable in the way Daniel wants. He wants to see "we split the users table into users and user_profiles, and we added a foreign key from orders to a new payment_methods table." That's not what a SQL diff gives you.
Herman
No, a SQL diff gives you a wall of alter-table statements. To get the kind of semantic diff Daniel wants, you'd need something that understands the schema at a higher level — something that can say "this column moved from table A to table B" rather than "column X was dropped from A and added to B." That's a much harder problem. It's essentially schema refactoring detection, and the tooling is just not mature.
Corn
We're in this awkward middle ground. The mechanics of squashing are well understood and safe. The documentation layer is still very manual. You can generate the artifacts, but making them readable and useful is extra work.
Herman
I think this is where the "versioned snapshot versus linear history" tension really comes into focus. The migration tooling world has settled on linear history as the default. It works, it's reliable, it's well understood. But Daniel's instinct — that what he actually wants is a series of tagged, documented snapshots — is fundamentally a different mental model. It's more like Git tags than Git commits.
Corn
Should the industry move toward that model? A more Git-like approach to database versioning where you can check out any version directly rather than replaying history?
Herman
I think the answer is yes, but with a big caveat. The reason migration tools are linear is that database state is stateful in a way that code is not. You can check out an old commit of a codebase and it just works — the files are different. But checking out an old version of a database schema when you have live data is a fundamentally harder problem. The linear migration model exists because it solves the data migration problem: each step tells you how to transform both the schema and the data.
Corn
That's the piece that a pure snapshot model doesn't handle. If version two splits the users table into users and user profiles, the migration doesn't just change the schema — it moves the data. A snapshot of version two's schema doesn't tell you how to get your data from version one's shape to version two's shape.
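
A toy version-two migration (hypothetical tables and columns) makes the point concrete — the schema change and the data movement travel in the same script, and a snapshot of the final schema captures only the first half:

```bash
psql "$DATABASE_URL" <<'SQL'
BEGIN;
CREATE TABLE user_profiles (
    id      BIGSERIAL PRIMARY KEY,
    user_id BIGINT NOT NULL REFERENCES users(id),
    bio     TEXT,
    avatar  TEXT
);

-- The part no snapshot of the destination schema can express:
INSERT INTO user_profiles (user_id, bio, avatar)
SELECT id, bio, avatar FROM users;

ALTER TABLE users DROP COLUMN bio, DROP COLUMN avatar;
COMMIT;
SQL
```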
Herman
And this is why Flyway, Liquibase, and pretty much everyone else settled on the linear model. It's not that they didn't think of snapshots. It's that snapshots don't solve the data problem. Squashing is a compromise — you keep the linear model for recent history where data transformations matter, and you collapse ancient history into a snapshot where nobody needs to replay those transformations anyway.
Corn
The practical advice for Daniel is: squash the old stuff aggressively, keep recent migrations intact, and accept that the linear model is serving a real purpose for anything that's still in play.
Herman
And I'd add one more thing: don't delete the old migrations. Keep them in a separate folder, or in a tag in version control, or somewhere you can get at them if you really need to. Because there is a real risk with squashing, and it's the "squash and forget" problem.
Corn
What's the scenario where you'd actually need those old migrations?
Herman
Say you're trying to reproduce a bug that existed in production six months ago, and the schema at that point was version one-point-two-point-three. If you've squashed everything before version two into a single baseline, you can't rebuild version one-point-two-point-three exactly. You can only build the final squashed state. Now, is this a common scenario? Most debugging happens against recent schema versions. But when you need it, you really need it.
Corn
The trade-off is: you accept a tiny risk of not being able to perfectly reconstruct an ancient intermediate state, in exchange for a much cleaner and faster development experience. That seems like a good trade-off for most teams.
Herman
I think so. And you can mitigate the risk by just archiving the files instead of deleting them. There's no law that says squashing means permanent deletion. You can move the old migrations to an archive directory that's excluded from your migration runner. They're still there if you need them, but they're not in the critical path.
Corn
Let's talk about semantic versioning for databases, because Daniel mentioned versioning the database the way you'd version a codebase. This is actually a thing.
Herman
Flyway's documentation describes a full semantic versioning system for databases — version number formats, storage locations, the whole thing. And then there's SchemaVer, which Snowplow introduced specifically for data schemas. The idea is that schema versions have semantic meaning: a major version bump means a breaking change, a minor bump means additive changes that are backward compatible, and a patch bump means a fix that doesn't change the structure.
Corn
That maps onto the squashing question pretty naturally. A major version is a squash point. You compact everything from the previous major version into a baseline, and you start a new migration chain for the new major version.
Herman
That's exactly how I'd structure it. Version one-point-oh gets a baseline. All the migrations from one-point-one through one-point-nine are individual files. When you ship version two-point-oh, you squash everything from one-point-oh through the last one-point-x into a new baseline, and you start fresh. The baseline file for version two is the complete schema as it stands at the two-point-oh boundary.
Corn
The documentation artifacts — the ER diagram, the schema dump, the changelog — they get generated at each major version boundary. That's the deliverable Daniel wants.
Herman
And I want to come back to something from the Bytebase best practices, because I think they provide a useful framework for how to actually operationalize this. Their first recommendation is to prefer migration-based version control over state-based. That's the linear model we've been talking about. But they also emphasize atomic commits — one change per file, one change per ticket. Version all artifacts, not just tables but procedures, views, permissions. Automate testing and validation. Document all changes with links to issue trackers. Plan for rollbacks.
Corn
The rollback piece is interesting in the context of squashing. If you've squashed fifty migrations into a baseline, you can't roll back an individual change from migration twenty-three. You can only roll forward.
Herman
That's actually fine for squashed migrations, because by definition they're so old that nobody would want to roll them back. The rollback planning should focus on recent migrations — the ones you haven't squashed. For those, you either write down-migrations or you design changes to be backward compatible so you can deploy a fix forward.
Corn
The backward-compatible approach is underrated. Instead of writing a migration that drops a column, you write a migration that stops using the column, then a later migration that actually drops it. If something goes wrong with the first change, you just start using the column again. No rollback needed.
Herman
That's the expand-and-contract pattern. It's slower, but it's much safer. And it works beautifully with squashing because by the time you squash, the contract phase is long done and the column is gone. The baseline just doesn't include it.
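
Expand-and-contract as two separate migrations, sketched with hypothetical table and column names. Phase one ships first; phase two ships only after nothing reads the old column, so "rolling back" phase one just means using the column again:

```bash
# V31 (expand): add the replacement column and backfill it
psql "$DATABASE_URL" <<'SQL'
ALTER TABLE orders ADD COLUMN shipped_at timestamptz;
UPDATE orders SET shipped_at = legacy_ship_date::timestamptz;
SQL

# V35 (contract, a release or more later): drop the old column
psql "$DATABASE_URL" <<'SQL'
ALTER TABLE orders DROP COLUMN legacy_ship_date;
SQL
```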
Corn
We've covered the mechanics, the safety considerations, the documentation gap, and the versioning model. What about the organizational side? When should a team actually decide to squash? We mentioned GitLab's scheduled approach, but I think a lot of smaller teams don't have the discipline to put it on a calendar.
Herman
There are a few common triggers. One is the migration count threshold — some teams squash when they hit two hundred files, which is a number I've seen mentioned in Flyway community discussions. Another is the major release boundary, which aligns with the semantic versioning approach. A third is the "we noticed CI is slow" trigger — when the migration step in the test suite starts taking noticeably longer, that's a signal.
Corn
I'd add a fourth: the "new team member is confused" trigger. When someone joins the team and asks "why are there eighty migration files and which ones actually matter," that's a sign you should have squashed forty migrations ago.
Herman
That's a really good one, because it captures the cognitive overhead problem. The performance gains from squashing are nice, but the clarity gain is the real win. A new developer can look at three baseline files and twenty recent migrations and understand the schema history. They can't do that with a hundred and twenty individual migration files.
Corn
The communication piece matters too. When you squash, you need to tell the team: we've created a new baseline at version X, the old migrations have been archived, and from now on all new migrations should be written against the squashed schema. Otherwise someone's going to try to write a migration that depends on a table that only existed in migration forty-seven, and they won't understand why it fails.
Herman
GitLab handles this with their automated squash pipeline — it's not just a script someone runs locally, it's a merge request that goes through review. The whole team sees it. I think that's the right approach for any team above about three people. Don't let squashing be a side project someone does on a Friday afternoon. Make it a tracked, reviewed change.
Corn
Let's talk about some concrete tooling options, because Daniel asked how to actually do this. What's the practical path for someone using a typical stack — say, PostgreSQL with a migration framework like Flyway or a Rails-style migration system?
Herman
If you're using Flyway, it's built in. You create a baseline migration with the B prefix, you point it at your squashed schema dump, and Flyway handles the rest. The documentation from Redgate is excellent on this — they walk through the whole process with SQL Server examples, but it works the same way for PostgreSQL.
Corn
If you're not using Flyway? Say you're in a Rails shop with Active Record migrations?
Herman
Then you're doing it more manually, but the principle is the same. You can use Testcontainers to spin up a fresh database, run all migrations, dump the schema with pg_dump, and replace your migration files with a single SQL file that represents the squashed state. The tricky part is updating the schema_migrations table so Rails knows those migrations have already been applied. You need to insert rows into that table for every squashed migration.
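
A hedged sketch of that step for a Rails-style setup — mark every squashed migration as applied so the runner never retries it. The version strings are illustrative timestamps:

```bash
psql "$DATABASE_URL" <<'SQL'
INSERT INTO schema_migrations (version) VALUES
    ('20190104120000'),
    ('20190212093000'),
    ('20190301151500')
ON CONFLICT DO NOTHING;  -- harmless if a row already exists
SQL
```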
Corn
That's the part where people get nervous, because messing with the schema_migrations table feels like surgery. But it's actually straightforward — you're just telling the framework "these have already run, don't run them again."
Herman
You can verify it by running the migration task on a fresh database. If it applies your baseline and then stops, you've done it right. If it tries to apply migrations you thought you squashed, something went wrong.
Corn
I want to circle back to the documentation piece one more time, because I think there's a practical workflow Daniel could adopt right now that doesn't require waiting for better tooling. At each squash point, you run a schema visualization tool — something like SchemaSpy or dbdocs — against the squashed database. You generate an ER diagram as an image or a PDF. You generate a markdown file that describes each table and its purpose. And you commit those alongside the baseline SQL.
Herman
You tag that commit with the version number. So if you ever want to see what the schema looked like at version two-point-oh, you check out the tag and open the diagram. You don't need to replay migrations or spin up a database. The documentation is right there.
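
A sketch of the diagram-and-tag step, assuming SchemaSpy against PostgreSQL; the flags, driver path, and tag name are illustrative:

```bash
# Generate ER diagrams and browsable schema docs
java -jar schemaspy.jar -t pgsql -dp postgresql-driver.jar \
     -host localhost -db mydb -u postgres -p secret \
     -o docs/schema/v2.0-diagrams

git add docs/schema/v2.0-diagrams
git commit -m "ER diagrams for schema v2.0"
git tag schema-v2.0   # later: git checkout schema-v2.0 to view them as-of v2.0
```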
Corn
The one thing I'd add is a human-written summary of what changed. The auto-generated diff between version one and version two might show you that the users table was split, but it won't tell you why. A paragraph in the changelog that says "we split users into users and user profiles to support multiple profile types per user" is worth more than any auto-generated artifact.
Herman
A hundred percent. And that's the thing that doesn't get automated. The tooling can show you what changed, but only a human can explain why it changed and what the implications are. If Daniel wants his versioned schema documentation to actually be useful six months later, he needs to write those summaries.
Corn
The full workflow Daniel is describing — compact migrations into versions, generate diagrams and docs, track big-picture evolution — is achievable. It's just not fully automated. The squashing part is automatable. The diagram generation is semi-automatable. The semantic diff and the rationale documentation are manual. And that's probably fine for most teams.
Herman
I think it's better than fine. The manual part — writing the summary of what changed and why — is actually the most valuable part of the whole exercise. It forces you to articulate the design decisions. That's useful for the person writing it, not just for future readers.
Corn
It creates a record that's useful for onboarding, for architectural reviews, for understanding technical debt. A folder full of migration files doesn't do any of that.
Herman
Let me add one more concrete recommendation from the Bytebase best practices that I think ties this together: track every change via audit logs. When you squash, the squash itself should be a logged event. You should be able to answer the question "when did we move from individual migrations to a baseline at version two-point-oh, and who approved it?" If you can't answer that, your squashing process isn't mature enough.
Corn
That's a good litmus test. If the squash feels like a secret you're hoping nobody notices, you're doing it wrong. If it's a documented, reviewed, communicated change, you're doing it right.
Herman
Now: Hilbert's daily fun fact.
Corn
The collective noun for a group of porcupines is a prickle.
Herman
If I'm a developer who wants to adopt this approach, what's my Monday morning action plan? I think step one is to figure out where your squash boundary should be. Look at your migration history. Find a point where the schema reached a stable, meaningful state — a major release, a significant refactor, whatever makes sense for your project. Everything before that point is a candidate for squashing.
Corn
Step two: verify that none of those old migrations are still being applied to production databases. If you have a staging environment that gets rebuilt from scratch regularly, check whether it replays all migrations. If it does, you need to update that process to use the new baseline.
Herman
Step three: actually perform the squash on a branch. Spin up a clean database, run all migrations, dump the schema, create the baseline file, update the history table, archive the old migrations. Test it by building a fresh database from the baseline plus remaining migrations.
Corn
Step four: generate your documentation artifacts. The ER diagram, the schema dump, the changelog summary. Commit all of it. Tag the commit with the version number.
Herman
Step five: get it reviewed. This is not a solo operation. Someone else needs to verify that the squashed schema matches what they expect, that no migrations were accidentally included or excluded, and that the documentation is accurate.
Corn
Step six: communicate. Tell the team what you did, where the archive is, what the new workflow looks like for writing migrations going forward. If you're using a semantic versioning scheme, make sure everyone understands what constitutes a major versus minor versus patch change to the schema.
Herman
One thing I want to emphasize about the archive: don't just leave the old migration files in a folder called "archive" in the same repository. That defeats the purpose — your migration runner might still pick them up, or someone might accidentally modify them. Move them to a completely separate location, or better yet, rely on version control history. Tag the commit before the squash, and if you ever need the old migrations, you can check out that tag.
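
One way to archive-by-tag rather than delete, assuming a Rails-style layout; the tag name and file patterns are arbitrary:

```bash
git tag pre-squash-v2.0            # everything stays recoverable from this tag
git rm db/migrate/2019*.rb         # drop the squashed migrations from the tree
git add db/migrate/000000000001_squashed_baseline.rb
git commit -m "Squash pre-v2.0 migrations into a single baseline"

# If the old files are ever needed again:
git checkout pre-squash-v2.0 -- db/migrate/
```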
Corn
That's cleaner. The old migrations live in git history, not in the current working tree. They're accessible but not in the way.
Herman
And this is really just applying the same discipline to database schemas that we already apply to code. You don't keep every deleted function in your source tree just in case. You rely on version control.
Corn
I think the bigger takeaway here is that Daniel's instinct is correct, and the industry is slowly catching up to it. The idea that you should be able to see the big-picture evolution of your database schema without wading through sixty migration files — that's not a niche desire. It's a basic expectation that the tooling should support.
Herman
The tooling is getting there. GitLab's automated squashing is a real thing in production. Flyway's baseline migrations are a mature feature. The Testcontainers approach works for any stack. The pieces exist. What's missing is the documentation layer — the automated generation of human-readable schema changelogs — and the industry-wide adoption of the practice as a default rather than an exception.
Corn
The interesting question to me is whether we'll see a new generation of migration tools that treat the versioned snapshot as the primary artifact and the linear migration as the implementation detail, rather than the other way around. That would be a real paradigm shift.
Herman
I think we might, but I also think the data migration problem puts a hard limit on how far you can push the snapshot model. At the end of the day, transforming a live database from one schema to another requires knowing the steps. A snapshot of the destination doesn't tell you how to get there from where you are.
Corn
Unless you pair the snapshot with a declarative migration system — you describe the desired state and the tool figures out the steps. That's essentially what Terraform does for infrastructure. There's no reason the same approach couldn't work for databases, at least for schema changes that don't involve complex data transformations.
Herman
There are tools that do this — Atlas, Skeema, a few others. They compare the desired state to the current state and generate the migration automatically. It's a different philosophy. You don't write migrations at all. You write the target schema, and the tool generates the migration.
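
A rough sketch of that declarative flow using Atlas; the connection URLs and file name are illustrative. You edit the desired-state file, and the tool compares it to the live database and plans the migration:

```bash
atlas schema apply \
  --url "postgres://postgres:secret@localhost:5432/mydb?sslmode=disable" \
  --to "file://schema.sql" \
  --dev-url "docker://postgres/16/dev"
```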
Corn
That model actually fits Daniel's versioning approach beautifully. Each version is a target schema. The tool generates the migration from the previous version. The documentation is the schema itself, plus whatever diagrams you generate from it.
Herman
The trade-off is that you lose fine-grained control over the migration. For complex data transformations, you still need to write custom migration logic. But for structural changes — adding columns, creating tables, adding indexes — the declarative approach works well.
Corn
The landscape is: traditional linear migrations with manual squashing for the old stuff, declarative schema management for teams that want to skip the migration-writing step entirely, and a middle ground where you use both — declarative for structural changes, imperative migrations for data transformations.
Herman
Regardless of which approach you pick, the documentation practice is the same. Version your schema. Generate diagrams at version boundaries. Write human-readable summaries of what changed and why. Archive or tag the artifacts so they're discoverable later.
Corn
Daniel, I think the short answer to your question is: yes, you can absolutely do this, and you should. Squash your old migrations into versioned baselines. Generate documentation at each version boundary. Archive the original files in version control history rather than in your working tree. The mechanics are well understood and safe if you follow the cardinal rule — never squash migrations that are still being applied to live databases.
Herman
The slightly longer answer is: the tooling for the documentation side is still immature, so expect to do some manual work there. But that manual work — writing summaries, explaining design decisions — is valuable. It's not busywork. It's the difference between a schema history that's technically accurate and one that's actually useful.
Corn
One forward-looking thought: I suspect that in a few years, the idea of keeping sixty individual migration files in your codebase will seem as quaint as keeping every compiled object file in your source tree. The tools will catch up, the practices will standardize, and versioned schema snapshots with generated documentation will be the default. Daniel's just ahead of the curve.
Herman
Thanks to our producer Hilbert Flumingtop for making this episode happen. This has been My Weird Prompts, the human-AI podcast collaboration. Find us at myweirdprompts dot com or wherever you get your podcasts.
Corn
If you enjoyed this, leave us a review — it helps other people find the show. We'll be back soon.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.