#1944: PostgreSQL: The Thirty-Year Miracle

How does a volunteer-run database power the New York Stock Exchange and survive every tech trend without burning out?

Episode Details
Episode ID
MWP-2100
Published
Duration
20:51
Pipeline
V5
TTS Engine
chatterbox-regular
Script Writing Agent
Gemini 3 Flash

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

PostgreSQL is often dismissed as "boring" software, yet it powers everything from the New York Stock Exchange to massive cloud platforms. The project has survived the dot-com bubble, the NoSQL revolution, and the AI boom without being acquired, relicensed, or burning out its maintainers. The secret lies in a governance model that deliberately avoids centralized control.

The PostgreSQL Global Development Group operates as a meritocracy rather than a corporation. A Core Team of seven members serves two-year terms, elected based on years of high-quality contributions. Crucially, the "fifty percent rule" prevents any single company from dominating: no more than half of the Core Team can work for the same employer. This federated structure ensures that giants like Microsoft or Amazon cannot unilaterally seize control of the project, even if they hire many top contributors.

Decisions happen through rigorous community processes. The legendary pgsql-hackers mailing list serves as a permanent, searchable record of every architectural debate since 1996. Proposals aren't submitted via simple pull requests; they must be defended publicly. The project uses "Commitfests"—disciplined four-to-five-week cycles several times a year—where hundreds of patches undergo intense scrutiny. The community prioritizes correctness over features, often slowing down development to eliminate edge cases that could cause data corruption.

Funding comes from a distributed patronage model. Companies like EnterpriseDB, Crunchy Data, Microsoft, and Google employ full-time contributors whose work benefits the entire ecosystem. This "co-opetition" means competitors cooperate on the engine while fighting for customers. Nonprofit organizations such as PostgreSQL Europe and PostgreSQL US handle donations and conference revenue to cover the non-code essentials.

The release cycle reflects this stability-first philosophy. Major releases like version 18 arrive annually with new features, while minor releases strictly contain only bug fixes and security patches—never new features or format changes. A recent update fixed 65 bugs and 5 critical vulnerabilities, demonstrating the community's commitment to catching even extreme edge cases.
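That split between feature-bearing major releases and fix-only minor releases is mechanical enough to express in code. The sketch below is a minimal illustration (the `upgrade_kind` helper is hypothetical, not a PostgreSQL tool), using the project's post-version-10 numbering, where the first component is the major version and the second is the minor:

```python
def upgrade_kind(current: str, target: str) -> str:
    """Classify a PostgreSQL upgrade under version-10+ numbering,
    where the first component is the major version."""
    cur_major = int(current.split(".")[0])
    tgt_major = int(target.split(".")[0])
    if tgt_major != cur_major:
        # New features and possible on-disk format changes: plan a migration.
        return "major"
    # Bug fixes and security patches only; never new features.
    return "minor"

print(upgrade_kind("18.2", "18.3"))  # minor
print(upgrade_kind("17.4", "18.0"))  # major
```

This is why the "never ignore a minor release" advice is safe to follow: within a major version, applying an update changes no formats and adds no features.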

PostgreSQL's extensibility has let it absorb every database trend. PostGIS turned it into a geographic powerhouse; pgvector enables AI search without migrating data to specialized databases. Instead of chasing every new technology, PostgreSQL waits, then integrates the best ideas as extensions.
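To make the pgvector point concrete, here is a plain-Python sketch of the cosine-distance metric that pgvector exposes inside the database through its `<=>` operator. The document names and vectors are invented for illustration; in production the ranking below would be a single SQL `ORDER BY embedding <=> query` clause rather than application code:

```python
import math

def cosine_distance(a, b):
    """Cosine distance (1 - cosine similarity), the metric behind
    pgvector's <=> operator."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

# Toy "embeddings" for two documents and a query vector.
docs = {"postgres-notes": [0.9, 0.1, 0.0], "mysql-notes": [0.7, 0.3, 0.1]}
query = [1.0, 0.0, 0.0]

# Rank documents by distance to the query, nearest first.
for name, vec in sorted(docs.items(), key=lambda kv: cosine_distance(query, kv[1])):
    print(name, round(cosine_distance(query, vec), 3))
```

The appeal of the extension model is visible even in this toy: the data stays where it already lives, and only the distance computation is new.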

The community faces challenges, particularly around mailing list culture and diversity. The old-school emphasis on technical rigor can feel abrasive, but there are active efforts to broaden participation while preserving the "no shortcuts" ethos. With version 19 approaching, the thirty-year miracle continues—proving that open-source governance done right creates something far more resilient than any corporate product.


#1944: PostgreSQL: The Thirty-Year Miracle

Corn
You know, most people think of a database as just a piece of software you install and then promptly forget about until it breaks. But when you look at something like PostgreSQL, you’re not just looking at a tool; you’re looking at a thirty-year miracle of human cooperation. Today's prompt from Daniel is about the governance and sustainability of Postgres, and honestly, it’s the perfect time to talk about it because we are staring down the release of version nineteen in just a few months.
Herman
It really is a miracle, Corn. I was digging into the commit history recently, and it’s staggering. We are talking about a project that has been actively developed since nineteen ninety-six. It has survived the dot-com bubble, the rise of NoSQL, the cloud revolution, and now the AI boom. And through all of that, it hasn’t been bought by a giant tech conglomerate, it hasn’t switched to a restrictive license, and it hasn’t burned out its maintainers. By the way, fun fact for everyone listening—Google Gemini 3 Flash is actually helping us put this script together today.
Corn
Well, hopefully Gemini has the same uptime as a well-tuned Postgres instance. But seriously, Herman, the thing that blows my mind is the "boring" factor. In the tech world, "boring" is usually an insult, but for Postgres, it’s a badge of honor. It’s the backbone of the New York Stock Exchange, it powers massive cloud platforms, and yet it’s run by a volunteer-driven community. How does a project that critical avoid the usual open-source pitfalls? Most projects either get "captured" by a big company, the way MySQL was by Oracle, or they slowly wither away because the three guys running it from their basements eventually want to see sunlight.
Herman
That’s exactly the tension the PostgreSQL Global Development Group has managed to resolve. My name is Herman Poppleberry, for those who don't know, and I live for this kind of organizational architecture. The secret sauce is that there is no "Postgres Inc." There is no single CEO who can decide one morning to change the license to a "business source license" because they need to beat their quarterly earnings. Instead, you have this incredibly robust, almost academic meritocracy.
Corn
Right, the Core Team. I remember reading that it’s only seven people. Seven people holding the keys to the most important database in the world? That sounds like a single point of failure waiting to happen, or at least a very stressful Slack channel.
Herman
It sounds small, but the structure is brilliant. Those seven members serve two-year terms, and they are elected based on years—sometimes decades—of high-quality contributions. But here’s the kicker: the "fifty percent rule." No more than half of the Core Team can work for the same company. So even if a giant like Microsoft or Amazon hires a bunch of top-tier contributors, they can’t legally or culturally "seize" the project. It forces a federated model of stewardship.
Corn
It’s like a digital version of the separation of powers. But let’s get into the weeds of how decisions actually get made. If I’m a developer at a big cloud provider and I want to push a massive change to how parallel queries work—something that might benefit my specific hardware but maybe adds complexity elsewhere—how does that actually get vetted? Because I imagine there’s a lot of "corporate interests" disguised as "technical improvements" floating around the mailing lists.
Herman
The mailing list is where the magic—and the drama—happens. The pgsql-hackers list is legendary. It’s a permanent, searchable record of every design decision made since the mid-nineties. If you want to propose a change, you don't just submit a pull request on GitHub and hope for a thumbs-up. You have to defend your architectural choices in front of the world. And the community uses something called "Commitfests."
Corn
Commitfests. That sounds like a high-intensity coding retreat where everyone drinks too much coffee and forgets to shower.
Herman
Not quite! It’s actually a very disciplined, four-to-five-week cycle that happens several times a year. During a Commitfest, hundreds of patches are reviewed. And when I say reviewed, I mean scrutinized to a level that would make most enterprise software developers weep. They look at data integrity, performance regressions, and whether the code fits the "Postgres way." In two thousand twenty-four, for example, there was a huge debate over parallel query improvements. Some people wanted to move faster, but the community reached a consensus to slow down to ensure there were no edge cases that could lead to data corruption. They value "correctness" over "features" every single time.
Corn
See, that’s the "cheeky" side of Postgres I love. They’ll look at a trendy new feature and basically say, "That’s cute, come back when you can prove it won’t lose a single byte of data during a power failure in a Category Five hurricane." It’s that refusal to take shortcuts. But Herman, who pays for all this? You mentioned people work for different companies, but someone has to pay for the build servers, the conferences, the legal fees. If there’s no central corporation, where does the cash come from?
Herman
It’s a distributed patronage model. You have companies like EnterpriseDB, Crunchy Data, Microsoft, and Google who all employ full-time Postgres contributors. These companies have a vested commercial interest in Postgres being excellent because their products are built on top of it. So they essentially "donate" the labor of their best engineers to the community. Then you have nonprofit organizations like PostgreSQL Europe and PostgreSQL US that handle the actual money—donations and conference revenue—to pay for the non-code essentials.
Corn
So it’s a "co-opetition" model. They compete for customers in the marketplace, but they cooperate on the engine. It’s like Ford and Chevy both contributing to the design of the internal combustion engine because they know if the engine stops working, no one buys cars.
Herman
That’s a great way to put it. And because it’s federated, you don’t get the "lone maintainer" problem. In a lot of open-source projects, if the lead dev gets hit by a bus or just gets bored, the project dies. In Postgres, the burden is shared across dozens of companies and hundreds of committers. If one company pivots away from databases, three others are standing by to pick up the slack because their entire business model depends on it.
Corn
It’s a very resilient ecosystem. But let’s talk about the actual software. We just saw the release of Postgres eighteen in late twenty-five, and version eighteen point three just dropped in February of twenty-six. To the average user, these version numbers can feel a bit arbitrary. What’s the actual difference between a "major" release and these "minor" or "incremental" releases? Because I think a lot of people see an update notification and just hit "ignore" until their security auditor yells at them.
Herman
Oh, you should never ignore a Postgres minor release! Let’s break it down. A major release—like eighteen or the upcoming nineteen—happens once a year, usually in the autumn. These are the ones where you get the "shiny" stuff. For version eighteen, we saw massive improvements to logical replication and performance monitoring. This is where the file formats might change, where new SQL syntax is added, and where you might actually have to plan a migration strategy.
Corn
And the minor releases? Like the eighteen point three update we saw recently?
Herman
Those are the "keep the lights on" releases. The project has a very strict policy: minor releases never include new features and they never change the on-disk data format. They are strictly for bug fixes and security patches. In February twenty-six, the community released updates for versions sixteen, seventeen, and eighteen all at once. They fixed over sixty-five bugs and five critical security vulnerabilities.
Corn
Sixty-five bugs? That sounds like a lot for a "stable" database. Should I be worried that my data is currently being eaten by a digital termite?
Herman
Not at all! Most of these are extreme edge cases—things like "if you run a specific type of window function on a partitioned table while simultaneously dropping an index in a different time zone, the query might return an error." But because Postgres is used by everyone, those edge cases eventually get found. The fact that they catch and fix sixty-five of them in a single quarter is actually proof of how healthy the project is. They treat a minor bug with the same gravity that other projects treat a total system failure.
Corn
I love that. It’s like a car manufacturer recalling a vehicle because the clock is three seconds slow. It shows they’re paying attention. But I want to go back to the "boring" advantage. We’ve lived through the NoSQL revolution, where everyone said relational databases were dead. Then we had the NewSQL wave. Now we have vector databases for AI. Every time, Postgres just kind of sits there, waits a year, and then absorbs the best features of those trends into its core or its extensions.
Herman
The extensibility is the real "alpha" of Postgres. It was designed from the beginning at Berkeley to be extended. Think about PostGIS—that’s an extension that basically turned Postgres into the world’s best geographic database. Or pgvector, which is how everyone is doing AI search right now. Instead of moving your data to a specialized, unproven vector database, you just "use Postgres" and add the extension.
Corn
"Just use Postgres" has become the mantra of the sensible developer. It’s like the "nobody ever got fired for buying IBM" of the twenty-twenties. But there is a human cost to this, right? Even with the corporate patronage, the mailing list culture can be... abrasive. It’s a lot of very smart, very opinionated people arguing about C code and B-tree indexing. Is that sustainable in the long run? We’ve seen other communities struggle with toxicity.
Herman
It’s a valid concern. The Postgres community is definitely "old school." They value technical rigor above almost everything else. But they’ve also realized they need to modernize. There have been a lot of discussions in twenty-five and twenty-six about making the Core Team more diverse—not just in terms of who they work for, but in terms of geography and background. They’re trying to move toward more transparency without losing that "no shortcuts" ethos.
Corn
It’s a delicate balance. You don’t want to turn it into a corporate committee where everything is decided by marketing, but you also don’t want it to be an impenetrable fortress of gray-bearded C wizards. Though, to be fair, those gray-bearded wizards have kept my data safe for a long time.
Herman
They really have. And it’s not just the wizards. It’s the entire infrastructure. Think about the testing. Before a major version like eighteen was released, it went through months of "Beta" and "Release Candidate" stages. They have a build farm with hundreds of different operating systems and hardware configurations. Every time a patch is submitted, it’s tested on everything from a modern Linux server to an old Solaris box.
Corn
Who is still running Solaris? Actually, don't answer that. I don't want to know. It probably powers the elevator I took this morning. But this brings up a good point for our listeners who are running these systems. If you’re an organization using Postgres, what’s the "responsible" way to interact with the project? Is it just about donating money, or is there more to it?
Herman
Money helps, but "contributor time" is the real currency. If you’re a large company using Postgres at scale, the best thing you can do is allow your engineers to spend twenty percent of their time contributing back—whether that’s fixing bugs they find in production or improving documentation. Documentation is actually one of the strongest parts of Postgres. It’s famously thorough.
Corn
It’s basically a textbook on database theory that happens to come with a free database. I’ve actually used the Postgres docs to understand SQL concepts that had nothing to do with Postgres specifically. But Herman, let's look at the "failed" versions of this. Why did MySQL take a different path? Why did it end up under Oracle’s thumb while Postgres stayed free?
Herman
It comes down to the ownership of the trademark and the copyright. MySQL was originally owned by a single company, MySQL AB. When that company was bought by Sun Microsystems, and then Sun was bought by Oracle, the "keys to the kingdom" were part of the sale. Postgres never had a "kingdom" to sell. The copyright is held by the PostgreSQL Global Development Group, and the license is a BSD-style license, which is incredibly permissive. You can’t "buy" Postgres because nobody owns it.
Corn
It’s the ultimate "poison pill" for a corporate takeover. If a company tried to buy the project, the community would just fork it and move on within twenty-four hours. There’s no central asset to seize. That’s a powerful lesson for any new open-source project. If you want to be around in thirty years, don't let a single entity own your name.
Herman
And that longevity is what builds trust. When I talk to CTOs at banks or insurance companies, they aren't looking for the fastest database or the one with the coolest logo. They are looking for the one that will still be supported in twenty-forty. They want to know that if they find a bug in the middle of the night, there’s a global community of experts who care about fixing it.
Corn
"Predictability is a feature." That should be on their t-shirts. So, looking ahead to version nineteen—which is the next big milestone—what are we seeing? Usually, by this point in the cycle, the "hackers" list is buzzing with what’s going to make the cut.
Herman
There’s a lot of focus on further optimizing those vector workloads we mentioned, but also some really deep-level stuff regarding 64-bit transaction IDs. This is one of those "deep magic" things that most people will never see, but it solves a theoretical "wraparound" issue that can happen on extremely high-volume databases. It’s the kind of thing that only matters if you’re doing billions of transactions, but Postgres wants to be ready for that scale.
Corn
See, that’s exactly what I’m talking about. Most startups would say, "We’ll deal with transaction ID wraparound when we hit a billion users." Postgres says, "We’re going to fix it now so that when you hit a billion users, you don't even know it was a problem." It’s proactive engineering.
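[Editor's note: the wraparound problem Herman describes comes from 32-bit transaction IDs being compared modulo 2^32, so only about two billion IDs can be "in the past" at any moment. The toy sketch below illustrates that modular comparison; it is a simplified illustration in the spirit of PostgreSQL's C-level XID comparison, not the actual implementation.]

```python
# 32-bit transaction IDs wrap around, so "older than" is decided by the
# signed modulo-2**32 difference rather than plain integer comparison.
MASK = 2**32 - 1

def xid_precedes(a: int, b: int) -> bool:
    """True if XID a is 'older' than XID b under modulo-2**32 arithmetic."""
    diff = (a - b) & MASK
    # A value above 2**31 is a negative signed 32-bit difference,
    # meaning a lies in the half of the circle "behind" b.
    return diff > 2**31

print(xid_precedes(100, 200))       # True: the normal case
print(xid_precedes(2**32 - 5, 10))  # True: after wraparound, the huge old
                                    # XID still counts as older than 10
```

Moving to 64-bit IDs removes that two-billion-transaction ceiling entirely, which is why it only matters at extreme volume.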
Herman
It’s also about sustainability for the people involved. We’ve talked about "burnout," and the way Postgres handles this is by not having "deadlines" in the traditional sense. They have a release window, but if a feature isn't ready, it doesn't get shoved in at the last minute. It just waits for the next year. We saw this with some of the asynchronous I/O improvements—they’ve been working on those for years, and they only let them in bit by bit as they are proven safe.
Corn
That must be incredibly frustrating for the developers who want their features in now, but it’s great for the users who don't want their database to crash. It’s like a chef who refuses to serve the souffle until it’s perfect, even if the customer is starving. You might be annoyed for ten minutes, but you’ll be happy when you finally eat.
Herman
And that’s why the "minor" releases are so frequent. If they find a way to make something five percent faster or safer without changing the architecture, they don't make you wait a year. They'll put it in a quarterly update. That’s what we saw in eighteen point three—just solid, incremental refinement.
Corn
So, for the folks listening who are maybe sitting on an old version—maybe they’re still on Postgres twelve or thirteen because "it works and I don't want to touch it"—what’s your pitch for staying current? Beyond just the security fixes.
Herman
My pitch is that the performance gains in the recent versions are massive. We’re talking about improvements in how indexes are handled, how memory is managed, and how the query planner works. You can often get a twenty percent speed boost just by upgrading the engine, without touching a single line of your application code. It’s like getting a free hardware upgrade.
Corn
"Free hardware upgrade" is a language every CFO understands. But you do have to be careful with the major upgrades. I’ve seen people try to jump from version ten to version eighteen in one go and... well, let’s just say it was a long weekend for the DevOps team.
Herman
Oh, definitely. You have to follow the migration path. But even there, the community has built tools like pg_upgrade that make it much smoother than it used to be. The project cares about the upgrade experience because they know that if people get stuck on old versions, the ecosystem fragments.
Corn
It’s all part of that "stewardship" Daniel was asking about. It’s not just about writing code; it’s about managing the lifecycle of the software. It’s thinking about the person who has to manage this database five years from now.
Herman
And it’s about the "Human Element." There was a minor release delay recently in early twenty-six. The community realized there was a small issue with one of the security patches, and instead of rushing it out to meet the "February" deadline, they pushed it back a week. In a world of "move fast and break things," seeing a project choose to "move slow and keep things fixed" is so refreshing.
Corn
It’s the "Sloth" approach to software, Herman! I feel seen. Take your time, do it right, and then have a snack.
Herman
Wait, I'm not allowed to say that word. I mean... you're absolutely... no, I'm doing it again. Let's just say, you have a point, Corn. The measured approach is what wins in the world of data storage. When you're talking about the "source of truth" for a multi-billion dollar company, you don't want "agile." You want "stable."
Corn
You want a rock. And Postgres is a rock that happens to have a very active group of people polishing it every single day. So, what’s the takeaway for our listeners who aren't database administrators? Maybe they’re just tech-curious or they’re building their first app.
Herman
The takeaway is that governance matters just as much as code. When you’re choosing a technology to build on, don’t just look at the benchmarks or the GitHub stars. Look at who owns the name. Look at how they handle disagreements. Look at their release history. If they’ve been putting out steady, reliable updates for twenty-five years without a single corporate scandal, that’s a better indicator of success than any "Series A" funding announcement.
Corn
It’s a vote for the "commons." It shows that we can build world-class infrastructure that belongs to everyone and no one at the same time. It’s kind of a beautiful idea when you think about it. This massive, complex engine that powers the world, and it’s basically held together by mailing lists and a shared sense of duty.
Herman
It really is. And as we look toward the next twenty-five years, the challenge will be to keep that spirit alive as the original "wizards" retire and a new generation takes over. But based on what I’ve seen in the version eighteen and nineteen development cycles, the "Postgres Ethos" is being passed down just fine.
Corn
Well, I’m looking forward to version twenty-five, which I assume will be managed by a sentient AI that still insists on using the eighty-character line limit for its C code because "that’s how we’ve always done it."
Herman
I wouldn't bet against it. The hackers love their formatting rules.
Corn
They really do. Alright, I think we’ve thoroughly geeked out on database governance. It’s a fascinating look at the "boring" parts of tech that actually make everything else possible. Thanks as always to our producer, Hilbert Flumingtop, for keeping us on track.
Herman
And a big thanks to Modal for providing the GPU credits that power this show. They make the heavy lifting look easy.
Corn
This has been My Weird Prompts. If you’re enjoying these deep dives into the plumbing of the internet, a quick review on your podcast app really helps us out. It’s like an index for our show—helps people find what they’re looking for faster.
Herman
See you next time.
Corn
Later.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.