Alright, Herman. Today's prompt from Daniel is a really good one for us, because it gets at a fundamental tension in how we organize code. He's asking about monorepos versus multi-repos, and specifically, if you're someone who believes deeply in modularity—separate front-end, back-end, planning store—do monorepos actually support that kind of logical separation, or is it just one big, messy folder?
This is a fantastic question. And I think Daniel's hitting on a misconception that's really persistent. The short answer is yes, modern monorepos absolutely support what you could call logical federation. They're not just a single folder blob. But the way they achieve that modularity is fundamentally different, and I'd argue more robust, than the physical separation you get with multiple repositories.
So you're saying the principle of separation of concerns is actually better served by putting everything in one place? That feels counterintuitive.
It does, until you look at the mechanisms. Let's break it down. When you have a traditional multi-repo setup—your front-end in one Git repository, your back-end in another, your shared planning store in a third—you have physical separation. The boundaries are the repository walls. The problem is, those walls create what I call 'integration fog.' You can't easily see the connections between your modules. Dependency management becomes a manual, error-prone process of coordinating version bumps across repos.
And that's the merge debt and configuration drift we've talked about before. You update an API contract in the back-end repo, and now you have to remember to go update the front-end repo to match. If you forget, things break silently.
Right, and that's the core issue. Now, a monorepo inverts this. Instead of physical boundaries, you have logical boundaries enforced by tooling. The most basic form is workspace configuration. If you use pnpm, Yarn, or even npm, you can define a workspace. That tells the package manager, "Hey, these directories are separate packages, but they live together." Your local packages get symlinked into node_modules, so cross-module imports resolve against the code sitting right next door, without publishing anything.
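For concreteness, here's roughly what that looks like with pnpm. This is a minimal sketch; the directory names client, server, and store are just placeholders matching Daniel's setup.

```yaml
# pnpm-workspace.yaml at the repository root (sketch)
packages:
  - "client"
  - "server"
  - "store"
```

Then, inside client/package.json, you'd declare a dependency like "store": "workspace:*", and pnpm links the local store directory instead of fetching anything from the registry. One pnpm install at the root wires up all three modules.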
So the file system still shows them as separate directories—client, server, store—but the tooling understands their relationship. It's not one blob; it's a federation of modules with a treaty organization managing the borders.
It gets more sophisticated. Tools like Nx or Turborepo take this much further. They don't just manage dependencies; they build an explicit project graph. Nx, for example, gives each module a project.json declaring its build targets and tags, and it infers the dependency graph from your actual imports. You can then enforce boundaries with lint rules. You can say, "The front-end module is allowed to import from the shared store, but it is absolutely not allowed to import directly from the back-end database layer."
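As a sketch of how that looks with Nx's enforce-module-boundaries lint rule: you tag each project, then constrain which tags may depend on which. The tag names here are made up for illustration.

```jsonc
// store/project.json (excerpt): tag the shared module
{ "name": "store", "tags": ["scope:shared"] }

// .eslintrc.json (excerpt): codify the architecture as a build-breaking rule
{
  "rules": {
    "@nx/enforce-module-boundaries": ["error", {
      "depConstraints": [
        // The front-end may only import shared code, never server internals
        { "sourceTag": "scope:client", "onlyDependOnLibsWithTags": ["scope:shared"] },
        { "sourceTag": "scope:server", "onlyDependOnLibsWithTags": ["scope:shared", "scope:server"] }
      ]
    }]
  }
}
```

With this in place, an import from client into the server's database layer fails lint, and therefore CI, rather than quietly becoming architecture drift.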
So you're codifying your architectural principles into the tooling itself. With separate repos, that's just a convention you hope everyone follows. With a monorepo and Nx, it's a build-breaking rule.
And that's a massive advantage for modularity. The boundary is clearer and more enforceable. Now, there's an even deeper layer with build systems like Bazel, whose whole model is built around hermetic builds. It creates a sandbox for each build action, with every input strictly declared. In a monorepo, this means you can build and test your front-end in complete isolation from your back-end, even though they live in the same repository. The build system guarantees that a change in the back-end won't affect the front-end build unless you've explicitly declared that dependency.
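In Bazel terms, that isolation shows up as explicit dependency and visibility declarations in each module's BUILD file. This is an abbreviated sketch with illustrative target names, and the load statements for the rule definitions are omitted.

```starlark
# server/db/BUILD.bazel (sketch)
ts_project(
    name = "db",
    srcs = glob(["*.ts"]),
    # Only packages under //server may depend on the database layer:
    visibility = ["//server:__subpackages__"],
)

# client/BUILD.bazel (sketch)
ts_project(
    name = "client",
    srcs = glob(["src/**/*.ts"]),
    # Adding "//server/db" here would fail the build outright,
    # because the client sits outside db's declared visibility.
    deps = ["//store"],
)
```

The visibility attribute is the repository wall, rebuilt as a per-target declaration: the boundary travels with the code instead of with the repo.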
That's fascinating. So the physical proximity actually gives you more control, not less. You're trading the illusion of safety you get from separate folders for a verifiable, tool-enforced contract.
That's the trade. Think of it like a well-run apartment building versus a suburban street of detached houses. The houses seem separate, but if the city needs to fix a shared water main, they have to dig up every lawn, and coordination is a nightmare. The apartment building has separate, soundproofed units with their own thermostats, but a central, managed infrastructure for utilities and security. The monorepo is the apartment building—you get privacy and managed integration.
That's a helpful analogy. So the practical implications are huge. Let's talk about a real-world example. Vercel develops their CLI alongside a whole constellation of supporting packages in one monorepo, and Next.js itself is a monorepo of dozens of packages. They use strict module boundaries. One package doesn't randomly reach into another's internals; the tooling prevents it. But because everything lives in one repo, a cross-cutting change, say, updating a shared utility that several packages use, can land in a single atomic commit, and tests for every downstream module run automatically.
In a multi-repo world, that single change becomes a three-ring circus. You update the utility in its own repo, publish a new version, then bump that version in each consuming repo one by one, hoping nothing breaks in between.
And often, things do break in between, because the integration isn't atomic. This is where the second-order effects really kick in. Airbnb did a famous migration from a multi-repo setup to a monorepo in twenty twenty-four. Their reported result was a sixty percent reduction in CI build times. That's not just a convenience; it's a fundamental change in developer workflow.
How does putting everything in one place make builds faster? That seems backwards.
It's because of intelligent caching and task running. In a monorepo with Nx or Turborepo, the build system understands the project graph. If you only change code in the front-end module, it knows it doesn't need to re-build or re-test the back-end. It can skip entire swathes of work. In a multi-repo setup, your CI system is often blind to these relationships. It just runs whatever script is in that repo, even if the change was trivial. The monorepo gives you a global view that enables surgical precision. For instance, if you change a comment in a utility file that's only used by the back-end, the front-end CI pipeline won't even trigger. That's impossible to coordinate efficiently across separate repos.
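With Turborepo, for example, that graph-aware skipping is configured in a turbo.json at the root. This is a minimal hedged sketch using the task schema from recent Turborepo versions.

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "test": {
      "dependsOn": ["build"]
    }
  }
}
```

The "^build" entry means "build my workspace dependencies first," which is how the task runner learns the graph. After a front-end-only change, turbo run test replays cached results for the untouched back-end packages instead of rebuilding them.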
So for a solo developer like Daniel, who's juggling front-end, back-end, and a store—this isn't just a big company tool. This is a workflow advantage even for one person.
One hundred percent. In fact, I'd argue the advantage is proportionally larger for a solo developer or a small team. At a big company, you have dedicated platform teams to manage the multi-repo integration pain. As a solo dev, you're the platform team. Every minute you spend coordinating versions, running redundant builds, or debugging an integration issue that stemmed from a misaligned dependency is a minute you're not building features. A monorepo with basic workspace tooling eliminates that overhead almost entirely. You stop being a release engineer and get back to being a developer.
Okay, you're selling me. But let me play devil's advocate. What's the downside? There's got to be a catch. Is it the Git history? Do you lose the clean, per-module commit log?
That's a common concern, and it's mostly a red herring. You can absolutely filter Git history by path. You can run git log -- path/to/my-module and see only the commits that affected that module. The history is all there; it's just in one repository instead of scattered across three. The real trade-off is initial complexity and tooling investment. Setting up a robust monorepo with Nx or Bazel has a steeper learning curve than just creating three separate repos. But the tools have gotten dramatically better. Nx's project graph visualization, for instance, is excellent: run nx graph and you can literally see your modules and their dependency arrows in a web interface. It makes the logical boundaries tangible.
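To make the history point concrete, here's a self-contained sketch. It builds a throwaway repo in a temp directory, assuming only that git is on your PATH; the module and file names are placeholders.

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# Two modules, one commit each
mkdir -p client server
echo "ui" > client/app.js
git add client && git commit -qm "client: initial UI"
echo "api" > server/main.js
git add server && git commit -qm "server: initial API"

# Path filtering gives you a clean per-module log:
git log --oneline -- client    # shows only the "client: initial UI" commit
client_commits=$(git rev-list --count HEAD -- client)
server_commits=$(git rev-list --count HEAD -- server)
echo "client=$client_commits server=$server_commits"
```

Each module's history is fully recoverable by path, so nothing about a shared repository muddies the per-module commit log.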
So the cost is upfront learning, and the payoff is ongoing operational simplicity. That seems like a good trade. But what about the size? Doesn't a monorepo get huge and slow for Git operations?
Another fair point. A monorepo can get large. But modern Git has features like sparse checkout and partial clone that mitigate this. You can configure your local clone to only download the files for the modules you're actively working on. So, as a front-end developer, you don't need the entire history of the back-end database migrations on your laptop. The tooling around the monorepo, like Nx's affected commands, also ensures you're not doing unnecessary work. The trade-off is that you need to learn these Git features, whereas with multi-repos, each repo stays small by default. But again, you're trading a one-time learning cost for a permanent efficiency gain.
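Here's a hedged sketch of sparse checkout in action, again against a throwaway local repo. It assumes a reasonably recent git (roughly 2.25 or newer, for the sparse-checkout command); names are placeholders.

```shell
#!/bin/sh
set -e
src=$(mktemp -d)
work=$(mktemp -d)

# Build a tiny "monorepo" to clone from
cd "$src"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
mkdir -p client server
echo "ui" > client/app.js
echo "api" > server/main.js
git add . && git commit -qm "initial"

# Clone sparsely, then materialize only the client module on disk
git clone -q --sparse "$src" "$work/mono"
cd "$work/mono"
git sparse-checkout set client

ls    # only client/ is present; server/ stays in the object store
```

The server module's content still exists in the repository's object database, so switching focus later is just another sparse-checkout command, not a new clone.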
It's modularity by architecture, not by accident of file location. So for someone starting a new project today, with a front-end, back-end, and planning store, what's the practical takeaway? How do they get started without getting overwhelmed?
Start simple. Use pnpm workspaces. Create a root package.json, list your three directories in a pnpm-workspace.yaml, and let pnpm handle the local linking. You'll immediately get the benefit of a single install command and cross-module dependency resolution. Don't jump straight to Bazel. As your project grows and builds get slower, add Turborepo or Nx. Turborepo is particularly gentle; it's basically a supercharged task runner that understands your workspace structure and gives you caching and parallel execution with minimal configuration.
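A hedged sketch of that starting point: alongside a pnpm-workspace.yaml listing client, server, and store, the root package.json can stay this small. Names and scripts here are illustrative.

```jsonc
// package.json at the repository root (sketch)
{
  "name": "my-app-root",
  "private": true,
  "scripts": {
    "build": "pnpm -r build",   // run each package's own build script, recursively
    "test": "pnpm -r test"
  }
}
```

A single pnpm install at the root installs and links everything, and pnpm -r fans a script out across every workspace package, which is plenty of orchestration until builds get slow enough to justify Turborepo or Nx.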
So the barrier to entry is much lower than people think. You don't need a thousand-engineer Google setup to benefit.
Not at all. The same principles that work for Google's massive monorepo scale down beautifully. The tooling has been democratized. For Daniel, or anyone listening who's in this situation, I'd say: prototype it. Take one of your existing multi-module projects, create a new monorepo, and move the code in. Spend an afternoon setting up pnpm workspaces. You'll feel the difference immediately when you can run a single command to install all dependencies, or make a change to a shared type definition and see it instantly available in both your front-end and back-end.
It's one of those things where you don't realize the cognitive load of the multi-repo dance until it's gone. Alright, let's bring it back to the future for a second. With AI-assisted coding tools becoming more prevalent, how does this play out? If I have an AI agent helping me code, does it care if my code is in one repo or three?
That's a brilliant question, and I think it's where this whole debate is heading. We've talked before about Agentic Repository Engineering—the idea that AI agents will work directly with codebases. A monorepo is a much simpler target for an agent. The agent has a single root to explore, a unified dependency graph to understand, and it can make cross-module changes in a single operation. In a multi-repo world, the agent either needs access to multiple repositories and the intelligence to coordinate changes across them, or you're stuck with agents that only understand one piece of the puzzle.
So the monorepo might become the default for AI-augmented development because it presents a coherent, unified context. The agent doesn't have to piece together the world from scattered fragments.
I think that's likely. The tooling we're building for human developers—explicit project graphs, hermetic builds, enforced boundaries—is exactly what an AI agent needs to operate effectively and safely. The monorepo isn't just a repository strategy; it's becoming the canonical representation of a project's architecture. And that's valuable for both carbon-based and silicon-based developers. Imagine an agent tasked with upgrading a major library version. In a monorepo, it can run a single codemod script across all affected modules, verify the build for the entire project graph in one go, and commit it as a single, reviewable change. That's a coherent task. In a multi-repo setup, that same upgrade becomes a series of disconnected, risky PRs across different repositories, each with its own review and merge cycle.
That's a powerful vision. It makes the monorepo not just a developer convenience, but a foundational piece for the next generation of software development tools.
Precisely. It's about creating a single source of truth for your project's structure, and that truth is machine-readable. Well, Daniel, I hope that answers your question. Monorepos are the furthest thing from a single folder blob. They're a disciplined, tool-enforced federation of modules.
Thanks as always to our producer Hilbert Flumingtop, and big thanks to Modal for providing the GPU credits that power this show. If you're enjoying the podcast, a quick review on your podcast app helps us reach new listeners. This has been My Weird Prompts. I'm Corn.
And I'm Herman Poppleberry. See you next time.