Eighteen minutes. That is less time than it takes me to decide what to watch on Netflix, and significantly less time than it takes for me to actually get off the couch and find the remote.
It is staggering, Corn. I am Herman Poppleberry, and I have been staring at these figures from the Mandiant M-Trends twenty twenty-six report all morning. We are talking about breakout time, the window between an attacker first gaining access to a system and the moment they start moving laterally to find the real prizes. It has dropped from forty-eight minutes just two years ago to a median of eighteen minutes today.
Today’s prompt from Daniel is about the doctrine of defense in depth, and honestly, if the window of opportunity is only eighteen minutes, I am starting to think the depth of our defense is about as thick as a coat of paint. Daniel wants us to look at where this strategy came from, how it is evolving with agentic artificial intelligence, and whether the old ways of just backing things up are enough anymore.
It is the perfect time to talk about this because we just saw Kevin Mandia and Alex Stamos at the RSA Conference earlier today, and their warning was pretty dire. They are saying that artificial intelligence is now discovering vulnerabilities faster than human teams can possibly patch them. The static perimeter, that old idea of the medieval castle with a single big wall, is essentially dead.
The castle analogy always felt a bit romantic to me. You have the moat, the drawbridge, the boiling oil. It sounds great until someone flies over the wall in a helicopter. Or in this case, until an AI agent finds a hole in the basement window that no one knew existed. But this whole idea of layering defenses is not actually a tech invention, is it?
Not at all. It actually goes back to the Roman Empire around two hundred AD. Before that, the Romans relied on forward defense, basically trying to stop every single barbarian at the literal edge of the border. But as the empire grew and the threats became more mobile, they realized they could not hold a hard line forever. So they shifted to a layered approach. They built fortified towers and signal networks deeper into their territory. The goal was not to stop the invasion at the gate, but to exhaust the attackers as they penetrated deeper, slowing them down until the main legions could arrive.
So it was a strategy of friction. You make it so annoying and costly to move forward that the attacker eventually gives up or gets caught. Then the National Security Agency takes that in the nineteen nineties and says, hey, let's do that with servers.
That is exactly the lineage. The NSA adapted it into what we call defense in depth today. But for a long time, the industry got lazy. We went back to that castle model because it was easier to sell. You buy one giant firewall, you feel safe, and you call it a day. But as we saw on March sixth, when the White House released President Trump’s Cyber Strategy for America, the government is officially moving away from that checkbox compliance mentality. The new five-page strategy is all about measurable resilience and something they call defending forward.
Defending forward. That sounds like a polite way of saying we are going to hack them before they hack us.
It is a shift toward offensive capabilities being part of the defensive conversation. But more importantly, it acknowledges that you have to assume the breach. You have to assume the attacker is already in the eighteen-minute window and build your systems to be resilient enough to survive it.
Which brings us to Chintan Gurjar. He just released this Defense-in-Depth twenty twenty-six framework, and it is a beast. He is not talking about three or four layers anymore. He has mapped out ten layers with one hundred and one subdivisions. Herman, who has time for a hundred and one subdivisions? My to-do list has three things on it and I am already overwhelmed.
It sounds like a lot, but Gurjar’s point is that the threat landscape has changed because of agentic AI. We are not just fighting human hackers who need to sleep and eat. We are fighting autonomous agents that can probe thousands of vectors a second. Layering is the only way to create enough speed bumps to break the automation.
One of the things that caught my eye in that framework was how it handles code. There was a study from AppSec Santa earlier this month that found twenty-five point one percent of AI-generated code samples contain at least one security vulnerability. We are basically using AI to write code that has a one-in-four chance of shipping with a hole in it.
And that is the paradox. We are using AI to speed up development, but we are also accidentally automating the creation of vulnerabilities. If a developer is using a coding assistant and does not have a layered defense to catch those bugs, they are just handing the eighteen-minute breakout time to the attacker on a silver platter.
So if the perimeter is dead and the code is buggy, what are the layers that actually matter in twenty twenty-six? Because I feel like we have been told for years that if we just have a good backup, we are fine. But Daniel’s prompt mentions that the debate has shifted from backups being insurance to resilience being a requirement.
The backup situation has become a total nightmare. SentinelOne just released data showing that fifty percent of cyberattacks this year are bypassing backups entirely. They are not even bothering to encrypt your data anymore. They just steal it and threaten to leak it. If you have a backup of the data they stole, great, you can restore your system, but they still have the data. The extortion is the attack, not the encryption.
That is like someone stealing your diary and you saying, it is okay, I have a photocopy. It does not matter that you have a photocopy. They are still going to read your secrets to the whole school.
That is the backup paradox. And even when they do encrypt, modern attackers are now systematically targeting the backup infrastructure first. They spend their eighteen-minute breakout time looking for your backup server, wiping it, and then hitting the production data. GitProtect found that fifty-seven percent of organizations still rely on a single security layer for their cloud backups. In a world where forty-five percent of breaches are happening in the cloud, that is just negligence.
So the old three-two-one rule is toast? Three copies, two different media, one offsite?
It is the baseline, but the new standard people are pushing is the three-two-one-one-zero rule. You still have three copies on two media with one offsite, but you add one copy that is immutable, meaning it literally cannot be changed or deleted for a set period, and zero recovery errors. That zero part is the kicker. You have to have automated testing that proves the backup actually works. Most people do not find out their backups are corrupted until they are already in the middle of a crisis.
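To make that concrete, here is a minimal sketch of what an automated three-two-one-one-zero audit could look like in code. The inventory format, field names, and the idea of a per-copy restore-test flag are all hypothetical, just to illustrate the five checks:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str        # e.g. "disk", "tape", "object-storage"
    offsite: bool     # stored outside the primary site
    immutable: bool   # write-once/object-lock: cannot be altered or deleted
    restore_ok: bool  # last automated restore test passed

def audit_3_2_1_1_0(copies: list[BackupCopy]) -> dict:
    """Check a backup inventory against the 3-2-1-1-0 rule."""
    return {
        "three_copies": len(copies) >= 3,                      # 3 copies total
        "two_media": len({c.media for c in copies}) >= 2,      # 2 media types
        "one_offsite": any(c.offsite for c in copies),         # 1 offsite
        "one_immutable": any(c.immutable for c in copies),     # 1 immutable
        "zero_errors": all(c.restore_ok for c in copies),      # 0 test failures
    }

copies = [
    BackupCopy("disk", offsite=False, immutable=False, restore_ok=True),
    BackupCopy("object-storage", offsite=True, immutable=True, restore_ok=True),
    BackupCopy("tape", offsite=True, immutable=False, restore_ok=True),
]
result = audit_3_2_1_1_0(copies)
assert all(result.values())  # this inventory passes every check
```

The point of the "zero" check is exactly what was said above: a restore test that runs on a schedule, not a checkbox you tick once and forget.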
I love the idea of zero errors. It is a very donkey goal, Herman. Very ambitious. But let's talk about the layers that happen before the backup. If we want to stop that eighteen-minute lateral movement, what is the best tool? Is it micro-segmentation?
Micro-segmentation is probably the single most effective way to stop the finale of an attack. Think of it like a submarine. If one compartment floods, you seal the doors so the whole ship doesn't sink. In a traditional network, once you are in, you can talk to everything. With micro-segmentation, the web server can only talk to the database, and the database can only talk to the reporting tool. If the web server gets hacked, the attacker is stuck in a tiny digital box. They can't get to the mission-critical data because there is no path for them to follow.
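The submarine model above can be reduced to a toy default-deny flow policy. The service names and the allowlist here are illustrative, not from any real product, but the principle is exactly the one Herman describes: traffic is denied unless an explicit source-to-destination flow is permitted.

```python
# Default-deny micro-segmentation in miniature: only explicitly
# allowed (source, destination) flows may communicate.
ALLOWED_FLOWS = {
    ("web-server", "database"),
    ("database", "reporting-tool"),
}

def is_allowed(src: str, dst: str) -> bool:
    """Return True only if the flow is on the allowlist."""
    return (src, dst) in ALLOWED_FLOWS

assert is_allowed("web-server", "database")
# A compromised web server has no path to the reporting tool:
assert not is_allowed("web-server", "reporting-tool")
```

Notice that the allowlist is directional: the database can reach the reporting tool, but nothing grants the reverse path, which is what keeps a single compromised compartment sealed.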
It sounds like a lot of work for a developer to set up. We are always talking about shifting left, getting security into the hands of the developers early. But Daniel mentioned this idea of shifting smart instead.
Shifting smart is the evolution of shift left. Instead of just dumping a bunch of security tools on a developer and telling them to fix everything, you use things like a Production Bill of Materials, or PBOM. We have talked about the Software Bill of Materials before, which is just a list of ingredients in your software. A PBOM goes further. It tracks how that software is actually running in production, who it is talking to, and what permissions it has.
And then you combine that with Policy-as-Code, right? Like Open Policy Agent?
You write your security rules in code. If a developer tries to deploy something that violates the policy, the system just says no. It is automated enforcement. It takes the human error out of the equation. This is a huge part of what we discussed in episode fourteen seventy-four regarding non-human identities. When you have AI agents and automated scripts running your infrastructure, you can't rely on a guy named Gary to check the settings. You need the system to defend itself.
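Open Policy Agent expresses rules in its own language, Rego, but the pattern is easy to sketch in Python: a deployment manifest is checked against machine-enforced rules before it ships, and any violation blocks the deploy. The specific rules and manifest fields below are hypothetical examples, not a real rule set:

```python
# Policy-as-Code in miniature: the system, not a human reviewer,
# decides whether a deployment manifest is allowed. Rules and field
# names are illustrative only.
def violations(manifest: dict) -> list[str]:
    problems = []
    if manifest.get("run_as_root"):
        problems.append("containers must not run as root")
    if not manifest.get("image", "").startswith("registry.internal/"):
        problems.append("images must come from the internal registry")
    if "latest" in manifest.get("image", ""):
        problems.append("image tags must be pinned, not 'latest'")
    return problems

deploy = {"image": "docker.io/app:latest", "run_as_root": True}
assert violations(deploy)  # rejected: the manifest breaks the rules
```

A manifest that satisfies every rule returns an empty list and sails through, which is the "automated enforcement" point: Gary never has to eyeball the settings.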
Poor Gary. He is always the one we blame. But let's bring this down to a more personal level for a second. Most of our listeners aren't managing ten-layer defense frameworks for global corporations. They are just trying to make sure their bank account doesn't get drained by a bot in some far-off country. Daniel asked about personal security applications of defense in depth. What does that look like for a normal person in twenty twenty-six?
It is the same philosophy: don't have a single point of failure. The first layer is identity hardening. Most people use the same email address for everything. That email address is your primary identity. If I know your email, I am halfway to hacking you. Shifting smart at home means using email aliases. You have one alias for your bank, one for your social media, and one for your random shopping. If the shopping site gets breached, your bank login is still hidden.
I have started doing that, and it is actually satisfying to see a phishing email go to an alias I only used once for a free pizza. It is like a little early warning system. What about hardware?
FIDO2 and Passkeys. This is the biggest jump in personal security we have seen in a decade. Traditional multi-factor authentication, where they text you a code, is incredibly vulnerable to SIM swapping or just basic phishing. But Passkeys are phishing-resistant. They require a physical device or a biometric check that is tied to the specific website you are visiting. It blocks something like ninety-nine point nine percent of account takeovers. If you are not using Passkeys yet, you are basically leaving your front door wide open and hoping no one walks by.
And then there is the home network itself. I saw Daniel mention WPA3 encryption. Is that really that much better than the old stuff?
It is significantly more resistant to offline dictionary attacks. In the old days, someone could sit in a car outside your house, capture a little bit of data from your Wi-Fi, and then spend all night trying to crack your password on their own computer. WPA3 makes that much harder. But again, it is just one layer. If you have WPA3 but you haven't changed the default password on your router, you still have a hole in your defense.
It really does come back to that Roman strategy, doesn't it? It is not about being unhackable. It is about being so annoying to hack that the attacker moves on to someone easier. The cost of cybercrime is forecast to pass ten point five trillion dollars globally this year. That is a lot of incentive for the bad guys.
It is an economy of scale. The attackers are using AI to lower their costs. They can launch a million attacks for the price of one. If your defense is static, you are losing money every second. But if your defense is layered and automated, you are forcing them to spend more resources to get through each layer. You are trying to break their profit margin.
I think about that eighteen-minute window again. If I am an attacker and I get into a network, and the first thing I hit is a micro-segmented wall, and then I try to find a password but everything is using non-human identity management like we talked about in episode fourteen seventy-four, and then I finally find the data but it is an immutable backup... I am going to be pretty frustrated.
And that frustration is the goal. We often talk about security as this binary state, you are either safe or you are not. But defense in depth teaches us that security is actually a measure of time and effort. You want to make the time and effort required to steal your data exceed the value of the data itself.
It is a bit like my strategy for avoiding chores. I make it so complicated for Herman to ask me to do the dishes that he eventually just does them himself. I have layers of excuses, a perimeter of feigned sleep, and an immutable backup of "I did them last time."
I can confirm that Corn’s defense in depth against household labor is world-class. It is truly impenetrable. But on a serious note, this shift toward resilience is the only way forward. We saw this in episode seven hundred and seventy-one when we talked about high-stakes redundancy. When you are dealing with critical systems, you can't just hope for the best. You have to build for the worst.
So, for the developers listening, the takeaway is to move beyond just scanning your code. You need those PBOMs and you need Policy-as-Code to handle the agentic threats. And for everyone else, get your Passkeys set up and start using email aliases.
And don't forget the three-two-one-one-zero rule for your data. Immutability is the only thing that stops the extortionists from wiping your safety net. If you can't delete it, they can't delete it.
It is wild to think that a strategy used by Roman soldiers in two hundred AD is still the most effective way to protect a cloud server in twenty twenty-six. Some things never change. People are always going to try to take what isn't theirs, and we are always going to have to build better walls, or in this case, a hundred and one subdivisions of walls.
What I find fascinating is the human element in all of this. We talk about AI agents and automated policies, but at the end of the day, someone has to decide what is worth protecting. Kevin Mandia made a great point at RSA: we are in an arms race where the weapons are getting smarter, but the strategy is still fundamentally about human willpower and preparation.
Well, my willpower is currently focused on whether or not we have any snacks left in the kitchen. But before we go, we should probably mention that if you want to dive deeper into how these attacks actually play out, check out episode twelve thirty. We talked about how hackers can live in an account for two hundred days before they are even detected. It really puts that eighteen-minute breakout time into perspective. If they move that fast once they are in, imagine what they can do with two hundred days of quiet access.
It is a sobering thought. But that is why we do this. The more you know about the mechanisms, the better you can build your own layers.
We have covered a lot of ground today, from the Roman signal towers to the White House’s new cyber strategy. It is clear that the old castle is a ruin, and we are all living in submarines now. Just make sure your hatches are sealed.
And that your backups are immutable.
Always with the backups, Herman. You are like a broken record, but a record that is very well-protected and has zero recovery errors.
I will take that as a compliment.
It was. Mostly. We should wrap this up before I start coming up with more analogies. We promised Daniel we would stay focused on the ideas, and I think we have hit the core of it. Defense in depth isn't just a technical setup; it is a mindset of constant, layered friction.
A mindset that is more necessary now than ever. With ten point five trillion dollars on the line this year, the friction you create today could be the only thing that saves your data tomorrow.
Well said, Donkey. Well said.
Thank you, Sloth. I think we have given people plenty to think about for their next eighteen minutes.
Hopefully they spend at least one of those minutes setting up a Passkey.
One can dream.
Alright, that is our deep dive into the evolving world of defense in depth. Big thanks to Daniel for the prompt. It is always good to look at the foundations, especially when the building is shaking.
It certainly is. This has been an enlightening session, Corn. I always enjoy peeling back the layers with you.
Even the hundred and one subdivisions?
Especially those.
You are a nerd, Herman Poppleberry. But you are our nerd.
Guilty as charged.
We should probably get out of here before the breakout time expires on my patience for technical frameworks.
Fair enough. Let's head out.
Thanks as always to our producer Hilbert Flumingtop for keeping the show running smoothly behind the scenes. And a big thanks to Modal for providing the GPU credits that power this show and help us stay on top of all these AI developments.
We really couldn't do it without them.
This has been My Weird Prompts. If you are finding these deep dives useful, leave us a review on your favorite podcast app. It really helps the show find its way into more ears.
We appreciate the support.
You can find us at myweirdprompts dot com for our full archive and all the ways to subscribe. We will be back soon with another prompt from Daniel.
See you then.
Goodbye, everyone.
Goodbye.