I was thinking about documentary film crews the other day, Herman. You know how in those nature shows, they follow a single leopard for six months just to prove it actually hunts at night? They don't just take the leopard's word for it that it's a predator. They need the footage. They need the infrared shots of the actual pounce. If the camera isn't rolling, for the sake of the documentary, it basically didn't happen.
That is a surprisingly apt way to start a conversation about data governance, Corn. I am Herman Poppleberry, by the way, for anyone joining us for the first time. And you have basically stumbled onto the core tension of modern compliance. It is the shift from just showing someone you have a lock on the door to proving that the door was actually locked every single night for a year. In the industry, we call that the shift from point-in-time snapshots to continuous assurance. And as of today, March twenty-fourth, twenty twenty-six, it is no longer an optional upgrade for high-end tech firms. It is the baseline for B-to-B trust.
Well, today's prompt from Daniel is about exactly that. He wants us to dig into the world of S-O-C two and formal data governance standards. Specifically, how the landscape has shifted here in March of twenty twenty-six toward this continuous model. Daniel's curious about the actual requirements for compliance observations and why the old way of doing things—the episodic audit—is essentially dead. He wants to know what the auditors are actually looking for when they peer into the digital darkness.
It really is a dead model. The episodic audit, that once-a-year fire drill where everyone scrambles to find screenshots from six months ago, is becoming a relic of a slower era. We are seeing a massive move toward real-time evidence collection. A Gartner Market Guide that came out just a few days ago, on March twentieth, twenty twenty-six, predicts that sixty-five percent of organizations will have compliance automation integrated directly into their D-E-V-O-P-S workflows by twenty twenty-eight. That is a staggering number when you consider that five years ago, compliance was mostly just a bunch of spreadsheets and a lot of praying that the auditor didn't look too closely at the off-boarding logs.
That is a huge shift in how engineering teams have to operate. It means compliance isn't a separate department anymore; it is a feature of the build pipeline. It is like having a tiny digital auditor sitting on the shoulder of every developer, watching every pull request. But before we get into the weeds of the automation, let's level-set on the standards themselves. Most people hear S-O-C two and their eyes glaze over. Break down the difference between a Type one and a Type two, because that seems to be where the documentary analogy really lives.
The distinction is vital for understanding why this is getting so much harder. An S-O-C two Type one is basically a snapshot. It is an audit of the design of your controls at a specific point in time. You show the auditor your policy, you show them the technical configuration of your cloud environment on that Tuesday morning, and they say, yes, this design meets the criteria. It is like showing a blueprint of a secure building. You are saying, look, we have a door, and here is the lock we bought for it.
But a Type two is the actual security footage of people walking through the lobby for months on end.
A Type two looks at the operational effectiveness of those controls over a period, usually three to twelve months. The auditor isn't just asking if you have a password policy; they are looking for evidence that every single person who was hired during that twelve-month window actually had a background check and a unique, complex password enforced from day one. If you have a gap for even a week, that is an exception in the report. In twenty twenty-six, the S-O-C two Type two has become the non-negotiable standard for any vendor handling customer data. If you don't have one, you aren't even getting through the initial security review for a major contract.
And that is why Daniel mentioned this idea of compliance observations. In an audit, an observation isn't just a casual look-around. It is a formal test. The auditor needs evidence artifacts. They ask for a population list, which might be a list of every single code change made in the last six months. Then they pick twenty-five random samples from that list and say, show me the pull request, show me the peer review, and show me the automated test results for these specific twenty-five changes. If you can't produce that evidence for sample number fourteen, you've got a problem.
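The population-and-sample mechanic Corn describes can be sketched in a few lines. This is an illustrative sketch, not any auditor's actual tooling: the function name, the seed value, and the `PR-` identifiers are all hypothetical. The one design point worth noting is recording the seed, so the selection itself can be reproduced later if someone asks why those twenty-five were chosen.

```python
import random

def draw_audit_sample(population, sample_size=25, seed=2026):
    """Draw a reproducible random sample from an audit population.

    A fixed, recorded seed lets anyone re-run the draw and confirm
    the sample was not cherry-picked after the fact.
    """
    if len(population) <= sample_size:
        return list(population)  # small populations get tested in full
    rng = random.Random(seed)
    return rng.sample(population, sample_size)

# Hypothetical population: 500 merged pull requests from the audit window.
changes = [f"PR-{n}" for n in range(1, 501)]
sample = draw_audit_sample(changes)
```

Running the same draw twice with the same seed yields the same twenty-five items, which is exactly the property you want when the sample itself might be questioned.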
The administrative burden of that is what created the market for the big automation platforms like Vanta, Drah-tah, and Secureframe. These are the big three that really popularized the idea of continuous monitoring. But what is interesting about twenty twenty-six is how the technology has evolved beyond just checking boxes. We have moved past simple A-P-I connectors that check if your S-three buckets are public. We are now in the era of what the industry is calling Agentic Compliance.
I saw that term popping up in the February reports. Agentic Compliance sounds like another buzzword, but there is a real technical mechanism behind it, isn't there? It sounds like it ties back to what we discussed in episode fourteen seventy-two about agentic observability.
It ties back perfectly. On February eighteenth, twenty twenty-six, we started seeing the first major marketing pushes for this. Platforms like Apptega and RegScale are using A-I agents to manage the entire audit lifecycle. These aren't just scripts; they are autonomous agents that can navigate your infrastructure, identify missing evidence, and even perform real-time risk evaluations. If a developer creates a new database instance that doesn't have encryption turned on, the agent doesn't just flag it in a dashboard. It can initiate a remediation workflow or even gather the evidence of why it was created and who authorized the exception. It is essentially an A-I that understands the intent of the compliance standard, not just the letter of the law.
But wait, if an A-I agent is doing the audit prep and gathering the evidence, does the human auditor actually trust it? I mean, if I am a C-P-A signing off on a report that says a company is secure, I would be pretty nervous if the only reason I think that is because a bot told me so. How does the human-in-the-loop factor work here?
You have hit on the central controversy of twenty twenty-six. There is a growing rift between traditional C-P-A firms and these automated-only compliance startups. Critics are calling it a quality crisis. They argue that push-button S-O-C two reports lack the professional skepticism that a human auditor brings. That is why you are seeing a big push for human-in-the-loop A-I models. The A-I does the heavy lifting of evidence collection, but a human still has to validate the logic and the samples. The A-I-C-P-A, which is the governing body for these standards, is very clear that the responsibility still rests with the human auditor. They have to verify that the A-I isn't just hallucinating a clean report.
It is about the chain of custody for the evidence. If the A-I is picking the samples, is it picking the ones it knows are clean? That is the concern. It reminds me of episode fourteen ninety-nine, where we talked about the Black Box Recorder and why A-I needs an active archive. If the audit process itself is automated, we need an auditable archive of what the A-I auditor was doing.
Precisely. You need to be able to audit the auditor. But let's look at the numbers because they explain why companies are willing to take that risk. The average cost of a data breach in the United States hit ten point twenty-two million dollars in twenty twenty-five. When you are looking at a ten-million-dollar risk, a fifty-thousand-dollar S-O-C two audit looks like a bargain, even if it is a headache. The market for compliance software is now valued at over forty billion dollars. People are throwing money at this because the alternative is catastrophic.
Let's talk about the direct costs for a second, because Daniel's prompt had some specific figures. A Type one might only cost you three to five thousand dollars in auditor fees, but a Type two can range from five thousand to over seventy-five thousand dollars once you factor in the automation tools and the hundreds of hours of engineering time. And the complexity is ballooning. A typical S-O-C two Type two now involves between sixty and one hundred and fifty different control points. According to twenty twenty-four benchmarks, nearly a quarter of all reports now exceed one hundred and fifty controls. Why the increase, Herman? Is it just the A-I-C-P-A being pedantic, or has the threat landscape actually changed that much?
It is a bit of both, but a lot of it is driven by international regulation. We are seeing more S-O-C two plus reports. These are audits where the standard Trust Services Criteria are mapped directly to other regulations like the E-U's N-I-S-two directive or DOOR-ah, the Digital Operational Resilience Act. As of March seventeenth, twenty twenty-six, these are in full effect. If you want to sell software to a bank in Europe, you need to prove resilience, not just security. You have to show that you can handle a systemic outage and stay operational. That adds a whole new layer of controls around disaster recovery and business continuity that weren't as strictly tested five years ago.
That N-I-S-two and DOOR-ah alignment is huge. It basically means that S-O-C two is becoming the foundation for a global compliance stack. But it also means that the observations are getting more technical. One thing that jumped out at me in Daniel's prompt was the focus on penetration testing for Common Criteria four point one. I thought S-O-C two was about policies and procedures, not necessarily hacking your own systems.
That is a major shift in auditor expectations that really solidified around February tenth, twenty twenty-six. Technically, the A-I-C-P-A doesn't explicitly mandate a third-party penetration test in the written standard for Common Criteria four point one, which covers COSO Principle sixteen about performing ongoing and separate evaluations of your controls. However, in practice, you can't satisfy that criterion without one anymore. Auditors are now looking specifically for remediation evidence. It is not enough to show a report from a pen testing firm that says you have three critical vulnerabilities. You have to show the evidence that those specific vulnerabilities were fixed, re-tested, and closed within the audit window.
That seems like a nightmare for a fast-moving startup. You find a bug in February, you fix it in March, but if you don't have the log of the re-test, the auditor marks it as a failure for the whole year?
Potentially. And this is where the documentary analogy gets painful. If you are doing continuous assurance, the auditor has access to your systems throughout the year. They aren't just showing up in December. They are seeing the vulnerabilities pop up in real-time. This is why Continuous Controls Monitoring, or C-C-M, is the goal for twenty twenty-six. You want a system that is constantly pinging your A-P-Is to verify M-F-A enforcement, checking your cloud configurations, and performing quarterly access reviews automatically. RegScale is one of the companies Gartner highlighted for this. They want to move the audit from a document to a data stream.
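The "data stream" framing Herman uses can be made concrete with a minimal sketch of one continuous-monitoring observation. Everything here is an assumption for illustration: the control ID, the shape of the `users` records, and the idea that they come from an identity provider export. The point is the shape of the output, where each scheduled run emits a timestamped evidence record instead of a screenshot.

```python
from datetime import datetime, timezone

def run_mfa_check(users):
    """One continuous-controls-monitoring observation: evaluate the
    control, then emit a timestamped evidence record that can be
    archived and later replayed for an auditor.

    `users` is assumed to be a list of dicts from an identity-provider
    export, e.g. {"email": "...", "mfa_enabled": True}.
    """
    violations = [u["email"] for u in users if not u.get("mfa_enabled")]
    return {
        "control": "CC6.1-mfa-enforced",  # hypothetical control tag
        "observed_at": datetime.now(timezone.utc).isoformat(),
        "population": len(users),
        "violations": violations,
        "passed": not violations,
    }
```

Scheduled hourly or daily, an archive of these records is the continuous version of the old annual screenshot folder.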
I want to go back to the human element for a second. We talked about this idea of the quality crisis. If I am a buyer, and a vendor hands me an S-O-C two report that was generated by one of the big three automation platforms, how do I know if it is actually any good? Are there red flags I should be looking for in the report itself? Because I imagine some of these reports are just being rubber-stamped.
You should look at the description of the tests performed by the auditor. If every test says, observed that the system shows the control is active, that is a red flag. That means the auditor just looked at the dashboard of the automation tool. You want to see evidence of independent testing. Did the auditor actually look at the code? Did they verify the population list themselves, or did they just accept the list the software gave them? We need to see that professional skepticism. If the report doesn't mention any exceptions or any remediation, and the company is a thousand-person engineering org, I would be suspicious. Nobody is that perfect.
It feels like we are moving toward a world where the audit is just a continuous stream of data. But that creates a massive amount of noise. How do companies handle audit fatigue when they are trying to comply with S-O-C two, I-S-O twenty-seven thousand and one, and maybe H-I-P-A-A or G-D-P-R all at once? It feels like the engineering team would spend half their time just feeding the beast.
That is where the Dual-Track strategy comes in. High-growth firms are now mapping their controls across multiple standards simultaneously. There is about a sixty to seventy percent overlap between S-O-C two and I-S-O twenty-seven thousand and one. If you can collect one piece of evidence that satisfies both, you save a massive amount of time. The automation platforms are getting much better at this cross-mapping, but it still requires a very disciplined approach to how you tag your evidence. You have to think of your evidence as a library that can be queried by different auditors for different purposes.
Let's talk about some specific technical observations. Daniel mentioned population lists and sample evidence. For a listener who is an engineer, what does that actually look like in practice? Give me a concrete example of a control and the evidence an auditor would expect to see in twenty twenty-six.
Let's take the least-privilege access control. The requirement is that users only have access to the data and systems necessary for their job. In a Type two audit, the auditor will ask for a list of every person who had access to your production database during the audit period. That is your population. They will then pick twenty-five people from that list—that is the standard sample size for many tests—and say, show me the ticket where this access was requested, show me the approval from their manager, and show me the record of the quarterly access review where their permission was re-validated.
And if one of those twenty-five people was a developer who left the company in June, but their access wasn't revoked until July?
That is a finding. If it happens once, it might just be a note in the report. If it happens five times, the auditor might conclude that your control over off-boarding is not operationally effective. And in twenty twenty-six, they are looking for A-P-I logs to prove this. They aren't just looking at a spreadsheet you made. They want to see the logs from Okta or Azure A-D that show the timestamp of the account deactivation versus the timestamp of the termination in your H-R system. They are looking for the delta between those two timestamps. If it's more than twenty-four hours, you'd better have a good reason.
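The timestamp-delta check Herman describes is simple enough to sketch. This is a toy version under stated assumptions: the inputs are plain dicts mapping a user to a datetime, standing in for an H-R system export and an identity-provider log (the Okta or Azure A-D feeds mentioned above), and the twenty-four-hour S-L-A comes straight from the conversation.

```python
from datetime import datetime, timedelta

SLA = timedelta(hours=24)  # the 24-hour window mentioned in the episode

def offboarding_findings(hr_terminations, idp_deactivations):
    """Flag users whose account deactivation lagged their HR
    termination by more than the SLA, or never happened at all.

    Both arguments are assumed to map user -> datetime; in practice
    these would come from an HR system and an identity-provider log.
    """
    findings = []
    for user, terminated_at in hr_terminations.items():
        deactivated_at = idp_deactivations.get(user)
        if deactivated_at is None:
            findings.append((user, "access never revoked"))
        elif deactivated_at - terminated_at > SLA:
            delta = deactivated_at - terminated_at
            findings.append((user, f"revoked {delta} after termination"))
    return findings
```

A developer who left in June but kept access until July shows up here as a finding, exactly the case Corn raised.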
This is where it connects back to agentic observability. If you have agents monitoring your logs for performance and security, those same agents should be feeding your compliance engine. It is the same data, just viewed through a different lens. One lens is looking for a crash, the other is looking for a policy violation.
It is exactly the same data. Compliance is essentially just security with a better filing system. But the shift toward real-time means the filing system has to be automated. This is why names like Daniel Betts and George Spafford at Gartner are pushing this idea of D-E-V-O-P-S Continuous Compliance. You can't have a two-week sprint cycle and a one-year audit cycle. They are fundamentally incompatible. The audit has to move at the speed of the code. If you are deploying fifty times a day, you need fifty mini-audits a day.
It sounds like the role of the compliance officer is changing from a policy writer to a systems architect. You aren't writing a document that says we use M-F-A; you are building a system that prevents anyone from logging in without it and automatically generates a cryptographically signed log of that event for the auditor.
That is the ideal state. But we are also seeing some pushback. There is a concern that we are over-automating the easy stuff and ignoring the hard stuff. It is easy to automate a check for an encrypted disk. It is very hard to automate a check for whether a company's culture actually prioritizes security or if people are just finding workarounds for the automated blocks. A human auditor can walk into an office or join a Slack channel and sense the culture. An A-I agent can't do that yet. It can't tell if the developers are complaining about security controls and finding ways to bypass them.
I don't know, Herman. Some of the Slack channels I have seen, an A-I might be the only thing that could make sense of the culture. But I take your point. There is a qualitative aspect to governance that a checklist doesn't capture. It's the difference between having a rule and having a value.
There really is. And that is why the documentary analogy is so good. You can have all the right equipment, but if the film crew doesn't know where to point the camera, they are going to miss the actual story. In a data governance context, the story is how data flows through your organization and who is actually responsible for it. The observations are just the scenes we capture to tell that story.
So, for the folks listening who are staring down an audit in the next six months, what are the practical takeaways? Because this sounds like it could be a full-time job for an entire team if you don't have the right strategy.
First, you have to implement Continuous Controls Monitoring as early as possible. Do not wait for the audit window to start. If you use a platform like Vanta or Drah-tah, get it connected to your systems now so you can see your gaps in real-time. It is much easier to fix a configuration error today than it is to explain to an auditor in six months why it was broken for three weeks. You want to be in a state of perpetual readiness.
And what about the evidence itself? Any tips on making that less painful for the engineers who actually have to produce it?
Prioritize remediation evidence. This is the biggest trap companies fall into. They run a vulnerability scan, they see the bugs, they fix them, but they don't document the fix in a way that an auditor can verify. Every time you close a security-related ticket, there should be a link to the commit and a screenshot or a log of the re-test. If you don't have that link, the evidence doesn't exist in the eyes of the auditor. You need to close the loop.
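Closing that loop is easy to check mechanically. A minimal sketch, assuming a hypothetical ticket shape where each closed security ticket carries a `fix_commit` link and a `retest_log` artifact; the field names are inventions for illustration.

```python
def remediation_gaps(closed_tickets):
    """Return closed security tickets missing a linked fix commit or
    re-test artifact; in the auditor's eyes, those fixes don't exist.

    Each ticket is assumed to be a dict with an "id" plus optional
    "fix_commit" and "retest_log" fields.
    """
    gaps = []
    for ticket in closed_tickets:
        missing = [f for f in ("fix_commit", "retest_log") if not ticket.get(f)]
        if missing:
            gaps.append((ticket["id"], missing))
    return gaps
```

Run at ticket-close time or as a nightly sweep, this turns "we fixed it, trust us" into the verifiable link-plus-retest pair the auditor is asking for.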
It is the leopard hunting in the dark. If the camera wasn't rolling, it didn't happen.
That is it. And finally, if you are planning to expand internationally, look at that Dual-Track strategy. Mapping your S-O-C two controls to I-S-O twenty-seven thousand and one or N-I-S-two early on will save you hundreds of hours of redundant work later. Compliance is a competitive advantage in twenty twenty-six. If you can prove you are secure faster and more reliably than your competitors, you win the contract. It's no longer just a hurdle; it's a sales tool.
It is a bit of a paradox, isn't it? We spent years trying to make software development faster and more agile, and now we are adding this massive layer of observation and evidence collection. But I guess that is the price of moving ten million dollars worth of data around. We need the speed, but we also need the accountability.
It is. The stakes are just too high for the pinky-promise era of security to continue. We need the documentary footage. We need to know that the doors are actually being locked, and we need to know it every single day, not just on the day the auditor shows up.
What I find interesting is where this goes next. If the A-I agents get good enough, do we even need the S-O-C two report? Could we just give our customers a real-time transparency dashboard where they can see our compliance posture in any given second? Like a public health inspection grade that updates every minute.
That is the dream for a lot of C-I-S-Os, but I think the legal and insurance industries will hold that back for a while. They want a signed report from a person they can sue if things go wrong. A dashboard doesn't have professional liability insurance. But we are definitely moving toward a world where the audit prep is invisible because it is just part of the infrastructure. Compliance as code is the final destination.
Well, this has been a deep dive, Herman. I think I have a much better handle on why Daniel was asking about these observations. It is not just about the rules; it is about the proof. It's about building a system that is inherently auditable.
It is. And as the technology gets more complex, the proof has to get more sophisticated. It is a fascinating intersection of law, technology, and trust. We are building the digital equivalent of a high-security vault, and the S-O-C two report is the certificate that says the vault actually works.
It really is. We should probably wrap it up there before we start talking about the history of the A-I-C-P-A, which I know you have a five-volume set on somewhere.
Only five volumes, Corn. Let's not exaggerate. It's a light weekend read.
Thanks as always to our producer Hilbert Flumingtop for keeping the gears turning behind the scenes. And a big thanks to Modal for providing the G-P-U credits that power the generation of this show.
If you found this useful, we would love it if you could leave us a review on whatever podcast app you are using. It really does help other people find the show and join the conversation about these weird prompts.
This has been My Weird Prompts. You can find us at myweirdprompts dot com for the full archive and all the ways to subscribe.
We'll see you in the next one.
Later.