#2805: The Subprocessor Notification Nobody Reads

Why do companies send subprocessor update emails nobody reads? It's transparency theater — with a hidden purpose.

Episode Details
Episode ID: MWP-2974
Duration: 29:44
Pipeline: V5
TTS Engine: chatterbox-regular
Script Writing Agent: deepseek-v4-pro

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

Every email user has seen them: subprocessor notification updates that land in your inbox with all the enthusiasm of a terms-of-service popup. A study from the University of Michigan found that over 90% of people never read privacy policies or terms of service — and subprocessor notifications are even further down the attention hierarchy. They're the basement of the basement.

Under GDPR Article 28, a subprocessor is what you get when a data processor subcontracts work to another company. If you use a project management app hosted on AWS, and AWS brings in a specialized analytics firm, that firm is the subprocessor. The regulation requires controllers to notify users when subprocessors change — but the notification typically gives you nothing but a company name and a vague service description. The actual contracts between controllers, processors, and subprocessors are almost always confidential.

The real function of these notifications isn't for individual users. It's for competitors, privacy activists, regulators, and journalists — people with the time and expertise to scrutinize the public record. When the Schrems II decision invalidated the Privacy Shield framework in 2020, subprocessor disclosure requirements forced companies to examine their data supply chains for the first time. The notification requirement creates a discoverable record that enables collective action, even if the individual email is functionally meaningless.


Corn
Daniel sent us this one — he's been getting those emails about subprocessor list updates, the ones that land in your inbox with all the enthusiasm of a terms-of-service popup, and he's asking what the point actually is. If a service you're paying for changes a subprocessor, and you wanted to actually investigate what that means for your data, you'd probably hit a wall — neither the company nor the subprocessor is likely to tell you anything useful. So the question is: what did the regulation that mandated these notifications actually hope to achieve? What's a subprocessor in the eyes of the main data protection laws? And has anyone, ever, in the history of email, actually done something because of one of these notifications?
Herman
The short answer to that last part is: almost nobody. And that's not me being cynical — there's actual research on this. A study out of the University of Michigan looked at privacy policy and terms-of-service engagement and found that something like ninety-plus percent of people never read them, and subprocessor notifications are even further down the attention hierarchy. They're the basement of the basement.
Corn
The basement of the basement. So we're dealing with a regulatory requirement that generates emails nobody reads, about changes nobody can investigate, from companies that don't want to tell you anything anyway. It's the paperwork equivalent of a fire drill in a swimming pool.
Herman
Yet I think there's something genuinely interesting here once you dig past the surface-level absurdity. Because the subprocessor notification isn't really about you, the individual user, taking action. It's about creating a paper trail that makes the invisible visible — or at least visible enough that someone, somewhere, might notice something.
Corn
The "someone somewhere" theory of regulation. I like it. But let's back up — what actually is a subprocessor? Because the term sounds like someone who processes things sub-optimally.
Herman
A subprocessor is straightforward once you strip the jargon. Under GDPR — which is the regulation most of these notification emails trace back to — Article 28 lays this out. You've got a data controller, which is the entity that decides why and how personal data is processed. That's the company you're actually doing business with. They might hire a data processor, which is a third party that processes data on the controller's behalf, following the controller's instructions. A subprocessor is what you get when that processor then brings in another company underneath them — so it's subcontracting all the way down.
Corn
If I'm using, say, a project management app, and they host everything on Amazon Web Services, AWS is the processor. And if AWS then uses some specialized data analytics firm to handle logs, that firm is the subprocessor.
Herman
And here's where it gets interesting: under GDPR Article 28, paragraph 2, the processor can't bring in a subprocessor without prior specific or general written authorization from the controller. And paragraph 4 says that if a subprocessor is brought in, the same data protection obligations that apply to the processor must be imposed on the subprocessor via a contract. And if that subprocessor fails, the processor remains fully liable to the controller.
Corn
There's a chain of liability. The controller is on the hook to you, the processor is on the hook to the controller, and the subprocessor is on the hook to the processor. But you, the user, have no direct relationship with the subprocessor at all.
Herman
You can't sue the subprocessor. You probably can't even get them on the phone. And the notification email you receive — that's the controller telling you, "Hey, we've authorized a new subprocessor," or "We've changed subprocessors." And the email typically gives you the name of the company, maybe a vague description of what they do, and that's about it.
Corn
"We've added DataCrunchr Pro LLC to our subprocessor list. They assist with cloud-based data optimization services." Which means nothing. It could mean they're handling encryption keys, or it could mean they're running analytics on user behavior, or it could mean they're just providing server rack space in Iowa. You have no way to know.
Herman
That opacity is by design — or at least, it's the natural result of how these contracts work. The agreements between controllers, processors, and subprocessors are almost always confidential. They contain commercial terms, security details, infrastructure specifics that companies consider proprietary. I tried to find examples of these contracts in the wild and they're basically nonexistent. What you get instead are Data Processing Agreements, or DPAs, which are the standardized documents that outline the terms — but even those rarely give you the granular detail you'd want if you were trying to assess risk.
Corn
The regulation creates a right to be informed, but the information you receive is functionally meaningless. It's like being told there's a new ingredient in your food, but the ingredient is listed as "substance approved for ingestion purposes."
Herman
That's the tension. And it gets worse when you look at how these notifications actually work in practice. Most companies don't send you a personalized email for each subprocessor change. They maintain a public list — a subprocessor page on their website — and when it changes, they might email you to say "we've updated our subprocessor list." Sometimes they don't even do that. They just update the page and rely on the fact that their privacy policy says "check back for updates."
Corn
Which nobody does. Nobody is checking the subprocessor page of their project management app on a Tuesday afternoon.
Herman
Even if you did, what then? Let's say you see that your email provider has added a new subprocessor called — I don't know — AnalyticsCorp, and you get a weird feeling about it. What's your actual recourse? You could read the DPA if it's published, but it almost certainly won't tell you what specific data flows to that subprocessor, for what purpose, under what safeguards, with what retention period. You could contact the company and ask, and they'll send you a canned response about their "rigorous vendor assessment process" that tells you nothing.
Corn
The only real action available to the individual user is to stop using the service entirely. Which, for something like an email provider or a cloud storage platform or a payment processor, is a massively disruptive decision to make based on a notification that gave you no real information.
Herman
That's exactly what the critics of these regulations point to. The notification requirement creates the appearance of transparency without the substance. It's transparency theater.
Corn
So we've got the musical equivalent of beige wallpaper, but for compliance. The subprocessor notification is the Muzak of accountability.
Herman
Yet — and I want to push back on my own cynicism here — there is a real function these notifications serve. It's just not the one that most people assume.
Corn
Okay, make the case. What's the real function?
Herman
The real audience for subprocessor notifications isn't the individual user. It's competitors, privacy activists, regulators, and journalists. The notification requirement creates a public record that can be scrutinized by people who actually have the time, expertise, and motivation to dig in.
Corn
The individual notification is bait for institutional watchdogs.
Herman
Think about what happens when a company like — picking a non-random example — a major social media platform adds a subprocessor that's based in a country with questionable data protection standards. A privacy advocacy group notices. They start asking questions. They file freedom of information requests or regulatory complaints. They publish reports. The notification requirement gives them the thread to pull on.
Corn
Which means the regulation isn't designed around you, the user, reading the email and taking action. It's designed around creating a discoverable record that enables collective action.
Herman
That's actually a coherent theory of regulation. It's the same principle behind a lot of environmental disclosure laws. The idea isn't that every citizen reads a factory's emissions report and changes their behavior. The idea is that the report exists, and someone — an environmental group, a journalist, a regulator — can use it to hold the factory accountable.
Corn
The "someone somewhere" theory, as you said. But there's a difference: emissions data is quantitative and comparable. You can look at a factory's emissions report and say "this is above the legal limit" or "this increased twenty percent year over year." Subprocessor notifications give you a company name and a vague service description. There's nothing to compare, nothing to measure.
Herman
That's a fair critique, and it gets at something important about how these regulations were drafted. GDPR Article 28 was written with a focus on contractual obligations between controllers and processors, not on public transparency. The notification requirement for users — which mostly comes from Articles 13 and 14, the transparency obligations — sits awkwardly alongside the commercial confidentiality of processor agreements. You end up with a requirement to disclose something while also being allowed to keep most of the meaningful details confidential.
Corn
The regulation itself is internally conflicted. It wants transparency and confidentiality at the same time.
Herman
And different data protection regimes handle this differently. GDPR is the most prescriptive. The California Consumer Privacy Act, or CCPA, as amended by the CPRA, takes a somewhat different approach — it focuses more on giving consumers the right to know what categories of personal information are shared and with whom, and the right to opt out of sales or sharing, rather than mandating subprocessor-level notification. Brazil's LGPD broadly follows the GDPR model. China's Personal Information Protection Law has its own framework that's more state-centric.
Corn
All of them are trying to solve the same fundamental problem: once your data leaves the company you gave it to, you lose visibility. The subprocessor concept is an attempt to extend the chain of accountability beyond the first link.
Herman
Here's the thing about chains — they're only as strong as the enforcement at each link. And the enforcement mechanism for subprocessor compliance is almost entirely contractual. The controller is supposed to vet the processor, the processor is supposed to vet the subprocessor, and everyone's supposed to have contracts in place that impose equivalent data protection obligations. But the user has no way to verify any of this.
Corn
It's a system built on trust in a domain where trust has been repeatedly violated. "Don't worry, we checked them out, and they promised to be good" is not exactly a reassuring assurance.
Herman
Yet, I'd argue the system does produce some meaningful outcomes, just indirectly. Let me give you an example. When Schrems II happened — the 2020 European Court of Justice decision that invalidated the Privacy Shield framework for EU-US data transfers — subprocessor lists suddenly became a lot more interesting. Companies had to disclose whether their subprocessors were US-based and what transfer mechanisms were in place. That disclosure requirement forced a whole wave of companies to actually examine their data supply chains for the first time.
Corn
Because they couldn't disclose what they didn't know.
Herman
The notification requirement forced internal discovery. Companies that had been casually handing data to subprocessors without much thought suddenly had to catalog who had what, where they were located, and what safeguards existed. That internal exercise probably did more for data protection than any individual notification email ever will.
Corn
The regulation works not by informing users, but by forcing companies to inform themselves.
Herman
That's one of the best defenses of these requirements I've seen. It's a self-auditing mechanism disguised as a transparency requirement.
Corn
Which makes me wonder: is there any evidence that users actually take action based on these notifications? You mentioned the Michigan study about nobody reading privacy policies. Is there anything specific to subprocessor notifications?
Herman
The honest answer is that the data is sparse, but what exists is not encouraging. Most companies report that the response rate to subprocessor update notifications is effectively zero. Not low — zero. No replies, no questions, no account deletions attributable to the notification.
Corn
We have a regulatory regime that generates emails nobody reads, about changes nobody investigates, from companies that learned more from the process of creating the notification than any user ever learned from receiving it. That's almost poetic.
Herman
But I'd add one more layer: even the zero-response-rate outcome isn't necessarily a failure of the regulation, if you measure success differently. The existence of the requirement changes company behavior. Companies know they might have to disclose subprocessor changes, so they're more careful about which subprocessors they engage. They know that a particularly controversial subprocessor addition could trigger scrutiny from privacy advocates or regulators. The notification requirement acts as a deterrent, not by informing users, but by creating potential consequences for bad choices.
Corn
The panopticon of potential publicity.
Herman
You don't need anyone to actually read the notification for the notification requirement to shape behavior. You just need companies to believe that someone might.
Corn
That only works if there actually are some someones out there doing the reading. If the scrutiny never materializes, the deterrent evaporates.
Herman
That's where the privacy advocacy ecosystem comes in. Organizations like NOYB — None of Your Business, Max Schrems' group — they do actively monitor these disclosures. They file complaints. They've brought major enforcement actions. The subprocessor notification is raw material for their work.
Corn
The individual user is basically a bystander in a system designed for institutional actors. Which is fine, except that the regulation is framed as being about individual rights. The email lands in your inbox addressed to you, implying you should do something. And you can't.
Herman
That's the user experience failure at the heart of this. The notification is written as if it empowers you, but it doesn't. It's a notification that says "here's a thing that happened" without giving you the information you'd need to evaluate whether the thing matters, or any meaningful options for responding to it.
Corn
It's the compliance equivalent of a fire alarm that goes off and then immediately says "please disregard this alarm; testing only" — except it never actually says that, it just gives you no way to find the fire.
Herman
I think that's what Daniel is reacting to in the prompt — the gap between the formal seriousness of the notification and the practical uselessness of it. You get an email that uses words like "subprocessor" and "data processing agreement" and "standard contractual clauses," which sounds important, but when you try to do something with it, you realize you're holding a receipt for a transaction you can't see, can't audit, and can't reverse.
Corn
Let's get concrete. If someone — not an institutional actor, just a reasonably motivated individual — wanted to act on a subprocessor notification, what could they actually do? What's the best-case scenario for individual action?
Herman
The most practical thing is probably to look at the jurisdiction. Where is the subprocessor based? If your data is being processed by a company in a country with weak data protection laws or a history of government surveillance, that's a red flag you can actually identify from the notification alone. You might not know what they're doing with your data, but you know where they are.
Corn
Geography as a rough proxy for risk.
Herman
If you see a subprocessor based in a country that's known for lax enforcement or intrusive surveillance, that's actionable information. You might decide to stop using the service, or at least to limit what data you put into it.
You can look at the subprocessor's own privacy policy and terms of service. This is tedious and most people won't do it, but it can be revealing. If the subprocessor's privacy policy says they can use data for their own purposes — product improvement, analytics, whatever — that's a problem, because under GDPR they shouldn't be doing that if they're acting as a pure subprocessor.
Corn
You're looking for contradictions between what the controller says and what the subprocessor's own documentation says.
Herman
And occasionally you find them. There have been cases where a company's subprocessor list described a vendor as only providing infrastructure, but the vendor's own website talked about their AI training data services. That kind of discrepancy is exactly the kind of thing that privacy advocates look for.
Corn
For the average person who isn't going to cross-reference privacy policies?
Herman
Honestly, the average person's best move is probably to do nothing with the individual notification, but to factor it into their overall assessment of the service. If a company is frequently changing subprocessors, or using a large number of them, or using subprocessors in jurisdictions you're uncomfortable with, that's a pattern worth noting. It's not about acting on one email; it's about building a mental model of how the company handles data.
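Herman's "build a mental model over time" approach can be sketched as a small script that diffs snapshots of a service's published subprocessor page. This is an illustrative sketch, not an existing tool, and the vendor names below are invented:

```python
# Sketch: diff two snapshots of a service's published subprocessor list.
# Vendor names are invented for illustration.

def diff_subprocessors(old, new):
    """Return (added, removed) vendor names between two snapshots."""
    old_set, new_set = set(old), set(new)
    return sorted(new_set - old_set), sorted(old_set - new_set)

snapshot_2023 = ["Amazon Web Services", "Stripe", "Mailgun"]
snapshot_2024 = ["Amazon Web Services", "Stripe", "AnalyticsCorp", "LogVault GmbH"]

added, removed = diff_subprocessors(snapshot_2023, snapshot_2024)
print("Added:", added)     # new vendors worth a closer look
print("Removed:", removed)
```

Run over successive snapshots, the "added" column makes the churn Herman describes visible as a pattern rather than a one-off email.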
Corn
Which is a very reasonable approach that almost nobody will actually take, because it requires sustained attention over time to something that's inherently boring.
Herman
Boring and low-information. It's a bad combination. You're asking people to pay ongoing attention to something that gives them almost no useful signal.
Corn
Where does this leave us? The regulation creates a notification requirement. The notifications are functionally unactionable for individuals. The real beneficiaries are institutional watchdogs. The main practical effect is that companies have to audit their own data supply chains. And the user experience is an inbox full of emails that feel important but aren't.
Herman
I think that's a fair summary. But I'd add one thing: the fact that the system works imperfectly doesn't mean it doesn't work at all. Before these regulations existed, companies could change subprocessors without telling anyone. Data could flow to anywhere, to anyone, and users had no visibility whatsoever. The current system is deeply flawed, but it's not nothing.
Corn
The bar was on the floor.
Herman
The bar was subterranean. And I think the real question going forward is whether the transparency requirements can evolve to become more useful. Could subprocessor notifications include more meaningful information — like the specific categories of data involved, the purpose of the processing, the retention period? Could there be standardized formats that make it easier to compare disclosures across services? Could there be a middle ground between "here's a company name, good luck" and full public disclosure of confidential contracts?
Corn
That seems like the natural next step. If the point of the regulation is to enable scrutiny, then the quality of the disclosure matters. A name and a vague description isn't scrutiny-enabling. It's scrutiny-shaped.
Herman
I'm going to use that.
Corn
You're welcome to it. But I want to push on something you said earlier about the "someone somewhere" theory. If the system depends on institutional watchdogs to function, then it only works in jurisdictions where those watchdogs exist and can operate freely. What happens in countries where privacy advocacy is suppressed or nonexistent?
Herman
That's a crucial point. The GDPR model assumes a functioning civil society with active privacy organizations, independent regulators, and a free press. In countries without those, the notification requirement becomes pure theater — it creates the appearance of accountability without any mechanism for accountability to actually occur.
Corn
It's a system that works best where it's needed least.
Herman
In jurisdictions with strong rule of law and active civil society, companies are probably already more careful about data handling, and the notification requirement adds a layer of accountability. In jurisdictions with weak institutions, companies can do whatever they want, and the notification requirement is just a box to check.
Corn
Which is a pretty fundamental design flaw. The regulation assumes the existence of the very institutions it's supposed to strengthen.
Herman
This is true of a lot of transparency-based regulation. Sunshine is said to be the best disinfectant, but someone has to be standing in the sunlight to see what's revealed. If nobody's watching, the disclosure doesn't matter.
Corn
Let me ask you something practical. You're a former pediatrician — you've dealt with patient data, medical records, HIPAA in the US context. How does the subprocessor concept map onto healthcare?
Herman
Healthcare is actually an interesting parallel because it's one of the few domains where data supply chains have been regulated for decades. Under HIPAA, if a hospital shares patient data with a billing company, that billing company is a business associate — roughly analogous to a processor under GDPR. And if the billing company subcontracts part of its work, the subcontractor becomes subject to similar obligations. The difference is that in healthcare, the data flows are often more standardized and the categories of data are clearer. You know what a medical record is. You know what a billing code is. With a general-purpose cloud service, the data could be anything.
Corn
Healthcare has an advantage because the data itself is more legible. A subprocessor notification in a medical context could actually tell you something meaningful, because "patient treatment data" means something specific.
Herman
And that points to a broader issue with subprocessor notifications in the consumer internet context: the data categories are so broad as to be meaningless. "Personal information" under GDPR covers everything from your name and email to your browsing history and location data and political opinions. A notification that says a subprocessor will process "personal information" tells you nothing about what kind of information, for what purpose, under what constraints.
Corn
It's like a nutrition label that just says "contains food."
Herman
That's the user experience problem in a nutshell. The notification is precise about the least important thing — the legal name of a company you've never heard of — and vague about everything you'd actually want to know.
Corn
If we were redesigning this from scratch, what would a useful subprocessor notification look like?
Herman
I'd want a few things. First, the categories of data involved — not just "personal information," but something like "account credentials, user-generated content, usage analytics." Second, the purpose of the processing — authentication, storage, analytics, AI training, whatever. Third, the jurisdiction and the legal basis for any cross-border transfer. Fourth, whether the subprocessor has access to data in cleartext or only in encrypted form. Fifth, the retention period — does the subprocessor delete data when the service ends, or do they keep it?
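The fields Herman lists could be expressed as a machine-readable disclosure. This is a hypothetical format, not any existing standard; the field names and the example vendor are invented for illustration:

```python
# Hypothetical machine-readable subprocessor disclosure following the fields
# Herman lists. No such standard exists today; all names here are invented.
from dataclasses import dataclass

@dataclass
class SubprocessorDisclosure:
    name: str                   # legal name of the subprocessor
    data_categories: list[str]  # e.g. account credentials, usage analytics
    purpose: str                # authentication, storage, analytics, AI training
    jurisdiction: str           # where the data is processed
    transfer_basis: str         # legal basis for any cross-border transfer
    cleartext_access: bool      # can the vendor read data unencrypted?
    retention: str              # what happens to the data when the service ends

disclosure = SubprocessorDisclosure(
    name="AnalyticsCorp",
    data_categories=["usage analytics"],
    purpose="product analytics",
    jurisdiction="US",
    transfer_basis="Standard Contractual Clauses",
    cleartext_access=False,
    retention="deleted within 30 days of contract termination",
)
print(disclosure.name, disclosure.jurisdiction, disclosure.purpose)
```

A standardized record like this would also make disclosures comparable across services, which is the property Corn notes that emissions reports have and subprocessor lists lack.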
Corn
Probably a link to the actual data processing agreement, or at least the relevant clauses. If the whole point is transparency, make the contract transparent.
Herman
That's the radical proposal, and it's one that privacy advocates have been pushing for. But companies resist because DPAs contain commercially sensitive information about their infrastructure and vendor relationships. There's a genuine tension between transparency and commercial confidentiality.
Corn
Though I'd argue that if your commercial confidentiality depends on users not knowing what you're doing with their data, that might be a sign that what you're doing is not great.
Herman
That's a fair point. And it's worth noting that some companies have moved in this direction voluntarily. There are services that publish fairly detailed subprocessor lists with descriptions of what each subprocessor does and what data they handle. It's not universal, but it's becoming more common.
A number of the larger enterprise SaaS platforms — think collaboration tools, cloud infrastructure providers. They've realized that their business customers, especially in regulated industries, actually need this information for their own compliance. So they provide it, not because the regulation requires that level of detail, but because the market demands it.
Corn
The enterprise market is driving better transparency than the consumer market, because enterprise customers have actual leverage. They can demand detailed subprocessor information as a condition of the contract, and they get it. Individual consumers get the generic email.
Herman
That's one of the underappreciated dynamics in data protection: there's a two-tier system. Business users get real transparency because they have negotiating power. Individual consumers get transparency theater because they don't.
Corn
Which brings us back to Daniel's original question: what's the point of sharing this information with users? The answer seems to be that for individual users, there isn't much point. The point is for everyone else.
Herman
The "everyone else" includes the company's own compliance team, who now have to maintain a subprocessor list and think about who they're sharing data with. It includes the regulators, who can audit those lists. It includes the privacy advocates and journalists who can spot problems. And it includes the competitor companies who can see what the market standard is.
Corn
The individual user is essentially a delivery address for a notification that's really intended for a broader ecosystem. You're cc'd on a conversation that's not actually for you.
Herman
You're cc'd on a compliance conversation.
Corn
Like most cc'd emails, the correct action is to file it and move on with your life.
Herman
Unless you happen to be the kind of person who reads cc'd emails and spots things. Which most people aren't, and that's fine. The system doesn't actually need most people to read them. It just needs enough people to read them that the threat of scrutiny is credible.
Corn
The minimum viable scrutiny threshold.
Corn
Alright, so to pull this together: a subprocessor is a subcontractor to your service provider's service provider. Under GDPR and similar laws, you have to be notified when they change. The notification is almost entirely unactionable for individuals. The real purpose is to create a discoverable record that enables institutional oversight, and to force companies to audit their own data supply chains. The system works imperfectly, works better in jurisdictions with strong civil society, and creates a two-tier transparency regime where businesses get real information and consumers get a company name and a shrug.
Herman
That's a solid summary. And I'd add: if you're a consumer who actually wants to act on one of these notifications, look at the jurisdiction of the subprocessor, cross-reference their privacy policy with what the controller says about them, and treat frequent subprocessor churn or a very long subprocessor list as a yellow flag. But also, don't feel bad about ignoring most of these emails. You're not failing at data protection. The system just isn't designed for you to act on them individually.
Corn
The system designed for you to not act, but to feel informed while not acting.
Herman
That's the less charitable version, but it's not entirely wrong.
Corn
Now: Hilbert's daily fun fact.

Hilbert: In the 1920s, the St. Petersburg paradox — a puzzle about how much you should pay to play a coin-toss game with infinite expected value — led to Daniel Bernoulli's concept of diminishing marginal utility, which later became foundational to modern economics. But the unintended consequence was that early probability theorists spent decades arguing about whether the paradox revealed a flaw in expected value theory or a flaw in human psychology, and the debate inadvertently delayed the development of behavioral economics by about forty years, because everyone was too busy defending their mathematical models to consider that maybe people just don't think that way.
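The divergence Hilbert describes, and Bernoulli's log-utility resolution, can be checked numerically. In the game, a head on toss k has probability 1/2^k and pays 2^k, so every term of the expected-payout sum equals 1 and the sum diverges, while the expected log-utility sum converges to 2 ln 2. A minimal sketch:

```python
import math

# St. Petersburg game: toss a fair coin until the first head.
# A head on toss k (probability 1/2**k) pays 2**k.

def expected_value_terms(n):
    """Terms of the expected-payout sum for the first n tosses; each equals 1."""
    return [(0.5 ** k) * (2 ** k) for k in range(1, n + 1)]

def expected_log_utility(n):
    """Partial sum of expected log-utility (Bernoulli's fix) over n tosses."""
    return sum((0.5 ** k) * math.log(2 ** k) for k in range(1, n + 1))

# The monetary expectation adds 1 per toss, so it grows without bound:
print(sum(expected_value_terms(50)))   # 50.0 after 50 terms, and climbing

# The log-utility sum converges to 2*ln(2):
print(expected_log_utility(50))        # ~1.3863
print(2 * math.log(2))                 # ~1.3863
```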
Corn
Probability theory got stuck in a philosophical cul-de-sac because nobody wanted to admit that humans are irrational.
Corn
This has been My Weird Prompts. Our producer is Hilbert Flumingtop. You can find us at myweirdprompts dot com. We'll be back with another one soon.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.