#1520: Is Truth Illegal? The Global Crackdown on Fake News

Explore how countries are codifying "truth" into law and the high stakes of criminalizing disinformation in the age of AI.

Episode Details
Duration: 17:27
Pipeline: V5
TTS Engine: chatterbox-regular
AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The digital landscape is undergoing a fundamental transformation. What was once a debate over social media moderation has shifted into the realm of criminal justice. Across the globe, the stakes for sharing information have escalated from losing followers to facing imprisonment. This shift is most visible in regions of high conflict, where the definitions of truth and falsehood are being codified into law while the ground is still shifting.

The Criminalization of Narrative
In authoritarian contexts, "anti-fake news" laws are increasingly used to protect a regime’s monopoly on the narrative rather than to protect the public from falsehoods. In Iran, for example, the judiciary has recently charged over a hundred individuals under statutes regarding the "spreading of lies." These charges carry severe physical and legal penalties, illustrating how a law against disinformation can function as a tool for political suppression. When the state becomes the sole arbiter of truth, any information deemed inconvenient to the government can be labeled as "fake," effectively turning journalism into a criminal act.

AI and the Scale of Deception
The technical challenge of managing information has been exacerbated by the rise of synthetic media. Reports indicate a massive surge in AI-generated deepfakes targeting political opposition. Furthermore, the emergence of "pink slime" websites—automated shells that mimic legitimate local news outlets—has allowed for the mass production of pro-state narratives. This creates a "hall of mirrors" effect where the state uses AI to create disinformation, then cites the existence of that disinformation to justify restrictive new laws and crackdowns.

The Compliance Trap for Platforms
Democratic regions are also moving toward heavy regulation, though through financial rather than physical penalties. In Brazil and the European Union, new legislative frameworks impose massive fines on platforms that fail to remove "manifestly illegal" content within narrow windows: Brazil's "Fake News Bill" allows penalties of up to 10% of national revenue, while the EU's Digital Services Act (DSA) allows up to 6% of global annual turnover.

This creates a "compliance trap." Faced with the threat of bankruptcy-level fines, social media companies are incentivized to over-censor content rather than risk a legal dispute. This effectively turns private tech companies into deputy censors for the state. Instead of acting as neutral pipes for communication, platforms are being forced to manage discourse to meet regional regulatory standards.

The Rise of the Splinternet
As different nations adopt varying definitions of truth and legality, the dream of a global digital town square is fading. We are entering the era of the "Splinternet," where the internet is fractured into regional zones. In this new reality, truth is determined by local geography and the specific statutes of the ruling government. The core principle that the remedy for bad speech is more speech is being replaced by a system of managed discourse, where automated sentiment analysis and real-time detection tools ensure that only state-approved narratives survive.
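The "automated sentiment analysis and real-time detection tools" described above can be sketched, in grossly simplified form, as a keyword-driven flagger. Everything here is a hypothetical illustration of the mechanism, not any real system: actual deployments use trained language models, and the phrase list and threshold below are invented for the example.

```python
# Grossly simplified illustration of automated narrative flagging:
# score posts against a watch-phrase list and flag anything over a threshold.
# Real systems use ML sentiment models; this is a toy stand-in.
WATCH_PHRASES = {"border skirmish", "protest footage"}  # hypothetical list

def flag_post(text: str, threshold: int = 1) -> bool:
    """Flag a post if it contains at least `threshold` watched phrases."""
    hits = sum(phrase in text.lower() for phrase in WATCH_PHRASES)
    return hits >= threshold

posts = [
    "Lovely weather in the capital today.",
    "New protest footage from the border skirmish is circulating.",
]
flagged = [p for p in posts if flag_post(p)]
print(flagged)  # only the second post is flagged
```

Even this toy version shows why automated enforcement is dangerous: the flag fires on phrase presence alone, with no human reading of context, which is precisely the failure mode described above.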


Episode #1520: Is Truth Illegal? The Global Crackdown on Fake News

Daniel's Prompt
Daniel
Custom topic: During the Iran War, we're seeing a vast amount of conspiracy theories and fake news being propagated. Dovetailing with our recent discussions about the limits of free speech, we should use this episo…
Corn
I was reading through some legal briefs this morning, and it struck me how the stakes for being wrong online have shifted from losing followers to losing your freedom. It is a heavy way to start the day, but with everything unfolding in the Middle East right now, it feels like we have crossed a point of no return.
Herman
It is a massive shift, Corn. Herman Poppleberry here, and you are right. We are seeing a fundamental transformation in how the state views information. It is no longer just a social media moderation problem. It is becoming a criminal justice problem.
Corn
Today's prompt from Daniel is about whether fake news is actually illegal, and he is specifically pointing us toward the current conflict in Iran as a case study. It is a timely question because the definitions of truth and falsehood are being codified into law while the ground is still shaking.
Herman
Daniel is always spot on with the timing. What is happening in Iran right now is essentially the front line of this legal experiment. Just last week, on March eighteenth, the Iranian judiciary announced that over one hundred twenty people were charged under Article seven hundred forty-six of their Islamic Penal Code. The charge is Nashr-e Akazib, which translates to spreading lies.
Corn
And we are not talking about a slap on the wrist or a suspended account here. These individuals are looking at up to two years in prison and seventy-four lashes. It is a brutal reminder that in an authoritarian context, the law against fake news is not about protecting the truth. It is about protecting the regime's monopoly on the narrative.
Herman
That is the core paradox we have to wrestle with today. Can you actually legislate truth without inadvertently creating a Ministry of Truth? Because while we look at Iran and see an obvious tool for suppression, over fifty-five countries around the world have now passed or proposed their own versions of anti-fake news laws as of early twenty twenty-six.
Corn
It feels like a global contagion. Every government looks at the chaos of the digital town square and thinks, if we just had the right statute, we could fix this. But the definition of fake news is notoriously slippery. Is it a provable factual error? Is it a misleading headline? Or is it just information that the government finds inconvenient during a crisis?
Herman
That is exactly where the technical side gets so messy. If you look at the UN Human Rights Council report from March twelfth, they noted a four hundred percent increase in artificial intelligence generated deepfakes targeting Iranian opposition leaders. When you have that level of synthetic deception, governments argue they have no choice but to step in. They say the marketplace of ideas has been poisoned by bots and algorithms.
Corn
I can hear the counter-argument already. If the government is the one deciding what is a deepfake and what is a leaked video of a protest, then the law becomes a filter for reality. You have Gholam-Hossein Mohseni-Eje'i leading the Iranian judiciary right now, and his track record is not exactly one of neutral fact-finding. He is using Article seven hundred forty-six as a digital dragnet.
Herman
He really is. And it is not just about individual posts. What fascinates me from a technical perspective is the rise of what the Global Disinformation Index calls pink slime websites. Their March twenty twenty-six report identified over fifteen hundred of these fake local news outlets. They look like legitimate, independent journalism, but they are actually automated shells spreading pro-state narratives about the border escalations.
Corn
I remember we touched on the theory behind this in episode five hundred ninety-three when we talked about the Dead Internet Theory and how artificial intelligence scales digital deception. But seeing it weaponized like this in Iran is a different beast. It is one thing for a bot to try to sell you a cryptocurrency. It is another thing for fifteen hundred fake news sites to manufacture a justification for internal crackdowns.
Herman
It creates this hall of mirrors effect. The state uses artificial intelligence to create the fake news, and then uses the existence of fake news as a justification to pass laws that allow them to arrest anyone who contradicts the state-approved version of events. It is a closed loop of information control.
Corn
So, let's look at how this is playing out in more democratic contexts. Because it is easy to point at Iran and say that is a human rights violation. But what about Brazil or the European Union? They are not using lashes, but they are using some pretty heavy financial hammers.
Herman
Brazil is a fascinating comparison. Just two days ago, on March twenty-second, their Supreme Federal Court upheld the key provisions of what people call the Fake News Bill, or PL twenty-six thirty. During periods of social unrest, platforms are mandated to identify and remove what the law calls manifestly illegal content within twenty-four hours.
Corn
Manifestly illegal is a very broad term. Who is making that call in the heat of a protest? If a platform fails to take it down in that twenty-four hour window, they can be fined up to ten percent of their national revenue. That is a massive incentive for a company like Meta or Google to just over-censor everything the moment a situation gets tense.
Herman
That is the compliance trap. If the fine is ten percent of your revenue, you do not spend time debating the nuances of free speech. You just hit the delete button. It effectively turns private companies into the state's deputy censors.
Corn
It is the same logic we see with the European Union's Digital Services Act. On March nineteenth, the European Commission opened formal proceedings against the platform formerly known as X. They are looking at systemic risks related to Iranian state-sponsored disinformation. Under the Digital Services Act, the fines can go up to six percent of global annual turnover.
Herman
Six percent of global turnover is enough to bankrupt a platform or at least force a total retreat from a market. What I find technically interesting about the Digital Services Act is that it does not just look at individual pieces of content. It looks at the architecture of the platform. It asks whether the algorithms are designed in a way that prioritizes sensational, fake content over verified information.
Corn
I like that approach better in theory, but in practice, it still puts a group of unelected bureaucrats in Brussels in charge of deciding what constitutes a systemic risk. If I am posting a critique of European energy policy that happens to go viral, is that a systemic risk? Is that disinformation if I use a statistic they disagree with?
Herman
That is where the friction lies. The European Commission argues they are just enforcing transparency and risk mitigation. But when you apply those same frameworks to a high-tension situation like the current conflict in Iran, the lines get very blurry. We saw this in episode fifteen hundred sixteen when we discussed the evolution of the false flag. Modern disinformation is basically a digital false flag operation. It is designed to look like a grassroots movement or a legitimate news report to trick the observer.
Corn
And that is why the Iranian situation is such a perfect, albeit tragic, case study. You have a four hundred percent surge in deepfakes. You have fifteen hundred fake news sites. The information environment is genuinely toxic. But is the solution a law that allows the state to whip people for spreading lies?
Herman
Obviously not from a human rights perspective, but from a state preservation perspective, it is the only tool they have left. When you lose the ability to convince people, you resort to the ability to coerce them. The UN report from March twelfth was very clear about this. They called for a global moratorium on these vague disinformation laws because they are almost always used to stifle political dissent.
Corn
I think people often misunderstand what these laws are actually doing. They think it is about stopping Uncle Leo from sharing a conspiracy theory on Facebook. But in reality, as we are seeing in Iran, it is about creating a legal mechanism to silence opposition leaders and journalists. If you report on a border skirmish that the government wants to keep quiet, you are not a journalist anymore. You are a criminal spreading lies under Article seven hundred forty-six.
Herman
It also erodes the traditional neutral platform defense. For decades, the internet operated on the principle that the platform was not responsible for what the users said. But these new laws in Brazil and the European Union are essentially ending that era. They are saying the platform is responsible for the aggregate effect of the speech it hosts.
Corn
It is a total pivot. We are moving from an internet of permissionless speech to an internet of managed discourse. And the technical tools are keeping pace. I was reading about how the Iranian judiciary is using automated sentiment analysis to flag accounts that are deviating from the state narrative in real-time. It is not just that fake news is illegal. It is that the detection of it has been automated.
Herman
That is the terrifying part. When you combine a vague law with automated enforcement, you get a system where you can be charged, sentenced, and penalized before a human being has even looked at the context of your post. Gholam-Hossein Mohseni-Eje'i has been very vocal about using these high-tech solutions to clean up the digital space.
Corn
Clean up is a very sanitized way of saying purge. But Herman, let's talk about the economic side of this. If you are a social media platform and you are facing a ten percent revenue fine in Brazil and a six percent global fine in Europe, and you are also trying to navigate the minefield of Iranian sanctions, what do you do?
Herman
You either leave the market entirely or you build a massive, expensive censorship apparatus that errs on the side of the government. This is what people call the Splinternet. We are seeing the global internet fracture into regional zones where the definition of truth is determined by the local regulator.
Corn
It makes the idea of a global digital town square feel like a naive dream from the two thousands. If the law defines truth, and every country has a different law, then truth becomes a matter of geography.
Herman
Which is why the conservative perspective on this is so important. We have always argued that the best remedy for bad speech is more speech, not government intervention. When you give the state the power to define what is fake, you are handing them a weapon that will eventually be used against you, regardless of which side of the aisle you are on.
Corn
It is the ultimate slippery slope. Today it is a deepfake of an opposition leader. Tomorrow it is a critique of a government's economic policy. The moment you concede that the state has the authority to regulate the truthfulness of a statement, you have lost the foundation of a free society.
Herman
And we are seeing this play out in real-time with the proceedings against X. The European Commission is essentially saying that the platform is not doing enough to stop the spread of Iranian disinformation. But X could argue that they are just a neutral pipe and that it is up to the users to discern what is real. The Digital Services Act says that is not good enough anymore.
Corn
It is a high-stakes game of chicken. If X refuses to comply, they get fined billions. If they do comply, they become an arm of the European government's information policy. There is no winning move for the platform.
Herman
There is also the issue of the pink slime sites. These fifteen hundred outlets in Iran are not social media posts. They are independent domains. You cannot just tell a platform to take them down. You have to go after the hosting providers, the domain registrars, and the search engines. It requires a level of total digital surveillance that is hard to imagine in a free country.
Corn
But it is exactly what Iran is doing. They have created a national intranet that allows them to cut off the outside world while flooding the internal market with their own state-sponsored fake news. Then they use Article seven hundred forty-six to arrest anyone who tries to point out the contradictions.
Herman
It is the perfect closed system. And as we look toward the future, the question is whether other countries will adopt the Iranian model under the guise of safety and security. We are already seeing the language of safety being used to justify the Brazilian bill and the European Digital Services Act. They say we are protecting the public from harmful disinformation. But who defines harm?
Corn
That is the million-dollar question. Or in the case of the European Union, the multi-billion-euro question. I think for our listeners, the takeaway has to be a heightened sense of digital literacy. You cannot rely on the law to protect you from fake news, because the law is just as likely to be used to feed you fake news.
Herman
I agree. You have to look at the source, check the metadata of images, and look for corroboration from multiple independent outlets. If a story seems perfectly designed to trigger your anger or fear, it is probably being pushed for that exact reason. Whether it is a deepfake from an Iranian bot farm or a pink slime article, the goal is the same: to bypass your critical thinking.
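Herman's suggestion to "check the metadata of images" can be illustrated with a crude, standard-library-only sketch: scan a JPEG's leading bytes for the EXIF identifier that camera originals usually carry. This is a toy heuristic for illustration, not forensic tooling, and an absent marker proves nothing on its own.

```python
# Crude illustration of the "check the metadata" advice.
# JPEG files store EXIF metadata in an APP1 segment whose payload starts
# with the ASCII identifier "Exif\0\0". Camera originals usually have one;
# AI-generated or screenshot-laundered images often do not.
# This is a toy heuristic, not a forensic tool.

def has_exif_segment(jpeg_bytes: bytes) -> bool:
    """Return True if an EXIF identifier appears early in the file."""
    return b"Exif\x00\x00" in jpeg_bytes[: 64 * 1024]

# A hand-built stub with an APP1/EXIF header versus one without:
with_exif = b"\xff\xd8\xff\xe1\x00\x10Exif\x00\x00" + b"\x00" * 8
without_exif = b"\xff\xd8\xff\xdb\x00\x43" + b"\x00" * 8
print(has_exif_segment(with_exif))     # → True
print(has_exif_segment(without_exif))  # → False
```

In practice you would use a real EXIF parser (for example, the Pillow library's `Image.getexif()`) and weigh many signals together, but the principle matches the hosts' advice: provenance metadata is one cheap, checkable signal among several.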
Corn
We also have to be aware of the false flags we discussed in episode fifteen sixteen. Just because a site looks like a local news outlet in Tehran does not mean it is being run by people in Tehran. It could be a state-sponsored operation from halfway around the world. The digital world has made it incredibly easy to wear a mask.
Herman
It really has. And the rise of generative artificial intelligence has made those masks much more convincing. The four hundred percent increase in deepfakes that the UN reported is just the beginning. As these tools get better and cheaper, the cost of generating high-quality disinformation drops to near zero.
Corn
Which means the volume of fake news will only increase, which will lead to more calls for more laws, which will lead to more government control over speech. It is a vicious cycle.
Herman
It is. And we have to ask ourselves if we are willing to pay the price of a managed internet. Are we okay with a world where a bureaucrat or a judiciary like the one in Iran gets to decide what we are allowed to see and say?
Corn
I think for most people, the answer is a resounding no, but the fear of disinformation is being used to nudge us in that direction. We see it in the way the Iranian conflict is being reported. There is so much noise and so much conflicting information that people are practically begging for someone to come in and tell them what is true.
Herman
And that is the most dangerous moment for liberty. When people are so overwhelmed by chaos that they are willing to trade their freedom for the illusion of certainty. Gholam-Hossein Mohseni-Eje'i knows this. He is counting on it.
Corn
It is a grim reality, but it is one we have to face head-on. Daniel's prompt really forced us to look at the dark side of information regulation. It is not just about facts and figures. It is about the power of the state over the individual's mind.
Herman
It really is. And as we move further into twenty twenty-six, this conflict between state-enforced truth and individual expression is only going to get more intense. The Iranian border escalations are just one theater in a much larger global war for information control.
Corn
Well, on that cheery note, I think we have covered the legal and technical landscape pretty thoroughly. It is a lot to digest, but it is better to be aware of the digital lashes before they start swinging.
Herman
It is. We have to keep our eyes open. This is not just a technology problem. It is a fundamental question of how we want to live in the twenty-first century.
Corn
Before we wrap up, I want to give a big thanks to our producer, Hilbert Flumingtop, for keeping us on track. And a huge thank you to Modal for providing the GPU credits that power this show. We literally could not do this without their support.
Herman
This has been My Weird Prompts. If you are finding these deep dives helpful, we would love it if you could leave us a review on your favorite podcast app. It really helps other people find the show and join the conversation.
Corn
We will be back next time with another prompt from Daniel. Until then, stay curious and keep questioning the narrative.
Herman
See you next time.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.