#593: Manufacturing Consent: How AI Scales Digital Deception

Is your feed real? Herman and Corn explore how AI is turning crude botnets into digital personas that are nearly indistinguishable from real people.

Episode Details
Duration: 28:10
Pipeline: V4
TTS Engine: LLM

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The Illusion of Connection: Navigating the Dead Internet

In the latest episode, hosts Herman Poppleberry and Corn tackle a chilling modern phenomenon: the "Dead Internet Theory." The discussion begins with a relatable observation—the feeling that social media has become a hollow echo chamber where repetitive opinions drown out genuine human interaction. This isn't just a cynical outlook; it is a documented trend where a significant portion of internet traffic and content is no longer generated by humans, but by sophisticated botnets and AI-driven influence campaigns.

The conversation was sparked by a query from their housemate, Daniel, who noticed a persistent hum of bot activity on X (formerly Twitter) while trying to follow real-time news. Daniel’s central question serves as the anchor for the episode: How can political entities, such as the Likud party in Israel or the Kremlin, scale their influence without sacrificing the credibility of their fake accounts?

From Botnets to Sock Puppets

Herman clarifies the terminology that often gets muddled in public discourse. Historically, a "botnet" was a blunt instrument—a swarm of compromised computers used for basic, repetitive tasks like spamming a hashtag. These were easy to spot: accounts with no profile pictures and gibberish handles posting the exact same sentence simultaneously.

However, the "sock puppet" represents a more dangerous evolution. A sock puppet is a digital persona designed for deception, complete with a bio, interests, and a posting history. Herman explains that the historical barrier to these operations was scalability. In the past, creating a convincing fake persona required a human operator and years of "grooming" the account. Today, that barrier has vanished.

The AI Force Multiplier

The turning point in digital deception is the integration of Large Language Models (LLMs) and agentic AI. Herman points out that while defense contractors in 2011 were developing software that allowed one person to manage ten accounts, the technology of 2026 allows a single operator to oversee thousands.

By feeding an AI a specific persona—such as a "skeptical nurse from Ohio"—operators can generate thousands of unique, contextually relevant posts in seconds. These bots don't just post; they interact, like each other's content, and create a simulated "grassroots" movement. This process, known as "astroturfing," exploits the human psychological tendency toward social proof. If a user sees five hundred different accounts supporting a specific narrative, they are far more likely to perceive it as a mainstream opinion, even if those accounts are all controlled by a single server.

Narrative Laundering and Cyborg Accounts

One of the most provocative concepts discussed is "narrative laundering." Herman describes this as a multi-stage process where misinformation begins on a low-credibility site, is amplified by AI sock puppets, is shared by misled real users, and eventually gains enough "social proof" to be cited by legitimate news outlets.

The hosts also explore the rise of "cyborg accounts"—a hybrid approach where a human strategist uses automated tools to boost their reach. This method was notably seen in the "Stoic" campaign, run by an Israeli firm that was caught using ChatGPT to generate comments and building fake news sites to influence U.S. lawmakers. By blending human strategy with machine execution, these operations become nearly indistinguishable from legitimate political activism.

The Arms Race of Detection

If these networks are so prevalent, why don't platforms simply shut them down? Corn and Herman explain that we are currently in an "arms race" between manipulators and platform security. To bypass detection, botnets now use "jitter"—randomizing the timing of posts to avoid looking mechanical. They use LLMs to ensure no two posts are identical, and they draw on a massive black market for "aged accounts."
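To make the timing point concrete, here is a minimal sketch in Python of the kind of naive regularity check that jitter is designed to defeat: it flags accounts whose gaps between posts are nearly constant. The data shape and threshold are illustrative assumptions, not any platform's actual detection logic.

```python
from statistics import mean, pstdev

def interval_regularity(post_times):
    """Coefficient of variation of the gaps between posts (in seconds).
    Near-zero values mean the account posts on an almost fixed schedule,
    the mechanical pattern that "jitter" is meant to hide."""
    gaps = [t2 - t1 for t1, t2 in zip(post_times, post_times[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return None  # not enough history to judge
    return pstdev(gaps) / mean(gaps)

def flag_mechanical(accounts, threshold=0.05):
    """accounts: {handle: list of unix timestamps} -- hypothetical input shape.
    Returns handles whose posting cadence looks machine-scheduled."""
    flagged = []
    for handle, times in accounts.items():
        cv = interval_regularity(sorted(times))
        if cv is not None and cv < threshold:
            flagged.append((handle, round(cv, 4)))
    return flagged

# Example: one account posting every 600 seconds exactly vs. a noisier, human-looking one
accounts = {
    "user894276": [0, 600, 1200, 1800, 2400, 3000],
    "dog_photos_fan": [0, 410, 1900, 2200, 5300, 9000],
}
print(flag_mechanical(accounts))  # [('user894276', 0.0)]
```

Randomized jitter pushes that coefficient of variation up into human-looking territory, which is one reason detection has shifted toward network-level signals rather than any single account's schedule.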

These aged accounts are perhaps the most cynical tool in the kit. Operators purchase accounts created years ago that have a dormant history of "normal" human activity—posts about pets or sports. By "wearing the skin" of these old accounts, influence operations can bypass the filters that typically flag new accounts, giving their propaganda an unearned sense of longevity and history.
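A companion heuristic, again purely illustrative, targets the repurposed aged account described above: a long dormant stretch followed by a sudden, single-topic burst of activity. The field names and thresholds below are assumptions made for the sketch, not a real moderation API.

```python
from datetime import timedelta

def looks_repurposed(post_dates, topics,
                     min_gap_days=365, burst_window_days=30, min_burst_posts=50):
    """Flag the dormancy-then-burst signature of a bought 'aged account'.

    post_dates: sorted list of datetime objects, one per post.
    topics: coarse topic label per post (same length as post_dates).
    """
    if len(post_dates) < 2:
        return False
    # Longest silent stretch anywhere in the account's history.
    longest_gap = max((b - a).days for a, b in zip(post_dates, post_dates[1:]))
    # Volume and topical focus in the most recent window.
    cutoff = post_dates[-1] - timedelta(days=burst_window_days)
    recent_topics = [t for d, t in zip(post_dates, topics) if d >= cutoff]
    single_topic = len(set(recent_topics)) <= 1
    return (longest_gap >= min_gap_days
            and len(recent_topics) >= min_burst_posts
            and single_topic)
```

An account with years of posts about pets that wakes up and produces fifty political posts in a month would trip all three conditions; a genuinely returning user usually would not stay on a single topic.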

Conclusion: A Shift in Reality

The episode concludes with a sobering takeaway: modern influence operations are no longer just technical challenges; they are psychological ones. The goal isn't necessarily to change a person's mind with a single fact, but to shift their perception of what the majority believes. As Herman notes, when every major government adopts these "computational propaganda" tactics, the internet stops being a town square and starts being a manufactured reality. For listeners like Daniel, the challenge is no longer just finding the news, but discerning which "people" in the digital void are actually there.

Downloads

Episode Audio: download the full episode as an MP3 file
Transcript (TXT): plain text transcript file
Transcript (PDF): formatted PDF with styling

Read Full Transcript

Episode #593: Manufacturing Consent: How AI Scales Digital Deception

Daniel's Prompt
Daniel
Hi Herman and Corn. We’ve talked in many episodes about the murky world of geopolitics and the battleground for truth, but one topic we haven’t touched upon yet is botnets. I spend a lot of time on X, and while it's useful for real-time news, I’ve read for years about governments running influence campaigns with botnets—like the Likud party here in Israel or the Kremlin influencing foreign elections. I don’t see direct evidence of these suspicious-looking accounts myself, but I suppose they aren’t supposed to look suspicious.

This leads to the challenge of "sock puppets," or fake online identities. If they’re going to be credible, they require a lot of work, which doesn’t really scale. I’d love to get your take on this: are botnets actually being used by states to poison or manipulate social discourse? And if so, how do they address the scalability challenge? It seems like an inherent contradiction that you can't easily scale compelling fakes. Let’s talk about sock puppets, botnets, and influence operations.
Corn
You ever get that feeling when you are scrolling through your feed that you are not actually talking to people? It is like you are shouting into a void, but the void is shouting back with very specific, very repetitive opinions.
Herman
It is funny you say that, Corn. It is actually a documented phenomenon often called the dead internet theory. The idea is that most of what we see online is no longer human-generated. I am Herman Poppleberry, by the way, for anyone just joining us. And today, we are diving into a topic that our housemate Daniel sent over. He has been spending a lot of time on X lately, trying to get real-time news, but he is noticing this persistent chatter about botnets and influence campaigns.
Corn
Right, and Daniel raised a really fascinating point. He mentioned how parties like Likud here in Israel or the Kremlin abroad are often accused of using these massive networks of fake accounts. But his big question was about scalability. How do you actually make a fake person look real enough to be credible without spending a thousand hours on every single profile? It seems like there is this inherent tension between quantity and quality.
Herman
It is the classic trade-off. Historically, if you wanted ten thousand accounts, they were usually going to look like junk. If you wanted one really convincing undercover agent, it took a human years to build that persona. But the technology has shifted the goalposts significantly, especially in the last year or two. We are now in an era where quality scales just as easily as quantity.
Corn
So, let us start there. When we talk about a botnet versus a sock puppet, are we talking about the same thing? Because I think people use those terms interchangeably, but they feel different in practice.
Herman
That is a great distinction to start with. A botnet, historically, was just a network of compromised computers or accounts used to perform repetitive tasks. Think of it like a digital swarm. They were used for distributed denial of service attacks, or just to spam a hashtag until it trends. They were blunt instruments.
Corn
Right, like those accounts with no profile picture and a handle like User eight nine four two seven six who all tweet the exact same sentence at the exact same time.
Herman
Exactly. Those are easy to spot. But a sock puppet is more of a digital persona. It is a fake identity created for the purpose of deception. The term comes from the idea of a puppeteer putting their hand in a sock to talk to themselves. In the context of influence operations, a sock puppet is designed to look like a real person. They have a bio, they have interests, they interact with other people. The scalability challenge Daniel mentioned is exactly where the modern world of influence operations lives. How do you turn a bot into a sock puppet at scale?
Corn
And that is where the murky world of geopolitics comes in. Daniel mentioned he does not see direct evidence of these accounts, but then he noted that maybe they are not supposed to look suspicious. Is that the goal? To be invisible?
Herman
Invisibility is one goal, but the more effective goal is mimicry. You do not want to be invisible; you want to look like part of the furniture. You want to look like just another concerned citizen or an angry voter. There is a concept called astroturfing, which is basically creating a fake grassroots movement. If you see ten thousand people screaming about a specific policy, you might think, wow, there is a real groundswell of opinion here. But if those ten thousand people are actually just three guys in a basement in Saint Petersburg or a dedicated office in Tel Aviv using specialized software, then the reality is being manufactured.
Corn
It is wild because it targets our natural heuristic for truth. We tend to believe something more if we hear it from multiple sources. It is called the social proof principle. If I see one person saying a candidate is a lizard, I laugh. If I see five hundred people saying it, even if I do not believe they are lizards, I start to wonder if there is something else going on. But Herman, let us get into the how. Daniel’s point about scalability is the heart of this. Ten years ago, you had to hire people in click farms to manually manage these accounts. How has that changed?
Herman
It has changed because of what we call persona management software. Back in two thousand eleven, there were actually leaked documents from a United States defense contractor that talked about software allowing one operator to manage up to ten separate identities. These identities would have background history, distinct digital footprints, and they could even appear to be posting from different geographic locations.
Corn
One person managing ten accounts still does not feel like it would move the needle for a national election, though. That is just a drop in the ocean.
Herman
You are right, but that was fifteen years ago. Fast forward to today, in early two thousand twenty-six, and we have large language models and agentic AI. This is the game changer. Before, the bottleneck was language. You needed a human to write a tweet that did not sound like a machine translation. Now, you can feed a model a persona description. Tell it, you are a thirty-five year old nurse from Ohio who is skeptical of government spending. Now, write fifty different variations of a post complaining about the new infrastructure bill, and make sure to engage with anyone who replies using a polite but firm tone. It can do that in seconds, and it can do it for ten thousand different personas simultaneously.
Corn
So the scalability challenge is effectively solved by generative artificial intelligence. You can have a single operator overseeing thousands of accounts, each with a unique voice, a unique bio, and a unique posting history, all generated on the fly.
Herman
Precisely. And they can interact with each other. This is the part that really gets people. These bots can be programmed to like each other’s posts, reply with supportive comments, and create a little echo chamber that looks like a community. They can even engage in what is called narrative laundering.
Corn
Narrative laundering? That sounds like something out of a spy novel.
Herman
It is not far off. It is the process where a piece of misinformation starts on a low-credibility site or a bot account, then gets picked up by these sock puppets, then gets shared by real people who are misled, and eventually, it might even get cited by a legitimate news organization or a politician. By the time it reaches the mainstream, the original source is buried under layers of social proof. We saw this with the Doppelganger campaign out of Russia, where they created perfect clones of major news sites like Der Spiegel or Fox News to host fake stories, which were then blasted out by tens of thousands of AI-managed accounts.
Corn
I see this a lot on X. You will see a screenshot of a headline, but you can not find the actual article. Or a quote that seems just a little too perfect for a certain political narrative. But Daniel mentioned the Likud party here in Israel. There was that big report a few years ago by Big Bots Project, and more recently, reports about an Israeli firm called Stoic that was caught running an influence campaign targeting United States lawmakers. What was the takeaway there?
Herman
The Israeli context is fascinating because it is so small and the language is so specific. In the Hebrew-speaking web, you can not just use a generic translation. The slang and the cultural references are very distinct. The Stoic campaign, which was uncovered in twenty-twenty-four, used ChatGPT to generate comments on X and Facebook. They created fake news sites with names like Non-Partisan-Post to give their claims an air of authority. What was interesting was that they were not just using bots; they were using what researchers call cyborg accounts.
Corn
Cyborg accounts? Like half-human, half-machine?
Herman
Exactly. It is a human who uses automated tools to boost their reach. They might write the core message, but then use a script to blast it out across fifty different platforms or use a botnet to give it those first thousand likes to trick the algorithm into thinking it is viral. It is a hybrid approach that solves the credibility problem Daniel was worried about. You have a human mind behind the strategy, but machine power behind the execution.
Corn
That makes so much sense. It is like a force multiplier. But if these things are so prevalent, why do we not see them? Daniel said he does not see direct evidence. Is it just that we are not looking for the right signs?
Herman
It is because the best ones are designed to be indistinguishable from your crazy uncle or that one guy from high school who got really into politics. If a bot is doing its job well, you should not know it is a bot. You should just think, man, there are a lot of people who disagree with me today. And that is the real danger. It is not about changing your mind on a specific fact. It is about changing your perception of what everyone else thinks. It is about manufacturing a majority.
Corn
It is a psychological operation, then. It is not a technical one. The technical part is just the delivery mechanism.
Herman
Right. There was a study by the Oxford Internet Institute that looked at computational propaganda in over eighty countries. They found that almost every major government is now using some form of social media manipulation to influence their own citizens or foreign populations. This is not just a Russian thing or a Likud thing. It is the new standard for political communication.
Corn
So if I am an operator for one of these states, and I have my persona management software and my large language models, how do I actually get these accounts to stay alive? Do not platforms like X have detection systems?
Herman
They do, but it is an arms race. For every detection method, there is a bypass. For example, platforms look for accounts that post too frequently. So, the botnets now use jitter. They randomize the timing of their posts so they do not look mechanical. Platforms look for duplicate content. So, the bots use those large language models to rewrite every post. Platforms look for accounts that were all created on the same day. So, there is a massive black market for aged accounts.
Corn
Aged accounts? Like fine wine?
Herman
Kind of. You can go onto certain forums and buy accounts that were created in twenty-fifteen. They have a history, they have some old posts about cats or sports, and they have been sitting dormant for years. An influence operator will buy ten thousand of these accounts, change the bios, and suddenly they have a credible-looking army that bypasses the new account filters.
Corn
That is incredibly cynical. You are literally buying the ghost of someone’s old social media presence to use as a skin for a political operative.
Herman
It is a huge industry. And it is not just for politics. It is used for stock market manipulation, for promoting movies, for burying bad reviews. But when it is used by a state to influence an election or a conflict, the stakes are obviously much higher.
Corn
I want to go back to the scalability challenge because I think there is another layer here. Daniel mentioned that if they are going to be credible, they require a lot of work. We have talked about how artificial intelligence helps with the writing, but what about the visual side? The profile pictures? I remember when everyone was using those This Person Does Not Exist AI faces, but those had telltale signs, like weird ears or blurry backgrounds.
Herman
Oh, that is a solved problem now. With modern diffusion models, you can generate a consistent persona. You can create a person and then generate photos of them in different settings. Here is Sarah from Seattle at a coffee shop. Here is Sarah at a protest. Here is Sarah’s dog. It is remarkably easy to create a visual life for these characters. And because these models can generate images that are completely unique, they will not show up in a reverse image search.
Corn
So the reverse image search, which used to be our gold standard for debunking fake accounts, is now basically useless against a well-funded state actor.
Herman
Completely useless. In fact, some of the more sophisticated operations will use real photos of people from obscure social media sites in other countries, or even use photos of people who do not exist but are generated to look like they are from a specific demographic. They will even create a LinkedIn profile for the fake persona to give it professional credibility.
Corn
It feels like we are entering a post-truth era where the cost of creating a believable lie has dropped to near zero, while the cost of verifying the truth remains high.
Herman
That is exactly the imbalance that these operations exploit. It takes me five seconds to generate a convincing lie and blast it to a million people. It might take a team of journalists five days to thoroughly debunk it. By that time, the lie has already done its work. The narrative has shifted.
Corn
Let us talk about the Kremlin for a second, because they are often cited as the masters of this. What makes their approach different?
Herman
The Russians, specifically the Internet Research Agency and its successors, pioneered what they call the multi-vector approach. They do not just push one side of an argument. They will create accounts on both sides. They will have a fake activist account and a fake reactionary account. They will have them argue with each other. The goal is not necessarily to make you vote for a specific person; it is to make you hate your neighbor. It is about social destabilization. They want to amplify existing tensions until the fabric of society starts to fray.
Corn
That is a much deeper level of manipulation. It is not just I want you to believe X. It is I want you to believe that everyone else is your enemy.
Herman
Exactly. They are looking for the fault lines in a society. In the United States, it is race and religion. In Israel, it is the religious-secular divide or the right-left political split. They find those wounds and they just keep poking them with these digital needles until they become infected.
Corn
So when Daniel says he does not see these suspicious accounts, maybe it is because he is looking for a bot, but what he is actually seeing is a perfectly crafted mirror of his own frustrations or the frustrations of people he disagrees with.
Herman
That is the most likely scenario. If you see a post that makes you incredibly angry at the other side, that is exactly when you should be most suspicious. These operations thrive on high-arousal emotions. Anger, fear, outrage. Those are the things that make us share a post without thinking.
Corn
It is like they are hacking our limbic system. Our brains are evolved to react quickly to threats and social conflict, and these botnets are just feeding that instinct twenty-four seven.
Herman
It is a specialized form of engineering. They call it cognitive hacking. You are not hacking the computer; you are hacking the person sitting in front of the computer. And because it is scaled with artificial intelligence, they can run thousands of these experiments simultaneously to see which headlines get the most clicks and which narratives cause the most division.
Corn
Okay, so we have established that it is happening, that it is incredibly sophisticated, and that the scalability challenge has been largely overcome by generative artificial intelligence and persona management software. But I have to ask, what is the counter-move? If the platforms can not keep up, are we just doomed to live in a world of digital ghosts?
Herman
It is a tough question. There are a few things happening. One is that researchers are getting better at identifying coordinated inauthentic behavior. This is a term coined by Meta, but it is used across the industry. They look for patterns that are invisible to the naked eye. For example, they look at the network graph. How are these accounts connected? Do they all follow each other in a way that is statistically impossible for real humans? Do they all share the same metadata?
Corn
Metadata? Like what?
Herman
Even if the text is unique, the underlying technical signature might not be. Maybe they are all using the same specific version of a browser, or they are all coming through the same set of proxy servers. Or, and this is a big one, they all have the same temporal fingerprint.
Corn
You mean they all sleep at the same time?
Herman
Exactly. If ten thousand accounts claiming to be from all over the United States all go silent at exactly three in the morning Moscow time, that is a pretty big red flag. Researchers use these patterns to map out the entire network and then take them down all at once.
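For readers who want to see what Herman's "temporal fingerprint" means in practice, here is a minimal sketch of the idea, assuming post timestamps have already been bucketed into UTC hours per account; the input shape and the 99 percent cutoff are assumptions for illustration only.

```python
from collections import Counter

def shared_quiet_hours(post_hours_by_account, min_silent_share=0.99):
    """post_hours_by_account: {handle: list of UTC hours (0-23) with any activity}.
    Returns the UTC hours in which nearly every account in the cluster is silent."""
    n_accounts = len(post_hours_by_account)
    silent = Counter()
    for hours in post_hours_by_account.values():
        active = set(hours)
        for hour in range(24):
            if hour not in active:
                silent[hour] += 1
    return [h for h in range(24) if silent[h] >= n_accounts * min_silent_share]
```

If thousands of supposedly American accounts all share the same contiguous quiet window, and that window lines up with night hours in a different time zone, the cluster is worth a much closer look.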
Corn
But that feels like a game of whack-a-mole. You take down ten thousand, and they just spin up another ten thousand the next day.
Herman
It is whack-a-mole. But there is also a push for what is called digital provenance. This is the idea of being able to track where a piece of content came from. There is a group called the Coalition for Content Provenance and Authenticity, or C-two-P-A. They are working on standards that would embed metadata into images and videos at the moment of creation, so you can tell if a photo was taken by a real camera or generated by an artificial intelligence.
Corn
That sounds promising for images, but what about text? You can not really embed metadata into a tweet.
Herman
Text is much harder. There are attempts to create AI watermarking, where the language model leaves a subtle mathematical pattern in the words it chooses, but those are notoriously easy to break. You can just ask another AI to rewrite the text, and the watermark is gone.
Corn
So for text-based platforms like X, we are really left with our own critical thinking. Which brings us to the practical side of this. If Daniel, or any of our listeners, is on X and they want to know if they are engaging with a real person or a state-sponsored sock puppet, what should they look for?
Herman
There are a few red flags, though none are foolproof. First, check the account’s history. If they only ever post about one topic, and they post about it with extreme intensity, that is a sign. Real people have hobbies. They talk about the weather, or their dinner, or a movie they saw. If an account is a hundred percent political outrage, be suspicious.
Corn
What about the followers? I always look at who is following them.
Herman
That is a good one. If an account has five thousand followers, but they are all accounts with no profile pictures and gibberish names, that is a bot-boosted account. Also, look at the engagement. If a post has ten thousand likes but only five comments, and those comments are all things like great point or I agree, that is a sign of artificial inflation.
Corn
Another thing I have noticed is the join date. If a whole bunch of accounts all joined in the same month and all have the same posting patterns, that is usually a coordinated campaign.
Herman
Absolutely. And honestly, the biggest thing is to just be aware of your own emotional state. If a post is designed to make you feel white-hot rage, take a breath. Ask yourself, who benefits from me feeling this way? Often, the answer is not a fellow citizen; it is an adversary who wants to see your society paralyzed by infighting.
Corn
That is a really powerful point. The goal is not just to win an argument; it is to destroy the possibility of having an argument at all. If we all think everyone else is a bot or a shill, we stop talking to each other. And that is when the democratic process really breaks down.
Herman
Exactly. It is what researchers call the liar’s dividend. When people know that deepfakes and bots exist, they stop believing anything. They become cynical and disengaged. And a cynical, disengaged public is much easier to manipulate than an informed, active one.
Corn
It is funny, Daniel mentioned the Likud party here in Israel. We have seen some of this play out in our own backyard. During the last few election cycles, there were these huge networks of accounts that were just relentless. And it was not just about promoting Likud; it was about attacking the judiciary, attacking the press, attacking anyone who was seen as an obstacle. It felt like the entire digital space was being poisoned.
Herman
It is a strategy. If you can not win the debate on the facts, you change the environment so that facts do not matter anymore. You flood the zone with noise until people just give up on trying to find the signal. Steve Bannon famously called this flooding the zone with something less pleasant than noise. It is a deliberate tactic to overwhelm the human capacity for processing information.
Corn
So, looking forward, how does this evolve? We are in February of two thousand twenty-six. We have already seen how generative AI has changed things. What is the next step for these botnets?
Herman
I think we are going to see more personalized influence. Imagine a bot that does not just blast out a message to everyone, but instead analyzes your specific profile, your likes, your past comments, and then crafts a specific message just for you. It is micro-targeting on steroids. It is not just an ad; it is a conversation. You might think you are having a debate with a stranger in a comment section, but that stranger is actually an AI designed to slowly nudge your opinion over the course of weeks.
Corn
That is terrifying. It is like a custom-built radicalization engine for every individual user.
Herman
It is the logical conclusion of the technology. We have already seen early versions of this in customer service and sales. Moving it into the political and geopolitical sphere is inevitable. The only defense we really have is to move away from these massive, anonymous platforms and back toward smaller, more verified communities.
Corn
Like the old-school forums or even just real-life interactions?
Herman
Exactly. There is a reason people are moving toward encrypted messaging apps like Signal or WhatsApp for their news and discussion. You know who is in the group. You have a baseline of trust. The era of the global digital town square might be coming to an end because we can no longer verify who is actually in the square with us.
Corn
It is a sad thought, in a way. The promise of the internet was this global connection. But if that connection is being hijacked by state actors to turn us against each other, maybe we need to pull back.
Herman
It is a period of adjustment. We are learning how to live with these tools. Just like we had to learn how to deal with yellow journalism in the nineteenth century or propaganda on the radio in the twentieth, we have to develop a new kind of digital literacy for the twenty-first.
Corn
So, to go back to Daniel’s original question: are botnets actually being used to manipulate social discourse? Yes, absolutely, and at a scale and sophistication that is hard to wrap your head around. And the scalability challenge? It has been solved. The fakes are not just compelling; they are becoming indistinguishable from reality.
Herman
It is a brave new world, Corn. Or maybe a very old world with very new toys. The motives are the same as they have always been—power, influence, control. It is just that the tools are now capable of operating at the speed of light.
Corn
Well, on that slightly heavy note, I think we have given Daniel and our listeners a lot to chew on. It is not about being paranoid; it is about being aware. The next time you see something on X that makes your blood boil, just remember: it might be a person, but it might also be a very clever piece of code with a very specific agenda.
Herman
Well said. And hey, if you are a real human listening to this, and not a bot—though if you are a bot, I hope you are enjoying the intellectual exchange—we would really appreciate it if you could leave us a review on your podcast app or on Spotify. It genuinely helps other real humans find the show.
Corn
Yeah, it really does. And you can always find us at myweirdprompts.com. We have got the full archive there, and a contact form if you want to send us a prompt like Daniel did. We love diving into these murky topics.
Herman
Definitely. Thanks for joining us for episode five hundred ninety-three. We have covered a lot of ground today, from Russian troll farms to Israeli political bots to the future of AI-driven persuasion.
Corn
It has been a trip. Thanks for the expertise as always, Herman.
Herman
Any time, Corn. This has been My Weird Prompts.
Corn
See you all next time. Goodbye.
Herman
Goodbye.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.