We have spent a lot of time over the last few months looking at the defensive side of the equation. We have talked about how Israel uses artificial intelligence to manage the sheer volume of incoming threats and how satellite-based anomaly detection has fundamentally changed the game for border security. But today, Daniel’s prompt is asking us to flip the perspective. He wants us to examine the other side of the hill and analyze how the Islamic Revolutionary Guard Corps might be leveraging these same technologies, not just for building better missiles, but for global strategic orchestration.
It is a pivot we really need to make, Corn, because if we only look at the kinetic side—the things that go bang—we are missing the most dangerous part of the picture. I am Herman Poppleberry, and the concept of the Islamic Revolutionary Guard Corps as an algorithmic adversary is much more representative of the threat we are seeing in mid-two thousand twenty-six than the old image of them as a regional spoiler. When we look at their operations today, we see a level of coordination that suggests they have moved beyond manual command and control and into something much more automated and much more difficult to intercept.
It is easy to fall into the trap of thinking that because a regime is ideologically fundamentalist, they must be technologically primitive. But we know that is not the case. Daniel pointed out that the Islamic Revolutionary Guard Corps is a sophisticated adversary, and we have to assume they are using every tool available to maximize their leverage, especially when they are outmatched in terms of conventional hardware. Think of it as a "Black Box" of Iranian artificial intelligence. We see the outputs—the unrest, the coordinated strikes, the shipping disruptions—but the logic inside that box has been largely opaque until very recently.
That is the core of their doctrine. They have always been masters of asymmetric warfare, and artificial intelligence is the ultimate asymmetric force multiplier. Most people think of artificial intelligence in a military context as killer robots or autonomous drones, but the real power for a group like the Islamic Revolutionary Guard Corps lies in strategic orchestration. It involves using data to decide where, when, and how to apply pressure to get the maximum psychological and political result for the minimum physical cost. They are not trying to win a head-to-head fight with a carrier strike group; they are trying to make the cost of opposition prohibitively high for the West.
So we are talking about moving from what used to look like chaotic, sporadic barrages to something much more calculated. If you look at the data from the conflicts in late two thousand twenty-five and early this year, there was a shift. It felt less like they were trying to overwhelm the defense and more like they were probing it for information. This leads us directly into the first major mechanism we need to discuss: Information Attrition.
You're right. We saw those intelligence briefings in January of two thousand twenty-six about the global unrest campaigns. At first, it looked like spontaneous protest, but the forensic data told a different story: a coordinated, global influence operation driven by specialized Large Language Models. In the past, if you wanted to run a global influence campaign, you needed thousands of people in a basement somewhere manually typing out posts. Now, you can use models to generate localized, culturally tuned narratives in a hundred different languages simultaneously.
And these are not just simple bots anymore. We are looking at autonomous "persona" agents.
That's right. These agents have backstories, long-term posting histories, and they engage in genuine-looking conversations. They are trained on the specific political grievances of different populations. So, the message being pushed in London regarding energy prices is fundamentally different from the one being pushed in New York regarding social justice, but they are all orchestrated to serve the same strategic goal: domestic instability in countries that oppose Iranian interests. The aim is to exhaust the public's cognitive capacity. If you can keep a population in a state of constant, low-level outrage, their government has much less freedom of maneuver on the international stage.
I want to get into the technical side of how they target these messages. You mentioned Graph Neural Networks earlier when we were prepping. How does that fit into the proxy network management?
This is where it gets sophisticated. The Islamic Revolutionary Guard Corps is using Graph Neural Networks to map social network vulnerabilities. Think of a social network as a giant web of nodes. A Graph Neural Network can analyze that web to identify the "super-spreaders"—not just people with a lot of followers, but people who act as "bridges" between different communities. If you can radicalize or even just influence one of those bridge nodes, the narrative spreads much more effectively. They are essentially performing a vulnerability scan on the social fabric of their adversaries.
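The "bridge node" idea above can be illustrated without any neural network at all: classical betweenness centrality captures the same structural notion of an account that sits on the paths between otherwise separate communities. Here is a toy sketch in pure Python using Brandes's algorithm for unweighted graphs; the accounts and edges are entirely invented, and a real Graph Neural Network approach would learn such structure from features rather than compute it directly.

```python
from collections import defaultdict, deque

def betweenness(adj):
    """Brandes's algorithm (unweighted): score each node by how often
    it sits on shortest paths between other nodes."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        sigma = dict.fromkeys(adj, 0.0)   # shortest-path counts from s
        sigma[s] = 1.0
        dist = dict.fromkeys(adj, -1)
        dist[s] = 0
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:                       # BFS from s
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = dict.fromkeys(adj, 0.0)    # dependency accumulation
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Invented toy network: two tight clusters joined by one account, "x".
edges = [("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # community A
         ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),   # community B
         ("a3", "x"), ("x", "b1")]                    # "x" bridges them
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
adj = dict(adj)

bc = betweenness(adj)
print(max(bc, key=bc.get))  # → x
```

Every message that travels between the two clusters has to pass through "x", so it scores highest even though it has fewer connections than the cluster members — exactly the kind of account the discussion says is worth influencing.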
It is like they are treating a foreign population the same way a hacker treats a computer network. They are looking for the open ports and the unpatched vulnerabilities in our social cohesion.
That's the strategy. And this same logic applies to their physical proxy networks. Managing a group like Hezbollah or the Houthis is a massive logistical challenge. You have to move weapons, money, and personnel through some of the most heavily surveilled territory on earth. We are seeing evidence that they are using predictive logistics models to optimize these routes. They ingest massive amounts of open-source shipping data, satellite imagery, and even social media feeds to identify windows of opportunity.
You called it "Waze" for smuggling in the original script, but it is actually much deeper than that.
It is. They use anomaly detection in reverse. Instead of looking for the one weird thing that indicates a threat, they are looking for the background noise they can hide within. If they know that a specific port, like Latakia or Aden, is overwhelmed with a certain type of cargo at a certain time of month, they use an algorithm to schedule their shipments to blend in perfectly with that noise. They can even simulate how Western reconnaissance satellites will view a specific route at a specific time, allowing them to time their movements to the second to avoid detection. It reduces the human overhead of managing these operations and makes the proxy network much more resilient. You do not need a central command node if the strategy is being distributed through these algorithmic tools.
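A minimal sketch of that "hide in the noise" scheduling logic, with invented port-traffic numbers: score each time window by how anomalous one extra arrival would look against that window's own historical variation, then pick the window where the addition blends in best. Real systems would model far richer features; the point here is only the inverted use of anomaly scoring.

```python
import statistics

def least_anomalous_slot(history):
    """history: {slot: [arrival counts over past days]}.
    Return the slot where one extra arrival yields the smallest
    z-score against that slot's own baseline, i.e. where it
    disappears into normal variation."""
    best, best_z = None, float("inf")
    for slot, counts in history.items():
        mu = statistics.mean(counts)
        sigma = statistics.pstdev(counts) or 1e-9  # guard constant slots
        z = abs((mu + 1) - mu) / sigma             # one extra unit vs. baseline
        if z < best_z:
            best, best_z = slot, z
    return best

# Invented baseline: cargo arrivals per six-hour window at a port.
history = {
    "00-06": [2, 1, 3, 2, 2],      # quiet, low variance
    "06-12": [14, 9, 17, 11, 15],  # busy AND noisy: good cover
    "12-18": [8, 8, 8, 8, 8],      # busy but perfectly regular
    "18-24": [4, 5, 4, 5, 4],
}
print(least_anomalous_slot(history))  # → 06-12
```

Note that the perfectly regular window loses despite its volume: one extra arrival there is a glaring deviation, while the noisy morning window absorbs it — which is the "background noise" Herman describes.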
This brings us to a point we touched on back in episode nine hundred twenty-nine when we looked at Iranian targeting patterns. They were not just firing missiles; they were conducting what you called a "diagnostic experiment." Can you explain how that has evolved with the new data from early two thousand twenty-six?
This is one of the most chilling developments. In the late two thousand twenty-five strikes, we noticed the Islamic Revolutionary Guard Corps was varying the flight paths and the intervals between launches in ways that did not make tactical sense if the goal was just to hit a target. But when you analyze it through the lens of machine learning, it makes sense. They were essentially "pinging" the Iron Dome and the Arrow systems. By analyzing how the defensive software responded—which targets it prioritized, how long it took to re-engage, what the sensor fusion logic looked like—they were mapping the logic of the defensive code.
So every intercepted missile was actually a successful data harvest.
That's the terrifying part. They are looking for the blind spots in the code, not just the blind spots in radar coverage. If they can figure out the heuristic the defense uses to prioritize threats, they can design a strike package that exploits that heuristic. For example, if the system is programmed to always prioritize the fastest-moving object, they can launch a fast, cheap decoy to draw fire away from a slower, more dangerous cruise missile. This is why we said in episode eleven hundred ninety-three that even failing missiles can be a strategic win. The physical failure is irrelevant if the diagnostic data helps the next model iteration be ten percent more effective.
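To make the decoy logic concrete, here is a deliberately simplified sketch. The prioritization rule, speeds, and threat values are all invented for illustration — real battle-management software is vastly more complex and classified — but the underlying point stands: any fixed, learnable heuristic can be gamed.

```python
def engage(tracks, interceptors):
    """Toy defense heuristic: with a limited number of interceptors,
    always engage the fastest inbound tracks first."""
    fastest_first = sorted(tracks, key=lambda t: t["speed"], reverse=True)
    return {t["id"] for t in fastest_first[:interceptors]}

# Hypothetical strike package built to exploit that exact heuristic:
# a fast, cheap decoy outranks the slower but far more dangerous
# cruise missile, so the single available interceptor is wasted on it.
tracks = [
    {"id": "decoy",  "speed": 2100, "threat": 0.1},
    {"id": "cruise", "speed": 240,  "threat": 0.9},
]
engaged = engage(tracks, interceptors=1)
leakers = [t["id"] for t in tracks if t["id"] not in engaged]
print(engaged, leakers)  # → {'decoy'} ['cruise']
```

The attacker never needed to defeat the interceptor physically, only to learn which feature the defense sorts on — which is precisely what the "diagnostic" launches were harvesting.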
It turns the entire theater of war into a continuous loop of learning and adaptation. But there is a major misconception we need to address here. People often say, "Well, Iran does not have the compute power of an OpenAI or a Google, so how can they be doing this?"
That is a fundamental misunderstanding of the current artificial intelligence landscape. You do not need a trillion-parameter model to do what the Islamic Revolutionary Guard Corps is doing. You do not need Artificial General Intelligence. You need specialized, narrow-task models. You can take an open-source model like Llama three and fine-tune it on a relatively small cluster of G-P-Us to be incredibly effective at generating propaganda or optimizing logistics. They are not trying to build a digital god; they are trying to build a digital insurgent. They are resource-constrained compared to the United States, but they are incredibly efficient at using what they have. They are using their constraints as a catalyst for innovation.
It is the difference between a broadsword and a scalpel. They are not trying to solve all of human knowledge; they are just trying to solve the problem of how to destabilize a specific region or bypass a specific sanction. And because they are using open-source foundations, they are essentially standing on the shoulders of the very Western tech companies they are trying to undermine.
And they are applying this "scalpel" approach to cyber operations as well. We are seeing them use artificial intelligence to automate the discovery of vulnerabilities in critical infrastructure. They are running automated penetration testing tools that can probe thousands of targets a second. They are looking for the one unpatched server in a water treatment plant or a power grid that gives them a foothold. It is a level of automated aggression that makes traditional human-led cyber defense almost impossible.
This leads us to the concept of "algorithmic escalation." If both sides are using these systems, what happens when they start interacting?
That is the nightmare scenario. We are moving into a world where the speed of conflict is outstripping human decision-making. If an Iranian logistics algorithm detects a change in Western patrol patterns and automatically shifts its assets, and then our defense algorithm sees that shift and automatically reallocates its resources, you could end up in a spiral of escalation before a human commander even realizes what is happening. The algorithms are optimizing for their programmed goals, but they might not account for the broader geopolitical consequences.
It is the "paperclip maximizer" problem, but with regional stability instead of paperclips. If you tell an algorithm to maximize unrest or maximize defensive posture, it will find the most efficient way to do that, regardless of whether that path leads to a full-scale war that neither side actually wants.
And that brings us back to the human element. Can an artificial intelligence-driven strategy ever truly account for human irrationality? The Islamic Revolutionary Guard Corps might have the best models in the world, but they are still dealing with human actors on the ground—proxies who might have their own agendas, or ordinary people who do not always act in their own best interest. The messiness of human nature is the one thing an algorithm has the hardest time predicting. It can model the behavior of a crowd, but it struggles with the choices of an individual who decides to do something unexpected.
So, for the people listening who are wondering what the practical takeaways are, how do we actually defend against this? You mentioned "data hygiene" earlier.
That is the first and most important step. We have to move from kinetic defense to data hygiene. If the Islamic Revolutionary Guard Corps is scraping open-source intelligence—everything from LinkedIn profiles of defense contractors to shipping manifests—to train their models, we have to be much more careful about what information we are putting into the world. We also have to start thinking about how to "poison" their data. If they are using algorithms to find patterns, we should be creating false patterns—digital camouflage—to lead them to the wrong conclusions.
Like providing so much conflicting data that the algorithm cannot find the signal.
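A toy illustration of that poisoning idea, using invented numbers: suppose a scraper hunts for a linear relationship between two observable quantities, say announced patrol hours and observed shipping delays. The real records show a perfect correlation; publishing an equal volume of decoy records with the opposite slope drives the measured correlation to zero.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented "real" records: a clean linear signal the scraper could find.
real_x = [1, 2, 3, 4, 5, 6]
real_y = [2, 4, 6, 8, 10, 12]
print(round(pearson(real_x, real_y), 2))  # → 1.0

# "Digital camouflage": decoy records with the opposite slope.
decoy_x = [1, 2, 3, 4, 5, 6]
decoy_y = [12, 10, 8, 6, 4, 2]
poisoned_x = real_x + decoy_x
poisoned_y = real_y + decoy_y
print(round(pearson(poisoned_x, poisoned_y), 2))  # → 0.0
```

The signal is still in the data, but the aggregate statistic the adversary's model depends on has been neutralized — conflicting data drowning the pattern, as described above.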
Right. We also need to distinguish between "artificial intelligence-assisted" and "artificial intelligence-driven" operations. Assisted means a human is still making the strategic calls, but using tools to help with the execution. Driven means the algorithm itself is identifying the opportunities and suggesting the actions. We are seeing the Islamic Revolutionary Guard Corps move more toward the "driven" side, especially in their proxy management. They are giving their commanders on the ground tools that tell them exactly when and where to move based on real-time data analysis.
This is why it is so important to monitor for the fingerprints of synthetic media. Even as Large Language Models get better, there are still patterns in how they generate content and how that content is distributed across platforms. By monitoring for these correlations, we can identify influence operations in their early stages before they gain enough momentum to cause real-world harm. We need to develop a better sense of digital literacy that accounts for the fact that the narratives we see online might be generated by models designed specifically to manipulate our emotions.
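One crude but concrete example of such a distribution fingerprint, on an invented feed: accounts that repeatedly post within the same short time window are candidates for central triggering. Real detection pipelines combine many weak signals like this; the sketch below uses only timestamp co-occurrence.

```python
from collections import defaultdict
from itertools import combinations

def coordination_pairs(posts, bucket_seconds=60, min_shared=3):
    """posts: list of (account, unix_timestamp) tuples.
    Flag account pairs that repeatedly post inside the same short
    time bucket -- one crude fingerprint of scripted, centrally
    triggered accounts."""
    buckets = defaultdict(set)
    for account, ts in posts:
        buckets[ts // bucket_seconds].add(account)
    shared = defaultdict(int)
    for accounts in buckets.values():
        for pair in combinations(sorted(accounts), 2):
            shared[pair] += 1
    return {pair for pair, n in shared.items() if n >= min_shared}

# Invented feed: bot_a and bot_b fire within seconds of each other at
# every trigger; the human account posts on its own schedule.
posts = [
    ("bot_a", 1000), ("bot_b", 1004),
    ("bot_a", 5000), ("bot_b", 5002),
    ("bot_a", 9000), ("bot_b", 9009),
    ("human", 3100), ("human", 7777),
]
print(coordination_pairs(posts))  # → {('bot_a', 'bot_b')}
```

A sophisticated operator can jitter timestamps to defeat this exact check, which is why the arms-race framing in the discussion is apt: each detector becomes training signal for the next generation of personas.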
It is a constant arms race. As soon as we find a way to detect their models, they will use our detection methods to train theirs to be even stealthier. But we have the advantage of being able to see the whole board. The Islamic Revolutionary Guard Corps is operating from a position of strategic depth, but they are also vulnerable to the same tools they are using. Their own internal logistics and command structures are just as susceptible to algorithmic analysis as ours are. If we can map their proxy networks with the same precision they use to map our social vulnerabilities, we can start to dismantle their influence piece by piece.
It feels like the gap between what the public perceives as Iranian capability and the reality is widening. People still see them as a regional power with some old missiles, but the digital reality is that they are operating as a global, tech-enabled insurgency. Daniel, thanks for the prompt. It really forced us to look at the darker side of these technologies we spend so much time talking about. We need to keep our eyes on how this evolves, especially as we head into the second half of two thousand twenty-six.
It is going to be a compelling, and likely very turbulent, few months. There is so much more we could dive into here, particularly regarding the specific types of hardware they are using to run these models—like the illicit G-P-U procurement networks we have been tracking—but we will save that for another time.
I agree. If you want to dig deeper into the foundations of this, I highly recommend checking out episode nine hundred twenty-nine, where we went into the technical weeds of the targeting data, and episode eleven hundred ninety-seven, which contextualizes the global campaign of unrest we saw earlier this year.
Thanks as always to our producer, Hilbert Flumingtop, for keeping the show running smoothly behind the scenes.
And a big thanks to Modal for providing the G-P-U credits that power this show and allow us to do this kind of deep-dive analysis. This has been My Weird Prompts.
If you are finding these discussions valuable, a quick review on your podcast app really helps us reach more people who are interested in the intersection of technology and global strategy.
We will be back soon with another prompt. Stay curious.
Goodbye.