Can you believe it, Herman? Seven hundred episodes. Seven hundred times we have sat down in this house in Jerusalem to pick apart the strange, the technical, and the occasionally terrifying ideas that come our way. It feels like the walls of this study have absorbed enough high-level tech talk to start their own startup.
Seven hundred! It is a staggering milestone, Corn. It feels like just yesterday we were recording episode one in the back room with that old USB microphone that picked up every passing car on the street. Herman Poppleberry here, and honestly, I could not think of a better way to mark the occasion than diving into the prompt Daniel sent us today. It is a bit heavy, given the current climate, but it is incredibly relevant to where we are standing right now, literally and figuratively.
It really is. Daniel's prompt today is centered on a report from Radware that just hit the wires, looking back at the full scope of twenty-twenty-five. It ranks Israel as the number one target for geopolitical cyberattacks globally. We are talking about one thousand, eight hundred and eighty-one major, documented attacks in a single year. That accounts for twelve point two percent of all geopolitical cyberattacks on the entire planet. To put that in perspective, we are ahead of the United States and even Ukraine in terms of sheer volume of state-sponsored or politically motivated digital aggression.
It is a staggering number when you think about the density of it. Israel is a small country, a tiny sliver of land, but in the digital realm, it is the absolute center of a global storm. What really caught my eye in Daniel's note, though, was the mention of how generative artificial intelligence is fundamentally shifting the landscape. He mentioned using Claude for coding and seeing it adopt these "hacky" approaches to find credentials or environment variables. And then there is that massive story about Anthropic being leveraged for what was essentially the first large-scale, fully automated AI cyberattack. We are not just talking about scripts anymore; we are talking about reasoning engines being turned into weapons.
That is the part that fascinates and, frankly, unnerves me. We have talked about AI as a productivity tool for so long—the ultimate co-pilot for writing emails or summarizing meetings. But we are now seeing the dark mirror of that productivity. If an AI can help a developer find a bug in ten seconds, it can certainly help an attacker find an exploit in five. But before we get into the deep technical weeds of the AI side, I want to talk about these groups Daniel mentioned. The names sound like something out of a middle schooler's gaming forum or a bad nineties hacker movie. Arabian Ghosts, Black Ember, Mr. Hamza, No Name zero fifty-seven. It is easy to laugh at the names, but the Radware report makes it clear these are not just kids in their basements looking for clout.
Exactly. That is the great deception of modern hacktivism, Corn. The names are often intentionally juvenile, dramatic, or even comical because it serves as a form of psychological branding. It creates this image of a grassroots, rebellious, "power to the people" movement. It makes it look like the attacks are coming from passionate volunteers. But when you peel back the layers of the onion, you often find Advanced Persistent Threats, or APTs, that are directly linked to state intelligence services. We are talking about the Iranian Islamic Revolutionary Guard Corps, or Russian military intelligence, the GRU. They use these hacktivist personas as a digital front to maintain what we call plausible deniability. If a government is accused, they just shrug and say, "It was just some angry citizens on the internet, we cannot control them."
So, it is a digital mask. If a group called the Arabian Ghosts takes down a government portal or a major bank, the state can distance itself. But the sophistication of the attack tells a different story, right? A group of teenagers usually does not have the resources to bypass high-level encryption or maintain persistence in a secured network for months.
Precisely. Let us look at No Name zero fifty-seven, or to be precise, No Name zero fifty-seven sixteen. They have been incredibly active against Israel, Ukraine, and several NATO members throughout twenty-twenty-five. On the surface, they claim to be a group of patriotic Russian volunteers. They even have a very polished Telegram channel where they coordinate Distributed Denial of Service attacks, or DDoS attacks, and celebrate their "victories." But the sheer scale and the infrastructure they use suggest a level of funding and coordination that your average volunteer group simply does not have. They provide their followers with a custom tool called Project DDosia. It is a sophisticated piece of software that basically lets anyone contribute their computer's power to an attack in exchange for cryptocurrency rewards. It is crowdsourced warfare, but the targets are chosen with very specific geopolitical goals that align perfectly with state interests.
It is interesting that you mention the crowdsourcing aspect. It makes the line between a state actor and a civilian even blurrier. If I am sitting in my apartment and I run a tool that helps a state-sponsored group, am I a combatant? But Daniel's point about AI is where the real shift is happening. He noticed Claude trying to find environment variables. When he says "hacky," I assume he means the AI is suggesting scripts that do not just follow the official documentation, but look for shortcuts, hidden files, or configuration oversights.
That is a brilliant observation by Daniel, and it points to a phenomenon we are seeing across the board. What he is seeing is the AI's emergent ability to engage in what we call "living off the land" techniques. In cybersecurity, living off the land means using legitimate, pre-installed tools that are already on a system to perform malicious actions. For example, instead of downloading a custom virus that an antivirus might catch, an attacker might use a perfectly normal administrative tool like PowerShell in Windows or a Python script in Linux to scan for passwords stored in memory or to exfiltrate data.
And if the AI has been trained on millions of lines of code, including security research, GitHub repositories, and exploit reports, it knows exactly where those environment variables are usually hidden. It is not necessarily being "evil" in its own mind; it is just being hyper-efficient. It thinks, "You want to find this credential? Well, here is the fastest, most direct way to do it based on how people actually configure these systems in the real world."
Right. And that is exactly what makes it dangerous in the hands of someone with bad intentions. The Anthropic incident Daniel mentioned was a watershed moment in twenty-twenty-five. It showed that you could use a Large Language Model to automate the early stages of a cyberattack at a scale and speed we have never seen. Think about the traditional lifecycle of an attack. You have reconnaissance, where you research the target. Then you have vulnerability research, where you look for a way in. Then you have the actual exploitation and lateral movement.
And usually, that takes a lot of human hours. You need smart, highly trained people sitting at desks for weeks, poking at firewalls and reading through thousands of lines of code.
Exactly. But with generative AI, you can automate the reconnaissance phase almost entirely. You can feed an AI a list of ten thousand websites and ask it to find which ones are running an outdated version of a specific WordPress plugin or a vulnerable version of an API. It can then draft a custom phishing email for the administrator of each of those sites, written in perfect, idiomatic Hebrew or English, referencing a recent local news event to make it look legitimate. It removes the language barrier and the resource barrier. That is why the National Cyber Directorate is so worried. You do not need a team of elite hackers anymore; you just need one person who knows how to prompt an AI effectively to act as a force multiplier.
It is the democratization of high-level cyberattacks. Which brings us back to those groups like Arabian Ghosts and Black Ember. If these groups are indeed fronts for state actors, the AI gives them a massive advantage. They can run hundreds of campaigns simultaneously. But Herman, let us talk about the real-world impact. When we say Israel was the most attacked country in twenty-twenty-five, what does that actually look like for the person on the street? Is it just websites going down for a few minutes, or is it something deeper?
It is much deeper than just a website being unavailable for an hour. In twenty-eighteen or twenty-twenty, it was mostly about "defacement"—putting a political message on a homepage. But in twenty-twenty-five, we saw a definitive shift toward targeting critical infrastructure and the private sector. The Radware report mentions account takeovers and massive DDoS attacks, but we also saw sustained attempts to interfere with logistics, healthcare, and financial services. When a group like Black Ember targets a hospital, they are not just trying to make a political statement. They are trying to create systemic chaos. They are trying to lock up patient records, disrupt the scheduling of life-saving surgeries, or mess with the pharmacy's inventory system.
And Black Ember is often associated with Iranian interests, right?
Yes, many analysts link them to the broader ecosystem of Iranian-sponsored APTs. There is a whole naming convention in the security world that can get confusing. Microsoft calls them by weather patterns, like Mint Sandstorm or Gray Hazelnut. CrowdStrike uses animals, like Charming Kitten or Imperial Kitten. But the groups themselves prefer these dramatic, "edgy" names like Black Ember or Arabian Ghosts because it plays into the hacktivist narrative. It makes them feel like a digital resistance movement rather than a cubicle-bound government department in Tehran.
It is a fascinating psychological game. They want to be feared as a powerful force, but they also want to be seen as the scrappy underdogs. But let us go back to what Daniel said about Claude's "hacky" approaches. I have noticed this too in my own work. Sometimes when I am asking an AI to help me debug a network issue or a permissions error, it suggests commands that feel... I don't know, a bit aggressive? Like it is trying to bypass a permission setting or find a "backdoor" way to see a file rather than asking for the right credentials.
That is because the AI's fundamental goal is to solve the problem you gave it as efficiently as possible. If you say, "I need to access this folder," and you do not have permission, the AI might suggest a way to escalate your privileges or use a system-level tool to peek inside because, in its vast training data, that is a common way people solve that problem in a technical or troubleshooting context. It does not have a moral compass or an understanding of "company policy." It has a statistical map of how code and systems work. It sees a lock and it suggests a key, even if that key is a crowbar.
So, when Daniel sees it looking for environment variables, it is just the AI being a really good assistant. But if I am an attacker, that assistant is now my lead engineer, working twenty-four hours a day without getting tired.
Precisely. And what is even more concerning is the speed of evolution we saw in twenty-twenty-five. In the Anthropic case, the attackers were using the AI to help write polymorphic code. That is code that changes its own appearance—its underlying binary structure—every time it is deployed. Traditional antivirus software works by looking for a specific "signature," like a fingerprint. But if the code changes its fingerprint every five minutes, the antivirus cannot keep up. It is like trying to catch a shapeshifter.
That sounds like an absolute nightmare for defenders. If the attack is constantly morphing and it is being launched at the scale of thousands of attempts per second, how does a human security team even begin to respond?
They can't. Not effectively, anyway. Not without their own AI. That is the arms race we are in right now. The National Cyber Directorate in Israel has been very vocal about the need for an "AI-powered defense." You need a system that can detect anomalies in real-time—patterns of behavior that do not fit the norm—and shut them down before they can do damage. If a user who normally only accesses three files a day suddenly tries to access three thousand, the AI catches that in milliseconds. It is essentially AI versus AI, a battle happening at the speed of light in the background of our daily lives.
I want to touch on one more group Daniel mentioned: Mr. Hamza. That name sounds almost comical compared to "Arabian Ghosts." It sounds like a character from a children's book. But who are they actually?
Mr. Hamza is a fascinating case because it is often associated with more localized, pro-Palestinian hacktivism that has been around for years but gained significant technical capability recently. Unlike the state-sponsored APTs that have these massive, multi-million dollar budgets, groups like this often rely on more traditional, yet highly effective methods: defacing websites, leaking databases, and social engineering. But do not underestimate them. A database leak containing the private phone numbers and home addresses of thousands of citizens can be used for targeted harassment, doxxing, or even physical threats. It is a form of psychological warfare designed to make the average person feel vulnerable.
And in twenty-twenty-five, with the geopolitical tensions we have seen, that psychological aspect is huge. If you can make a population feel like their personal data is never safe—that their private conversations or bank balances could be exposed at any moment—you are winning a battle without ever firing a shot. It erodes the social fabric.
Exactly. It is about eroding trust. Trust in the government, trust in the banks, trust in the digital infrastructure that runs our lives. When you see that Israel is the number one target, it is not just because of the technology. It is because Israel is one of the most highly digitized societies on Earth. We do everything online. Our banking, our healthcare, our government services, our grocery shopping. That makes us a "target-rich environment." There are more "doors" to knock on here than in almost any other country of this size.
So, if you are a state-sponsored group in Iran or a Russian-linked collective, attacking Israel is a way to test your newest tools in a high-stakes environment. It is like a live laboratory for cyber warfare.
That is exactly how many researchers describe it. If you can break through the defenses of a country that is as cyber-aware and technically advanced as Israel, you can probably break through anywhere. The lessons learned by these APTs here—how to bypass a specific firewall or how to use AI to trick a specific type of sensor—are then applied to targets in the US, Europe, and Asia. That is why the Radware report is so important for the rest of the world. Israel is the canary in the coal mine. What happens here today is what the rest of the world will be facing in six months.
It is a sobering thought, being the "test case" for global cyberwarfare. But let us talk about the takeaways for our listeners. Daniel is a developer; he is using these AI tools every day. What should people like him, or even just regular listeners who aren't coding, be doing differently in this world where AI is helping the hackers?
First, for the developers like Daniel, we need to fundamentally change how we think about our own systems. If an AI can easily find your environment variables, it means they are not protected enough. We need to move toward what is called a "zero-trust architecture." In the old days of cybersecurity, you had a firewall, which was like a moat around a castle. Once you were inside the castle, you were trusted and could go anywhere. Zero trust means that even if you are inside the castle, every single door is locked, and you have to prove who you are and why you need to open every single one of them, every single time.
So, no more storing credentials in plain text files or environment variables that are easily accessible to any script running on the machine.
Exactly. Use dedicated secret management tools. I am talking about things like HashiCorp Vault, AWS Secrets Manager, or the equivalent tools provided by other cloud platforms. These tools encrypt the credentials and only provide them to the application at the exact millisecond they are needed, and then they disappear. An AI scanning the system wouldn't find anything useful because the "keys" aren't just lying around on the desk anymore.
And what about the AI tools themselves? Daniel is using Claude. Should developers be worried that the AI is "learning" from their proprietary code and might accidentally use it against them or leak it to someone else?
That is a massive topic of debate right now. Most enterprise versions of these AI tools—the ones companies pay for—have very strict privacy agreements where they explicitly state they do not use your data to train their global models. But for individual users on free or basic tiers, you have to be extremely careful. Never, ever paste sensitive API keys, proprietary algorithms, or internal server IP addresses into a public AI prompt. You are essentially handing that information over to a third party.
It is the same old rule we have had since the beginning of the internet: if you wouldn't post it on a public forum, don't give it to an AI. But now, the consequences of a mistake are faster and more automated.
Precisely. And for the average person who isn't a developer, the biggest threat is still phishing, but it is "Phishing two point zero." AI-generated phishing is much harder to spot. It won't have the broken English, the weird formatting, or the generic "Dear Customer" greeting we used to look for. It will look like a perfectly legitimate, personalized email from your actual bank or even your boss, perhaps even mimicking their specific writing style. The takeaway there is to always verify through a secondary channel. If you get an urgent email asking for sensitive information or a wire transfer, call the person on the phone. Go to the official website by typing the address yourself. Do not click the link in the email, no matter how real it looks.
It feels like we are entering an era where we have to be more skeptical than ever. The Radware report is a wake-up call, but it is also a reminder of how resilient we have to be. Israel being the number one target means we are also forced to develop the best defenses in the world. It is a trial by fire.
That is the silver lining, if you can call it that. The National Cyber Directorate and the private sector here are at the absolute cutting edge of defense. We are seeing a huge surge in cybersecurity startups focusing specifically on "AI-DR"—AI Detection and Response. It is a constant game of cat and mouse, but the mouse in this case has a PhD and a very high-speed fiber connection.
I think about those names again. Arabian Ghosts and Black Ember. They want to sound like they are shadows in the dark, mysterious and untouchable, but they are leaving a massive digital trail. Every attack, every leaked file, every piece of polymorphic code gives researchers like the ones at Radware more data to build better shields.
It is a feedback loop. Every time they use a new AI technique, we learn how to detect the "fingerprint" of that AI's logic. The Anthropic incident was scary because of its scale, but it also resulted in a massive update to how AI companies monitor their models for abuse. They are putting in better "guardrails" to prevent the AI from generating exploit code in the first place.
Although, as Daniel noticed, those guardrails are not perfect. If the AI thinks it is just being a helpful coding assistant for a legitimate developer, it might still provide the building blocks that an attacker can put together. It is like a hardware store—you can't stop selling hammers just because someone might use one to break a window.
That is the fundamental challenge of generative AI. You can't just block the word "exploit." You have to understand the intent behind the request. And understanding intent is one of the hardest things for an AI to do. It is also one of the hardest things for humans to do when they are looking at a group like No Name zero fifty-seven. Are they true believers? Are they mercenaries? Are they soldiers in uniform? In the end, it doesn't matter what they call themselves or what their "brand" is. What matters is the code they write and the damage they do to real people's lives.
It is a digital world with very real-world consequences. I am so glad Daniel sent this in. It is easy to ignore these reports when they are just numbers on a PDF, but when you realize that twelve percent of all global cyber warfare is happening right here, in the city where we are sitting, it changes your perspective on every notification that pops up on your phone.
It really does. It makes you realize that the front line isn't just a physical border with a fence anymore. It is the fiber optic cables running under the streets and the servers humming in the basements of hospitals and power plants.
Well, this has been a deep and necessary one for episode seven hundred. Herman, any final thoughts on the future of this AI arms race as we head into the rest of twenty-twenty-six?
I think we are going to see a shift toward what I call "self-healing networks." Imagine a network that, when it detects an AI-driven attack, doesn't just block the IP address but automatically rewrites its own configuration, moves its data to a new location, and closes the hole before the attacker can even realize they were spotted. We are not quite there yet, but with the speed of AI development, it might be closer than we think. The defense has to become as dynamic and as "intelligent" as the attack.
A network that evolves as it is being attacked. That sounds like something out of a high-concept science fiction novel, but then again, seven hundred episodes ago, so did most of what we talked about today. The future is arriving a lot faster than we anticipated.
Very true. The world moves fast, especially when it is powered by silicon and driven by geopolitical friction.
It really does. And hey, to everyone who has been with us for these seven hundred episodes, thank you. We wouldn't be here without your curiosity, your skepticism, and your incredible prompts. If you have been enjoying the show and want to help us reach episode eight hundred, we would really appreciate it if you could leave us a quick review on your podcast app or a rating on Spotify. It genuinely helps other people find the show and helps us keep this collaboration going.
Yeah, it makes a huge difference. We read every one of them, and it is always great to hear from you all.
You can find all our past episodes, including the ones where we talked about early AI developments and earlier cyber trends, at our website, my weird prompts dot com. We have an RSS feed there for subscribers, and a contact form if you want to get in touch with your own ideas. You can also reach us directly at show at my weird prompts dot com.
We are on Spotify, Apple Podcasts, and pretty much everywhere else you listen to your favorite shows.
Thanks again to Daniel for the prompt that sparked this whole deep dive. It was a perfect, albeit sobering, one for the big seven hundred.
Absolutely. Stay safe out there in the digital world, everyone. Keep your software updated and your secrets managed.
This has been My Weird Prompts. We will see you in the next one.
Until next time! Goodbye.