Hey everyone, welcome back to My Weird Prompts. I am Corn, and I am sitting here in our living room in Jerusalem with my brother. It is a bit of a rainy afternoon outside—typical for late January—but we have a very heavy topic to get into today. It is January thirty-first, twenty-twenty-six, and the data from last year is finally starting to paint a very clear, and somewhat sobering, picture.
Herman Poppleberry at your service. Yeah, it is a bit gray out there, which I suppose fits the mood of today’s prompt. Our housemate Daniel sent this one over to us this morning, and it is something that I think has been the quiet, or maybe not so quiet, anxiety underlying every conversation we have had about artificial intelligence over the last few years.
Exactly. We spend so much time talking about the cool things AI can do, the creative potential, the engineering breakthroughs, but Daniel is pushing us to look at the pragmatic, fundamental cost. Specifically, job loss. We are seeing it happen in real-time, especially in Tier One customer support, and he is asking what the game plan is. Are we just going to let this happen? Does the industry have an obligation to fix what it is breaking?
It is the big question. We are in early twenty-twenty-six now, and the landscape has shifted so much just in the last twelve to eighteen months. It is no longer a theoretical "what if" scenario. It is a "what now" scenario. According to the latest reports, nearly forty percent of companies adopting AI are choosing to automate roles entirely rather than just augment human work. We are seeing roughly five hundred people a day losing their jobs specifically to AI integration in the tech sector alone.
Right. So let’s start with that customer support angle because that is where the blood is already on the floor, so to speak. Daniel mentioned how those old rule-based chatbots were terrible, and everyone hated them, but the new ones are different. Herman, from a technical perspective, what changed that made these things suddenly viable enough to actually replace thousands of human workers?
It is all about moving from rigid decision trees to semantic understanding, and to what we now call the C-U-A architecture, the Computer-Using Agent, which runs a loop of perception, reasoning, and action. In the old days, a chatbot was basically a complicated version of "press one for sales." If you did not use the exact keyword the programmer expected, the bot broke. But with the large language models of twenty-twenty-five and twenty-twenty-six, the bot actually understands intent. It can handle nuance, it can stay on track through a long conversation, and most importantly, it can access a company's entire knowledge base in seconds.
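To make that contrast concrete, here is a minimal sketch in Python. The rule-based bot is a literal keyword lookup; the modern bot hands the whole utterance to a language model and asks for the intent. The `llm_complete` function is a hard-coded stand-in, not any particular vendor's API, and the intent labels are made up for illustration.

```python
# Old school: a rule-based bot. If the customer's words do not match
# a known keyword, the conversation dead-ends.
RULES = {
    "refund": "I can help with refunds. What is your order number?",
    "cancel": "To cancel, please confirm your account email.",
}

def legacy_bot(message: str) -> str:
    for keyword, reply in RULES.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I did not understand that. Please call us."

# New school: a language model classifies intent from meaning, not keywords.
def llm_complete(prompt: str) -> str:
    # Placeholder: a real system would call a hosted model here.
    # Hard-coded so the sketch runs end to end.
    return "refund"

def modern_bot(message: str) -> str:
    intent = llm_complete(
        "Classify the customer's intent as one of: refund, cancel, "
        "billing_question, other.\n\nCustomer: " + message
    )
    return f"Routing to the {intent.strip()} workflow."

# "I paid twice and I want my money back" never contains the word
# "refund", so the keyword bot dead-ends while the model maps the
# meaning to the right intent.
print(legacy_bot("I paid twice and I want my money back"))
print(modern_bot("I paid twice and I want my money back"))
```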
And that is the difference, right? A human agent has to search a wiki or ask a supervisor. The AI just has the entire manual in its active memory.
Precisely. We saw this really start to bite with the Klarna case study. By mid-twenty-twenty-five, they updated their figures to say their AI assistant was doing the work of eight hundred full-time agents. But here is the interesting twist we saw late last year: some companies, including Klarna, actually had to start rehiring or redeploying humans because the "all-in" AI approach led to a drop in service quality for complex cases. It turns out that while AI can handle the eighty percent of easy stuff, that final twenty percent of human messiness still needs a person.
But that leads to Daniel's point about the "unintended consequences." If you are a twenty-two-year-old looking for an entry-level job, or if you live in a region where outsourced customer service is a massive part of the local economy, that "efficiency" is a catastrophe. What industries beyond customer support are currently in the crosshairs as we sit here in early twenty-twenty-six?
Well, the "Tier One" phenomenon is spreading. It is not just support anymore. It is Tier One anything. Think about junior coding roles. In twenty-twenty-four, the U-K digital sector saw a massive forty-four percent drop in the number of sixteen to twenty-four-year-olds in computer programming roles. We are also seeing a massive displacement in the legal industry. About forty-four percent of general counsel across twelve countries are now using AI for Tier One tasks like contract review and discovery. The "middle" is being hollowed out.
I think that "hollowing out the middle" is the scariest part. Because if you take away the entry-level jobs, how does anyone ever become an expert? If the AI is doing all the junior-level coding or support, where do the senior engineers and managers of twenty-thirty-five come from? They won't have had those formative years of doing the "grunt work" where you actually learn how things break.
That is a brilliant point, Corn. We are essentially burning the bottom rungs of the career ladder and then wondering why nobody is reaching the top. It creates this experience gap. And as we move further into twenty-twenty-six, we are seeing the rise of what Daniel mentioned: multimodal and agentic AI. This is where things get really spicy. We now have tools like OpenAI's "Operator," which has been out for a year now and is fully integrated into most workflows.
Explain that a bit. We have used those terms before, but for anyone who is just tuning in, what does it mean for a job to be "at risk" from an agentic AI versus just a chatbot?
So, a chatbot talks to you. An agent acts for you. Up until recently, an AI could tell you how to change your flight, but it couldn't necessarily log into the airline's internal legacy database, find a seat, process the payment, and send you the new ticket while also updating your calendar. Agentic AI, like Operator, can do that. It has "agency." It can use tools. It can navigate a computer screen just like a human does.
So it is moving from "giving advice" to "performing the task."
Exactly. And multimodal means it isn't just processing text. It can see your screen, it can hear the tone of your voice, it can look at a spreadsheet and understand the visual layout. When you combine those two, you start looking at back-office administrative roles. Paralegals, insurance adjusters, medical billing specialists. These are jobs that require a lot of "looking at things and moving data from point A to point B." Those are incredibly vulnerable right now.
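Here is a minimal sketch of that perceive-reason-act loop. Everything in it is hypothetical: the tool names, the flight scenario, and the fixed plan standing in for what would really be a model's reasoning step at each turn.

```python
from typing import Callable

# Toy tools the agent can call. The flight-change scenario, tool names,
# and return values are illustrative assumptions, not a real vendor API.
TOOLS: dict[str, Callable[[str], str]] = {
    "search_seats": lambda flight: f"found seat 14C on {flight}",
    "charge_card": lambda amount: f"charged {amount} to card on file",
    "send_ticket": lambda email: f"new ticket emailed to {email}",
}

def run_agent(goal: str) -> None:
    # In a real agent, a model would perceive the current state and
    # reason about which tool to call next. Here the plan is fixed
    # so the sketch stays runnable and deterministic.
    plan = [
        ("search_seats", "flight 315"),
        ("charge_card", "the fare difference"),
        ("send_ticket", "the customer's address on file"),
    ]
    context = goal
    for tool_name, argument in plan:
        result = TOOLS[tool_name](argument)  # act: perform the task itself
        context += " | " + result            # perceive: fold result back in
        print(f"{tool_name}: {result}")

run_agent("change my flight to Thursday")
```

The difference from the chatbot sketch earlier is that last step in the loop: the output of each tool call is fed back into the agent's context, so it is executing a task, not just describing one.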
It feels like we are talking about a significant percentage of the global workforce. So let’s tackle Daniel's question about obligation. Does the AI industry have a responsibility here? Or is this just "creative destruction" in the capitalist sense, like the steam engine or the printing press?
I think the scale and the speed make this different. The industrial revolution took decades to play out. People had time to adapt, even if it was painful. This is happening in months. I personally believe there is a massive societal obligation, but the question is where it sits. Is it the companies like OpenAI and Google? Or is it the governments that collect taxes from these companies?
Well, if a company replaces ten thousand workers with a server farm, they are saving a fortune in payroll taxes, healthcare, and salaries. That money doesn't just disappear; it becomes profit. There has been a lot of talk about an "automation tax," right? The idea that if a robot or an AI takes a job, the company still has to pay a portion of the payroll tax it "saved" into a fund for worker retraining.
It is a popular idea, but it is incredibly hard to implement. We saw the debate around the O-B-B-B-A—the One Big Beautiful Bill Act—that passed last year. It touched on tax reform, but it really struggled to define what a "job lost to AI" actually looks like. If a company just doesn't hire for a vacancy because they have more efficient tools, is that a lost job? It is very slippery. But I think we are reaching a point where some kind of Universal Basic Services or a more robust social safety net isn't just a progressive dream, it is a functional necessity for social stability.
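For a sense of the numbers involved, here is a back-of-the-envelope sketch of the automation-levy idea with entirely illustrative figures. The headcount, salary, and levy share are assumptions; only the payroll tax rate resembles a real one (the U.S. employer FICA share).

```python
# Back-of-the-envelope sketch of an "automation tax": if automating a
# role saves the employer payroll tax, a levy claws back some share of
# that saving for a retraining fund. All inputs are made-up assumptions.
ROLES_AUTOMATED = 10_000
AVG_SALARY = 45_000          # illustrative assumption
PAYROLL_TAX_RATE = 0.0765    # roughly the U.S. employer FICA share
LEVY_SHARE = 0.50            # policy choice: recapture half the saved tax

payroll_tax_saved = ROLES_AUTOMATED * AVG_SALARY * PAYROLL_TAX_RATE
levy = payroll_tax_saved * LEVY_SHARE

print(f"Payroll tax no longer collected: ${payroll_tax_saved:,.0f}/year")
print(f"Automation levy into retraining fund: ${levy:,.0f}/year")
# The hard part is not this arithmetic; it is defining ROLES_AUTOMATED
# when a company simply never fills a vacancy in the first place.
```

As Herman says, the arithmetic is the easy part; deciding which roles count as "automated" is where the policy gets slippery.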
And that brings us to the "game plan" Daniel asked about. If we assume the jobs are going away and they aren't coming back in the same form, what do people actually do? Daniel mentioned the idea of a workforce built around "managers of AI systems." Is that realistic? Can everyone just be a "prompt engineer" or an "AI orchestrator"?
I am skeptical that it's a one-for-one swap. The World Economic Forum's twenty-twenty-five report predicted that while AI might displace ninety-two million jobs by twenty-thirty, it could create one hundred and seventy million new ones. That sounds great on paper—a net gain of seventy-eight million. But the new roles are things like "Big Data Specialist" and "AI Orchestrator." Managing AI is a much higher-skill task than the jobs being replaced. It requires a deep understanding of the system's limitations, the ability to spot hallucinations, and the strategic vision to know what the AI should be doing in the first place.
Right, so it doesn't solve the "entry-level" problem. It actually makes the barrier to entry higher. You have to be an expert just to start.
Exactly. I think the "win-win" scenario that some optimists point to is that AI will lower the cost of everything. If healthcare, legal advice, and education become nearly free because they are powered by AI, then maybe we don't need to work forty hours a week just to survive. But that transition period is going to be incredibly rocky. We are talking about millions of people whose identity and livelihood are tied to these roles.
I want to go back to the human element for a second. Daniel mentioned how it feels "disrespectful" when a company you pay a lot of money to won't let you talk to a human. There is a psychological cost to this automation too. We are being funneled into these frictionless, soulless interactions. Do you think there will be a "human-made" or "human-serviced" premium in the future? Like how people pay more for organic food or handmade furniture?
Oh, absolutely. We are already seeing the "Human-in-the-Loop" certification starting to pop up in some industries. The idea that "you will always be able to reach a person within sixty seconds" could become a luxury feature for high-end brands. But for the average consumer, for the person just trying to get their internet fixed or their bank statement clarified, they are going to be stuck with the bots.
So, what's the advice for someone listening who is in one of these "at-risk" industries? If you are a junior developer or you work in a call center, what is the twenty-twenty-six game plan?
I think the most important thing is to move "up the stack." You want to focus on the things AI is still bad at. A recent M-I-T report found that roughly ninety-five percent of generative AI pilots fail to produce a measurable financial return, largely because the tools don't integrate well into messy human workflows and complex problem-solving. AI is great at logic, data, and synthesis. It is still relatively poor at high-stakes empathy, complex physical manipulation, and truly novel cross-disciplinary strategy. If your job is "follow these ten steps to solve a problem," you are in trouble. If your job is "navigate this highly emotional, politically sensitive, unique human situation," you have more runway.
It is about the "uniquely human" traits. But even then, we see these models getting better at "simulating" empathy. They can be programmed to be patient, to use soft language, to never get frustrated. In some ways, they are "better" at customer service than a stressed-out human who has been on the phone for eight hours.
That is the uncomfortable truth. A bot doesn't have a bad day. It doesn't get annoyed when you ask the same question three times. It has infinite patience. So, the "human" advantage has to be about more than just being "nice." It has to be about accountability and genuine connection. If something goes wrong, a bot can't take responsibility. It can't feel the weight of a mistake.
Let’s talk about the "Agentic AI" risk in twenty-twenty-six specifically. We are seeing these agents start to handle things like travel planning, personal finance, and even basic project management. If I am an executive assistant or a travel agent, the walls are closing in. What is the next phase of that?
The next phase is the "Autonomous Enterprise." We are starting to see startups that are basically two founders and a thousand AI agents. They don't hire a marketing team; they deploy a marketing agent. They don't hire an accounting firm; they run an automated financial stack. This is the "scale without mass" phenomenon. It allows for incredible innovation, but it doesn't create jobs in the traditional sense. Watch the S-D-Rs in sales—Sales Development Representatives. That entire entry-level layer is vanishing because agents can research, personalize, and follow up on leads better than a human can.
This feels like a recipe for massive wealth inequality. The people who own the AI get all the rewards of the productivity, and the people who used to do the work are left behind. Daniel asked if society has an obligation. It feels like if we don't address this through policy, we are looking at a very fractured world.
It is the defining challenge of our decade. We have to figure out how to decouple "income" from "traditional labor." If the machines are doing the labor, the value they produce has to be distributed in a way that keeps society functioning. Whether that is a "robot tax," a "citizen's dividend," or just massive investment in new types of "human-centric" jobs like elder care, mental health, and community building.
You know, it is interesting you mention elder care. We always hear that "physical" jobs are safe, but even there, with the progress in robotics we have seen lately, combined with multimodal AI vision, even those "safe" jobs are starting to look a bit more vulnerable in the long term.
True, but the "human touch" in caregiving is much harder to replace than the "human touch" in a customer service chat. I think there is a hierarchy of automation. The "cognitive-routine" jobs go first. Then the "cognitive-non-routine" like law and medicine. Then the "physical-routine" like factory work. And finally, the "physical-non-routine" like nursing or plumbing.
So, if you are a plumber, you are probably fine for a while.
Exactly. If you can fix a leaky pipe in a cramped, unpredictable basement, you are safe for the foreseeable future. Robots still struggle with stairs and wet, slippery environments.
It is a strange world where the "prestigious" office jobs are more at risk than the trades. We have spent decades telling kids to go to college and get an office job to be "safe," and now that might be the most dangerous place to be.
It is a total inversion of the twentieth-century career advice. I think we need to be honest with ourselves that "retraining" isn't a magic bullet either. You can't just take a fifty-year-old customer service veteran and tell them to "become a prompt engineer" or "learn to code." It is not just about skills; it is about temperament and the time it takes to gain mastery.
So, what is the "game plan" then? If retraining isn't the whole answer, and the jobs are going away, what do we actually tell people?
I think the game plan has to be three-pronged. One: we need to embrace the efficiency to lower the cost of living for everyone. If AI makes food, energy, and housing cheaper, that is a huge win. Two: we need a massive shift in how we fund the social safety net, moving away from payroll taxes and toward capital or automation taxes. And three: we need to re-value "human work" that isn't about productivity. Art, community service, caregiving, education. These things should be highly compensated precisely because they can't be done by a machine.
That requires a huge cultural shift. We are so used to valuing people based on their "economic output." If a machine can output more than you, does that mean you are worth less? In a purely capitalist sense, yes. In a human sense, no.
Exactly. We have to move from a "work-centric" society to a "purpose-centric" society. And that is a terrifying transition because we don't have a blueprint for it.
Let’s look at the "AI Manager" idea one more time. Daniel asked if it is realistic to think we can have a workforce built around that. I am thinking about the "manager" of an AI customer service team. Instead of managing fifty humans, they manage one giant model and a few specialized agents. Their job is to look at the edge cases, the things the AI couldn't solve, and the "disgruntled" customers who demand a human. That sounds like a very stressful, high-intensity job. You only ever deal with the hardest, most annoying problems.
That is the "Filter Problem." Humans become the filters for everything the AI can't handle. It means your entire workday is spent dealing with the five percent of cases that are absolute nightmares. You lose the "easy wins" that make a job sustainable. It leads to massive burnout. We are already seeing this in content moderation. The AI filters out the easy stuff, and the humans are left looking at the most horrific content on the internet all day.
That is a grim reality. You are basically taking the "soul-crushing" part of the job and making it the entire job.
Right. So, when people say "AI will free us from drudgery," we have to ask: what is left? If the "drudgery" was the easy, repetitive stuff that gave your brain a break, and the "new" job is just constant high-stakes crisis management, is that actually an improvement in quality of life?
I think this is why we are seeing so much pushback. It isn't just about the paycheck; it is about the "vibe" of work. The feeling that you are just a "handler" for a machine rather than a craftsman or a helper.
And I think that is where the "AI industry obligation" comes in. Companies like OpenAI and Anthropic shouldn't just be focused on making the models "smarter." They should be looking at "human-centric design." How can these tools be built to augment a human's day rather than just replacing the easy parts and leaving the human with the scraps?
Is anyone actually doing that? Or is the market pressure for "efficiency" just too strong?
There are some interesting experiments. Some companies are using AI to "pre-fill" information for a human agent, so the human can focus on the conversation rather than the data entry. That feels like augmentation. But as soon as the AI gets good enough to do the conversation too, the temptation to cut the human out entirely is almost impossible for a CEO to resist.
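That pre-fill pattern is easy to picture in code. Here is a minimal sketch of it, with the field names and the hard-coded summary standing in for a real model call.

```python
from dataclasses import dataclass, field

# Sketch of the "augmentation" pattern: the model drafts the case file,
# the human keeps the conversation and the judgment. Field names and the
# prefill stub are illustrative assumptions.
@dataclass
class CaseFile:
    customer_id: str
    ai_summary: str                 # drafted by the model before pickup
    suggested_fixes: list[str]
    human_notes: str = ""           # the agent's own judgment stays here

def prefill(customer_id: str, transcript: str) -> CaseFile:
    # Stand-in for a model call that would read the transcript;
    # hard-coded so the sketch runs.
    summary = "Customer double-billed in December; wants refund and apology."
    fixes = [
        "issue refund for duplicate charge",
        "flag billing bug to engineering",
    ]
    return CaseFile(customer_id, summary, fixes)

case = prefill("cust-042", "...chat transcript...")
print(case.ai_summary)  # the human reads this in seconds, not minutes
case.human_notes = "Waived fee as goodwill; customer is a ten-year account."
```

The design choice is where the boundary sits: the model owns the data entry and summarization, the human owns the actual exchange with the customer. The temptation Herman describes is to keep moving that boundary until nothing is left on the human side.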
It always comes back to the bottom line. Daniel’s prompt really touched a nerve because we see it happening every day now. I mean, even in our own lives. I used to spend hours researching certain topics for my work, and now I can get a summary in ten seconds. I am "more productive," but I also feel like I am losing that "deep dive" experience that used to make me an expert.
We are all becoming "editors" rather than "creators." We edit the AI's code, we edit the AI's writing, we edit the AI's research. It is a different kind of mental labor. It is broader, but shallower.
So, as we look toward the rest of twenty-twenty-six, what are the specific roles you think are going to be the "canary in the coal mine" for this next wave of agentic AI?
Watch the S-D-Rs in sales, as I mentioned, but also watch the "Tier One" creative jobs. Stock photography is basically dead. Basic jingle writing for commercials is on life support. If you need a "generic" version of something creative, the AI has you covered. The "Tier One" of creativity is "I need something that looks professional and fits this mood." That is now a solved problem. The "Tier Two" is "I need something that changes the culture." That still needs a human.
But again, how do you get to "Tier Two" if you can't get a job doing "Tier One"? It is the same ladder problem.
Exactly. We might end up with a "lost generation" of creatives who never got their start because the "entry-level" work was automated. This is why I think we need to rethink education entirely. We shouldn't be teaching kids to do "Tier One" tasks. We should be starting them at "Tier Two" thinking from day one. But that is a huge ask for our current school systems.
It feels like we are talking about a total restructuring of how we live, work, and learn. It is a lot to take in. Daniel really went for the jugular with this one.
He did. And I don't think there are easy answers. But I do think that being aware of it is the first step. We can't pretend it isn't happening. We have to look at these displacements and say, "Okay, this person lost their job. What is the systemic way we support them, and how do we ensure the next generation has a path forward?"
And I think that "accountability" piece is huge. If you are a company that is laying off thousands of people because of AI, you should be expected to contribute to a transition fund. It shouldn't just be "privatize the gains and socialize the losses."
That is the phrase. "Socialize the losses." When people lose their jobs, the "loss" is felt by the community, the family, and the government that has to provide support. If the "gain" is only felt by the shareholders, the system eventually breaks.
Well, on that slightly heavy but very necessary note, I think we have covered a lot of ground. We have looked at the "Tier One" collapse, the rise of agentic and multimodal AI in twenty-twenty-six, the "ladder problem" for new workers, and the potential for a "human-premium" in the future.
It is a lot to chew on. And I think it is important for our listeners to think about their own "game plan." How are you moving "up the stack"? What are you doing that a machine can't simulate?
Exactly. And hey, if you have thoughts on this, or if you are someone who has been directly impacted by AI automation, we want to hear from you. You can get in touch through the contact form at myweirdprompts.com. We really value the perspective of our listeners on these "real-world" implications.
We really do. And if you have been enjoying the show, we would appreciate it if you could leave us a review on your podcast app or on Spotify. It helps us reach more people and keeps the conversation going.
Definitely. It makes a big difference. Alright, I think that is a wrap for episode three hundred and eighty-nine. Thanks to Daniel for sending this in and for being a great housemate, even if he does make us think about the end of the world on a Tuesday afternoon.
It is what he does best.
Thanks for listening to My Weird Prompts. You can find us on Spotify and at myweirdprompts.com. We will be back soon with another prompt.
Take care of yourselves out there. And maybe go talk to a human today, just for the sake of it.
Good advice. See you next time.