#599: The Invisible AI: Decades of Innovation Before ChatGPT

AI didn't start in 2022. Discover the 70-year history of the "invisible" systems that have been quietly running our world for decades.

Episode Details

Duration: 24:45
Pipeline: V4

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

The Myth of the Overnight Success

To the average observer, artificial intelligence appeared like a bolt from the blue in November 2022. The sudden ubiquity of conversational interfaces like ChatGPT led many to believe that AI was a brand-new invention. However, as Herman and Corn discuss in their latest episode, this "overnight success" was actually seventy years in the making. While the public is currently enamored with chatbots that can write poetry or summarize emails, the true backbone of AI has been humming along in the background of our civilization for decades, managing everything from medical diagnoses to national security.

The Era of Expert Systems

The journey into non-conversational AI began long before the era of massive compute power and neural networks. Herman points out that the 1960s and 70s were defined by "expert systems." These were not systems that learned from data in the modern sense; rather, they were intricate webs of "if-then" logic provided by human specialists.
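To make that concrete, here is a minimal sketch of how such a system encodes specialist knowledge as rules. The rules and certainty values below are invented for illustration and are far simpler than anything that was deployed clinically.

```python
# A toy rule-based "expert system": the knowledge lives in hand-written
# if-then rules, not in learned weights. All rules and certainty
# factors here are invented for illustration only.

RULES = [
    # (required findings, conclusion, certainty factor)
    ({"gram_negative", "rod_shaped", "anaerobic"}, "bacteroides", 0.6),
    ({"gram_positive", "chain_growth"}, "streptococcus", 0.7),
    ({"gram_negative", "rod_shaped"}, "e_coli", 0.4),
]

def infer(observed):
    """Fire every rule whose conditions all hold, then rank the
    conclusions by certainty, loosely mimicking MYCIN-style reasoning."""
    matches = [(c, cf) for conditions, c, cf in RULES if conditions <= observed]
    return sorted(matches, key=lambda m: -m[1])

print(infer({"gram_negative", "rod_shaped", "anaerobic"}))
# -> [('bacteroides', 0.6), ('e_coli', 0.4)]
```

Systems like these were transparent (you could read every rule) but brittle: every case the experts had not anticipated meant another hand-written rule.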

A prime example discussed is MYCIN, developed at Stanford in the early 1970s. Designed to identify the bacteria behind severe infections such as bacteremia and meningitis and to recommend antibiotic dosages, MYCIN operated on a knowledge base of roughly 600 rules. Remarkably, in head-to-head evaluations, its recommendations matched, and sometimes exceeded, those of human infectious disease experts. Despite that success, Herman notes that MYCIN never became a household name due to the ethical, legal, and user-interface hurdles of the time. Nevertheless, it proved that AI could handle high-stakes decision-making half a century ago, paving the way for the medical imaging and anomaly detection used in hospitals today.

Cold War Origins and Remote Sensing

The conversation then shifts to the military and defense sectors, which Corn identifies as some of the earliest and most consistent adopters of AI. During the Cold War, the challenge wasn't generating text, but processing massive amounts of noisy signal data. Herman highlights SOSUS (the Sound Surveillance System), which used early pattern-recognition algorithms to filter out the ocean's background noise (whale songs, tectonic rumbles) and locate the specific acoustic signatures of Soviet submarines.
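The core trick, comparing incoming sound against a known signature and watching for a correlation peak, can be sketched in a few lines. The signals below are synthetic stand-ins; real sonar processing is vastly more sophisticated.

```python
# Sketch of matched filtering: slide a known acoustic "signature" along a
# noisy recording and look for a correlation peak. Synthetic data only;
# this illustrates the principle, not any actual sonar system.
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                                    # samples per second
t = np.arange(0, 0.2, 1 / fs)
signature = np.sin(2 * np.pi * 60 * t)       # hypothetical 60 Hz tonal

ocean = rng.normal(0.0, 1.0, 5000)           # background noise
ocean[3000:3000 + signature.size] += 0.8 * signature   # buried target

scores = np.correlate(ocean, signature, mode="valid")
print(f"strongest match at sample {scores.argmax()} (target inserted at 3000)")
```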

This era also saw the birth of automated remote sensing. With the launch of the first Landsat satellite in 1972, the United States was suddenly flooded with more geographical data than any human team could analyze. The solution was the development of automated systems to classify land cover, distinguishing between forests, wheat fields, and urban centers. This early "anomaly detection" is the direct ancestor of the sophisticated satellite AI used today to monitor climate change and crop yields.
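A toy version of that classification step assigns each pixel to the nearest class centroid in spectral space. The band values here are invented for illustration; real Landsat work used more bands and more careful statistics.

```python
# Toy land-cover classifier: label a pixel by its nearest class centroid
# in spectral space. Reflectance values are invented for illustration.
import numpy as np

# Hypothetical (red, near-infrared) reflectance centroids per class.
CENTROIDS = {
    "forest": np.array([0.05, 0.40]),
    "wheat":  np.array([0.10, 0.55]),
    "urban":  np.array([0.30, 0.25]),
}

def classify(pixel):
    """Return the land-cover class whose centroid is closest to the pixel."""
    return min(CENTROIDS, key=lambda c: np.linalg.norm(pixel - CENTROIDS[c]))

print(classify(np.array([0.08, 0.50])))   # -> wheat
```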

The Statistical Revolution of the 1990s

While the general public experienced what historians call the "AI Winter," the industrial world was quietly doubling down on statistics. In the 1990s, the focus shifted from rigid expert systems to statistical machine learning: support vector machines arrived mid-decade, with random forests formalized shortly after 2000.
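For a feel of what those models look like in code, here is a minimal sketch using today's scikit-learn on synthetic data. The library is modern, but the model families are the ones discussed in the episode.

```python
# Train the two workhorse model families of the statistical era, a
# support vector machine and a random forest, on synthetic data.
# Assumes scikit-learn is installed; the data is random, for shape only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (SVC(kernel="rbf"), RandomForestClassifier(n_estimators=100)):
    model.fit(X_train, y_train)
    print(type(model).__name__, "test accuracy:", model.score(X_test, y_test))
```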

Herman and Corn explore how these models became the "plumbing" of global trade. The Automated Targeting System (ATS), used by customs and border protection since the late 1990s, is a perfect example. By analyzing millions of data points on shipping manifests, the AI can flag a container as suspicious based on subtle deviations in weight, origin, or value. Similarly, the financial sector revolutionized fraud detection during this period. The Falcon system, introduced in 1992 by HNC Software (later acquired by FICO), used neural networks to evaluate transactions in milliseconds, killing fraudulent credit card charges before the customer even realized there was a problem.
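A rough sketch of that style of screening fits an off-the-shelf anomaly detector to hypothetical shipment records. The features and data below are invented; the real ATS and Falcon systems are proprietary and far richer.

```python
# Manifest screening as anomaly detection: fit a model to "normal"
# shipments, then score new ones. All features and values are invented.
# Assumes scikit-learn is installed.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns: declared weight (tons), declared value ($k), origin risk score.
normal_shipments = np.column_stack([
    rng.normal(20, 3, 1000),
    rng.normal(150, 20, 1000),
    rng.uniform(0.0, 0.3, 1000),
])

screener = IsolationForest(random_state=0).fit(normal_shipments)

new_shipments = np.array([
    [21.0, 148.0, 0.1],   # looks routine
    [21.0,  40.0, 0.9],   # value and origin deviate: flag for inspection
])
print(screener.predict(new_shipments))   # 1 = normal, -1 = flagged
```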

Sorting Mail and the Roots of Modern Vision

Perhaps the most surprising insight from the discussion is the role of the United States Postal Service in AI history. In the late 1980s, Yann LeCun developed the first LeNet convolutional networks, designed specifically to recognize handwritten zip codes; the better-known LeNet-5 refinement followed in 1998. By the late 90s, these networks were reading a meaningful share of the checks and mail processed in the U.S., by some estimates more than ten percent of all checks.
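As a hedged illustration, here is a LeNet-flavored convolutional network in PyTorch. The layer sizes follow the spirit of LeCun's designs rather than any exact historical specification, and the network below is untrained.

```python
# A LeNet-flavored convolutional network for 28x28 digit images.
# Layer sizes echo LeCun's designs but are not the 1989 or 1998 originals.
import torch
import torch.nn as nn

class LeNetStyle(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28x28 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, n_classes),                   # one logit per digit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

digits = torch.randn(4, 1, 28, 28)       # a batch of fake zip-code digits
print(LeNetStyle()(digits).shape)        # torch.Size([4, 10])
```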

Corn observes that the very technology that eventually led to modern computer vision, and that helped lay the groundwork for the deep learning behind Large Language Models (LLMs), was essentially proven while sorting envelopes. This reframes the narrative of AI development: the technology didn't necessarily get "better" overnight; it simply became more visible as it moved from specialized tasks like handwriting recognition to general tasks like conversation.

The Convergence: From Chatbots Back to Science

The episode concludes with a look at how the "visible" AI of today is being looped back into the "invisible" sectors of the past. The transformer architecture that powers ChatGPT is now being applied to massive scientific challenges. The standout example is DeepMind’s AlphaFold, which has effectively solved the 50-year-old biological puzzle of protein folding.

Herman and Corn argue that while the hype remains centered on talking bots, the most significant economic and environmental impacts are happening in fields like agriculture. They cite the "See and Spray" technology in modern tractors, which uses AI to identify and target individual weeds in real-time, reducing herbicide use by up to 90%.

Ultimately, the discussion serves as a reminder that AI is not just a tool for conversation, but a fundamental layer of modern infrastructure. Whether it is predicting a turbine failure through a "digital twin" or sorting a letter across the country, AI has been our silent partner for decades, and its most important work is often the work we never see.

Downloads

Episode Audio: download the full episode as an MP3 file
Transcript (TXT): plain text transcript file
Transcript (PDF): formatted PDF with styling

Full Transcript

Episode #599: The Invisible AI: Decades of Innovation Before ChatGPT

Daniel's Prompt
Daniel
"I’d love to explore how AI has been deployed in non-conversational use cases for much longer than the recent surge in consumer AI. We've talked about its role in anomaly detection in satellite imagery, soil analysis, customs processing, medical imaging, and even military defense. How long has AI been used in these areas, and who were the earliest serious adopters of the technology long before tools like ChatGPT became household names?"
Corn
You know, Herman, sometimes I feel like the world thinks artificial intelligence was born in November two thousand twenty-two. Like it just suddenly materialized the moment a chat box appeared on our screens.
Herman
Herman Poppleberry, at your service. And you are so right, Corn. It is the classic overnight success story that actually took seventy years of hard labor in the shadows. Our housemate Daniel sent us a great prompt today that really gets at the heart of this. He is looking past the conversational tools like ChatGPT and Gemini to the stuff that has been humming along in the background for decades. Things like satellite imagery, medical scans, and customs processing.
Corn
It is a fascinating angle because those applications are often far more critical than writing a poem about a toaster. If a chatbot hallucinates, it is funny. If a satellite anomaly detection system misses a missile launch or a medical AI misses a tumor, that is a different story entirely. So, let's dig into this. How long has this invisible AI really been part of the infrastructure? Because Daniel is right, it feels like it has been there much longer than the public realizes.
Herman
Oh, much longer. If we are looking for the earliest serious adopters, we have to go back to the nineteen sixties and seventies. Long before we had the compute power for neural networks, we had what were called expert systems. These were the first real non-conversational deployments of AI. They did not learn from data the way modern systems do; instead, they were built on massive trees of if-then logic provided by human experts.
Corn
Right, and I think the medical field was one of the first places where this really took hold. I remember reading about a system called MYCIN. That was back in the mid-seventies, right?
Herman
That's right. MYCIN was developed at Stanford in the early nineteen seventies. It was designed to identify bacteria causing severe infections like bacteremia and meningitis and then recommend the correct dosage of antibiotics. It was incredible for its time. It had a knowledge base of about six hundred rules. And here is the kicker: in head-to-head tests, MYCIN's performance was often as good as, or even better than, infectious disease experts.
Corn
That is wild. We are talking fifty years ago. But why didn't it become a household name? If it was outperforming doctors in the seventies, why wasn't every hospital using it?
Herman
That is the big question. It mostly came down to legal and ethical hurdles, plus the user interface was clunky. But the logic was sound. It showed that AI could be used for high-stakes decision-making long before it could hold a conversation. And that set the stage for the medical imaging AI we see today. Those systems that scan X-rays or MRI results for anomalies are the direct descendants of that early work, though the underlying tech has shifted from rules to deep learning. By the early two thousands, computer-aided detection, or CAD, was already moving into routine use in mammography.
Corn
It seems like the military was another very early adopter, which Daniel mentioned in his prompt. When we talk about defense, we are often talking about signal processing and pattern recognition. I imagine that goes back even further?
Herman
It does. If you look at the history of DARPA, the Defense Advanced Research Projects Agency, they were funding AI research as early as the nineteen sixties. One of the earliest non-conversational uses was in underwater acoustics for submarine detection, specifically the SOSUS system. Imagine trying to find a silent submarine in a massive, noisy ocean. That is the ultimate needle-in-a-haystack problem. They were using early algorithmic patterns to filter out the whale songs and tectonic shifts to find the specific acoustic signature of a Soviet sub.
Corn
So it was essentially anomaly detection before we called it that. It is interesting how the terminology has shifted. Back then, they might have called it advanced signal processing or automated target recognition. Today, we would just call it AI.
Herman
Precisely. The branding has changed more than the goal. Another huge one was satellite imagery. The United States launched the first Landsat satellite in nineteen seventy-two. Suddenly, we had more data about the Earth's surface than humans could possibly look at. You can't have a person staring at every square kilometer of soil or forest to check for changes. So, they developed automated systems to classify land cover. Is this a forest? Is this a wheat field? Is this an urban area? That was the birth of automated remote sensing.
Corn
And that leads directly into what Daniel was asking about soil analysis. If you can tell the difference between wheat and corn from space in the seventies, it is only a small leap to using those same spectral signatures to determine if the soil is too dry or lacks nitrogen. But I am curious about the transition. We went from these rigid if-then expert systems to the machine learning revolution. When did that actually hit the industrial sectors?
Herman
It started creeping in during the nineteen nineties. While the general public was experiencing the AI winter, where interest in general AI stalled, the industrial world was doubling down on statistics. This is when we moved from expert systems to things like support vector machines and random forests. These are types of machine learning that are incredibly good at classification and regression but don't need to be conversational at all.
Corn
This is where the customs processing example fits in, right? Looking for suspicious patterns in shipping manifests or X-raying containers at ports.
Herman
That's it. Customs and border protection agencies globally were early adopters of these statistical models. In the United States, the Automated Targeting System, or ATS, has been used since the late nineties. They have millions of data points on what a normal shipment looks like. If a container from a specific port, with a specific weight, and a specific declared value suddenly shows up with a slightly different signature, the AI flags it for a human to inspect. This has been standard practice for over twenty-five years. It is a massive non-conversational use case that keeps global trade moving.
Corn
It is funny because most people don't realize they are interacting with AI when they walk through an airport or order something from overseas. It is just part of the plumbing. I think that is a key distinction. Conversational AI is a tool we use, but these other AI systems are tools that use us, or at least use our data, to manage the world around us.
Herman
That is a great way to put it, Corn. It is the invisible infrastructure. And let's talk about the energy sector. Predictive maintenance is another one that has been around for a long time. If you are running a power plant or a massive wind farm, you don't want to wait for a turbine to explode to fix it. Companies like General Electric and Siemens have been using sensor data and machine learning for decades to predict when a part is likely to fail. They call it a digital twin. It is a non-conversational AI that lives inside the machine, constantly running simulations to check for wear and tear.
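A minimal sketch of the failure-prediction idea Herman describes, with invented sensor readings and thresholds (real digital twins run full physics simulations, not a single statistic):

```python
# Toy predictive maintenance: compare a turbine's live vibration reading
# against its healthy baseline and alert before failure. All numbers
# are invented for illustration.
import statistics

baseline = [0.9, 1.1, 1.0, 0.95, 1.05, 1.0]   # mm/s, healthy history
mean, stdev = statistics.mean(baseline), statistics.stdev(baseline)

def needs_maintenance(reading: float, sigmas: float = 3.0) -> bool:
    """Flag the part when vibration drifts outside the normal band."""
    return abs(reading - mean) > sigmas * stdev

for reading in (1.02, 1.08, 1.45):
    print(reading, "->", "ALERT" if needs_maintenance(reading) else "ok")
```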
Corn
I remember we touched on digital twins briefly in an earlier episode, but the scale of it now is just immense. It makes me think about the financial sector too. Credit card companies have been using AI for fraud detection since what, the early nineties?
Herman
At least. HNC Software introduced the Falcon fraud assessment system in nineteen ninety-two; FICO acquired the company a decade later. If you buy a coffee in Jerusalem and then ten minutes later someone tries to buy a diamond necklace in New York with your card, an AI system kills that transaction in milliseconds. That isn't a human making a call. It is a neural network or a gradient boosting model that has been trained on billions of transactions. They were some of the first to really prove that AI could work at massive scale in real-time.
Corn
So, if we look back at Daniel's question about who the earliest serious adopters were, it sounds like it was a quartet of medicine, military, finance, and logistics. They had the most to gain and, crucially, they had the most data and the most money to throw at the problem.
Herman
Spot on. And I would add one more: the postal service. People forget that handwriting recognition was one of the first great triumphs of neural networks. Yann LeCun built the first LeNet network in the late eighties; the famous LeNet five refinement came later, in the nineties. It was specifically designed to read handwritten zip codes on envelopes for the United States Postal Service.
Corn
Wait, so the technology that eventually led to modern computer vision, and helped lay the groundwork for the deep learning behind LLMs, was originally used to sort mail?
Herman
Yes! By the late nineties, it was estimated that this AI was reading over ten percent of all checks in the United States. It was a massive, successful deployment of a neural network long before anyone thought AI was cool again. It was just an efficient way to sort mail and process checks.
Corn
This really reframes the whole narrative. It suggests that AI didn't just suddenly get good; it got visible. The underlying math and the utility were proven decades ago in these very specific, high-value niches. But Herman, why do you think it took so long for it to jump from sorting mail and detecting submarines to talking to us?
Herman
Language is just incredibly messy, Corn. Identifying a zip code is a constrained problem. There are only ten digits. Identifying a tumor is a classification problem. But language is generative and infinite. It required a massive jump in compute power and a shift in architecture, specifically the transformer model, to handle the sheer complexity of human interaction. But the non-conversational stuff? That was the training ground. It taught us how to build data pipelines, how to handle noise, and how to scale these models.
Corn
It is also worth noting that these early systems were much more specialized. A system that detects submarines can't sort mail. But today, we are seeing a convergence. The same type of transformer architecture used for ChatGPT is now being applied back to those non-conversational fields. Like, look at what is happening with protein folding and AlphaFold.
Herman
AlphaFold is the gold standard here. We are now on AlphaFold three, which has moved beyond just proteins to predicting the structures of DNA, RNA, and even how drugs interact with them. For fifty years, biologists struggled with the protein folding problem. It was a problem of astronomical complexity. And then DeepMind comes along and essentially solves it. That is perhaps the most significant non-conversational AI breakthrough of our lifetime.
Corn
And that is a perfect example of what Daniel was talking about. It isn't a chatbot. You don't ask AlphaFold how it is feeling today. You feed it a sequence and it gives you a structure. It is a functional, industrial tool.
Herman
Right. And I think we are going to see a lot more of that. The hype is around the talking bots, but the real economic and scientific impact is happening in these silent sectors. Take agriculture. We are seeing AI integrated into tractors, like the John Deere See and Spray systems, that can identify and spray individual weeds while leaving the crops untouched. This reduces herbicide use by up to ninety percent. That is a massive win for the environment and the farmer's bottom line.
Corn
It feels like we are moving back toward specialization in a way. We had the era of these massive general models, but now people are realizing that if you want to do something really well, like analyze soil or detect cracks in a bridge, you need a model that is purpose-built for that data.
Herman
I agree, though they are often built on the foundations of those general models. It is called fine-tuning. You take a model that understands the world generally and then you teach it to be a master of satellite imagery or medical scans. But here is something most people don't realize about these non-conversational systems: they are often much more reliable than the chatbots because their output is measurable.
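A minimal sketch of the fine-tuning pattern Herman describes: freeze a general-purpose backbone and train only a small task-specific head. The untrained backbone below is a stand-in for a real pretrained network.

```python
# Fine-tuning sketch: the backbone's general knowledge stays frozen;
# only the new task head (here, 4 invented land-cover classes) learns.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
for param in backbone.parameters():
    param.requires_grad = False           # freeze the general model

head = nn.Linear(256, 4)                  # new specialty: 4 classes
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 28, 28)        # fake specialty data
labels = torch.randint(0, 4, (8,))

loss = loss_fn(head(backbone(images)), labels)
loss.backward()                           # gradients reach only the head
optimizer.step()
print("one fine-tuning step, loss:", round(loss.item(), 3))
```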
Corn
That is a great point. You can verify if a protein is folded correctly or if a weed was actually sprayed. You can't always verify if an LLM's summary of a historical event is perfectly accurate without doing the work yourself.
Herman
Precisely. In industrial AI, the ground truth is usually clear. That is why these sectors were able to adopt it so much earlier. If the model didn't work, it was obvious immediately. In conversation, you can fake it. In customs processing, you can't.
Corn
So, let's look at the second-order effects of this. If AI has been in our infrastructure for forty years, what does that mean for how we should view the current surge? Is it just a continuation, or is it a genuine pivot?
Herman
I think it is both. It is a continuation in terms of our ability to process data, but it is a pivot in terms of accessibility. Before, you needed a PhD and a massive budget from DARPA or a big bank to use AI. Now, a developer in their garage can use the same underlying tech to build a tool for local farmers. That democratization is the real shift.
Corn
And that brings us to the ethics of it. When AI was invisible, we didn't worry about it as much. It was just the algorithm that sorted our mail. But now that it has a voice, we are projecting all our fears onto it. Yet, the invisible AI is arguably making more important decisions about our lives every day. It is deciding who gets a loan, who gets pulled aside at the border, and even how our power grid is balanced.
Herman
You hit on something really important there, Corn. There is a concept called algorithmic bias that researchers have been shouting about for decades in these non-conversational fields. If the data used to train a credit scoring AI is biased, the AI will be biased. This has been a known issue in the finance and legal sectors for a long time. The current conversation about AI safety and ethics is finally catching up to what the specialists in these niche fields have been dealing with since the nineties.
Corn
It is almost like the conversational AI acted as a giant billboard for the problems that were already there. It forced us to look at the technology because it was finally looking back at us.
Herman
That is a brilliant way to put it. It made the invisible visible. Now, when we talk about AI in military defense, people are more aware of the implications because they have seen what a chatbot can do. They can extrapolate that power to a drone or a missile defense system.
Corn
Let's talk about some of the more obscure early adopters. I was reading about the use of AI in the oil and gas industry. Apparently, they were using neural networks in the late eighties for seismic interpretation.
Herman
Yes, that is another big one. If you are drilling a hole that costs a hundred million dollars, you really want to be sure there is oil down there. They use sound waves to map the underground rock formations. Interpreting those squiggly lines on a screen is incredibly difficult. They started using early pattern recognition to identify salt domes and other geological features that indicate oil traps. It was a massive competitive advantage. If your AI was five percent better at interpreting seismic data, you saved billions.
Corn
It always comes back to the money, doesn't it? The sectors with the highest stakes and the most expensive mistakes are always the first to innovate. Which makes me wonder about the areas that haven't adopted AI yet. If it has been around for so long, why are some sectors still so manual?
Herman
Often it is because the data is trapped in physical formats. Think about the legal profession or local government. They have mountains of data, but it is often on paper or in fragmented digital systems. The non-conversational AI needs clean, structured data to work. That is why satellite imagery and finance were so early; their data was already digital and relatively organized.
Corn
That makes sense. So, the bottleneck isn't the AI itself; it is the data infrastructure. Which means as we digitize more of the world, we are going to see these old AI techniques suddenly appearing in new places.
Herman
You bet. We are seeing it in construction now. Using computer vision to track the progress of a building site and compare it to the blueprints in real-time. That is an application of the same tech used for satellite imagery, just at a different scale. Or in waste management, using AI to sort recycling on a conveyor belt. That is the descendant of the postal service's mail sorting.
Corn
It is like the technology is finally catching up to the physical world. We have had the brains for a while, but we are just now getting the eyes and the hands in place.
Herman
And that brings us to the future. If we look at what Daniel was asking, the trajectory is clear. We started with experts codifying their knowledge into rules. Then we moved to machines finding their own rules in data. And now we are at the point where these machines can explain their reasoning to us, or at least interact with us. But the core mission remains the same: finding patterns that are too complex or too vast for the human brain to process alone.
Corn
I think the takeaway for our listeners is that they shouldn't fear AI as some new, alien force. It has been a silent partner in our civilization for half a century. It has been keeping our planes in the air, our power grids stable, and our financial systems secure. The fact that it can now talk to us is just the latest chapter, not the whole book.
Herman
That's it. And honestly, I find a lot of comfort in that. These systems have been tested in the real world for a long time. They aren't just theoretical. When you see an AI being used for medical imaging today, it is backed by decades of research into computer vision and pattern recognition. It is not just some random guess by a chatbot.
Corn
It is a solid foundation. And hey, if you are finding this deep dive into the hidden history of AI interesting, we would really appreciate it if you could leave us a review on your podcast app or Spotify. It genuinely helps other people find the show and join our weird little community.
Herman
It really does. We love seeing those reviews pop up. It makes all the late-night research sessions worth it.
Corn
So, looking ahead, what do you think is the next big non-conversational frontier? We have done space, we have done medicine, we have done finance. What is left?
Herman
I think it is the climate. We are starting to see AI used for incredibly complex climate modeling. NVIDIA has a project called Earth-two, which is a digital twin of the entire planet's atmosphere. They use it to predict extreme weather events with much higher resolution than traditional models. Or using AI to design new materials for solid-state batteries. These are problems with thousands of variables that are perfect for the kind of non-conversational AI we have been talking about.
Corn
It is the ultimate optimization problem. If we can use AI to find oil, we can certainly use it to find the best way to move away from it.
Herman
That is the hope. And it is a powerful one. The same tools that were forged in the Cold War to detect submarines might end up being the tools that help us save the planet. There is a certain poetic symmetry to that.
Corn
It really is. It shows that technology is neutral; it is all about how we deploy it. From sorting mail to saving lives, AI has been on a long journey. And we are just getting started with the most interesting parts.
Herman
Well said, Corn. I think we have given Daniel plenty to chew on. It is a reminder that to understand where we are going, we really have to look at where we have been. The history of AI is much deeper and much more industrial than the current headlines suggest.
Corn
Definitely. Thanks to Daniel for sending that in. It is always good to step back from the hype and look at the real engineering that makes our world work.
Herman
It has been a blast digging into this with you.
Corn
Same here, Herman. And to everyone listening, thanks for joining us on this episode of My Weird Prompts. You can find all our past episodes, including our earlier discussions on AI and remote sensing, at myweirdprompts dot com. We also have an RSS feed there if you want to subscribe.
Herman
And if you have your own weird prompt or a question you want us to explore, there is a contact form on the website. We love hearing from you.
Corn
Until next time, I am Corn.
Herman
And I am Herman Poppleberry.
Corn
Thanks for listening. Goodbye!
Herman
Goodbye!

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.