You know, Herman, sometimes I feel like the world thinks artificial intelligence was born in November two thousand twenty-two. Like it just suddenly materialized the moment a chat box appeared on our screens.
Herman Poppleberry, at your service. And you are so right, Corn. It is the classic overnight success story that actually took seventy years of hard labor in the shadows. Our housemate Daniel sent us a great prompt today that really gets at the heart of this. He is looking past the conversational tools like ChatGPT and Gemini to the stuff that has been humming along in the background for decades. Things like satellite imagery, medical scans, and customs processing.
It is a fascinating angle because those applications are often far more critical than writing a poem about a toaster. If a chatbot hallucinates, it is funny. If a satellite anomaly detection system misses a missile launch or a medical AI misses a tumor, that is a different story entirely. So, let's dig into this. How long has this invisible AI really been part of the infrastructure? Because Daniel is right, it feels like it has been there much longer than the public realizes.
Oh, much longer. If we are looking for the earliest serious adopters, we have to go back to the nineteen sixties and seventies. Long before we had the compute power for neural networks, we had what were called expert systems. These were the first real non-conversational deployments of AI. They did not learn from data the way modern systems do; instead, they were built on massive trees of if-then logic provided by human experts.
Right, and I think the medical field was one of the first places where this really took hold. I remember reading about a system called MYCIN. That was back in the mid-seventies, right?
That's right, though the work started a touch earlier than that. MYCIN was developed at Stanford beginning in the early nineteen seventies, with the main results published by the mid-seventies. It was designed to identify bacteria causing severe infections like bacteremia and meningitis and then recommend the correct dosage of antibiotics. It was incredible for its time. It had a knowledge base of about six hundred rules, which it chained backward from a hypothesis, weighing each conclusion with a certainty factor. And here is the kicker: in evaluations, MYCIN's recommendations were rated as good as, or better than, those of infectious disease experts.
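For anyone who wants to see the shape of that era's technology, here is a toy rule-based diagnostic sketch in Python. The rules and certainty numbers are invented for illustration, and the real MYCIN chained backward from hypotheses rather than forward from facts, but the flavor is the same: expert knowledge frozen into if-then logic.

```python
# A toy rule-based inference engine in the spirit of 1970s expert systems.
# The rules and certainty factors below are illustrative inventions,
# not MYCIN's actual knowledge base.

RULES = [
    # (required findings, conclusion, certainty factor)
    ({"gram_negative", "rod_shaped", "anaerobic"}, "bacteroides", 0.6),
    ({"gram_positive", "chains"}, "streptococcus", 0.7),
    ({"gram_negative", "rod_shaped", "aerobic"}, "pseudomonas", 0.5),
]

def infer(observed_facts):
    """Forward-chain over the rules, returning every conclusion whose
    premises are all satisfied, ranked by certainty factor."""
    conclusions = [
        (conclusion, cf)
        for premises, conclusion, cf in RULES
        if premises <= observed_facts  # set subset: all premises observed
    ]
    return sorted(conclusions, key=lambda c: -c[1])

if __name__ == "__main__":
    facts = {"gram_negative", "rod_shaped", "anaerobic"}
    for organism, cf in infer(facts):
        print(f"Likely organism: {organism} (certainty {cf})")
```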
That is wild. We are talking fifty years ago. But why didn't it become a household name? If it was outperforming doctors in the seventies, why wasn't every hospital using it?
That is the big question. It mostly came down to legal and ethical hurdles, plus the user interface was clunky. But the logic was sound. It showed that AI could be used for high-stakes decision-making long before it could hold a conversation. And that set the stage for the medical imaging AI we see today. Those systems that scan X-rays or MRI results for anomalies are the direct descendants of that early work, though the underlying tech has shifted from rules to deep learning. The first computer-aided detection system for mammography, or CAD, won FDA approval in nineteen ninety-eight, and by the mid two thousands it was in widespread clinical use.
It seems like the military was another very early adopter, which Daniel mentioned in his prompt. When we talk about defense, we are often talking about signal processing and pattern recognition. I imagine that goes back even further?
It does. If you look at the history of DARPA, the Defense Advanced Research Projects Agency, they were funding AI research as early as the nineteen sixties. One of the earliest non-conversational uses was in underwater acoustics for submarine detection, built on the SOSUS hydrophone network, which itself dates back to the nineteen fifties. Imagine trying to find a silent submarine in a massive, noisy ocean. That is the ultimate needle-in-a-haystack problem. Over the decades, the signal processing became increasingly automated, filtering out the whale songs and seismic rumbles to find the specific acoustic signature of a Soviet sub.
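The core trick there is correlating incoming sound against a known template. A minimal sketch of that matched-filter idea, using an invented chirp rather than any real acoustic profile:

```python
import numpy as np

# A minimal matched-filter sketch: detecting a known acoustic signature
# buried in noise via cross-correlation. The "signature" here is an
# invented chirp, not a real submarine profile.

rng = np.random.default_rng(0)
fs = 1000                       # samples per second
t = np.arange(0, 1.0, 1 / fs)

signature = np.sin(2 * np.pi * (50 + 30 * t) * t)    # a one-second chirp template
ocean = rng.normal(0, 1.0, 10 * fs)                   # ten seconds of noise
ocean[4000:4000 + len(signature)] += 0.5 * signature  # bury the target at t = 4 s

# Cross-correlate and look for the peak response.
response = np.correlate(ocean, signature, mode="valid")
peak = int(np.argmax(np.abs(response)))
print(f"Strongest match at t = {peak / fs:.2f} s")    # ~4.00 s
```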
So it was essentially anomaly detection before we called it that. It is interesting how the terminology has shifted. Back then, they might have called it advanced signal processing or automated target recognition. Today, we would just call it AI.
Precisely. The branding has changed more than the goal. Another huge one was satellite imagery. The United States launched the first Landsat satellite in nineteen seventy-two. Suddenly, we had more data about the Earth's surface than humans could possibly look at. You can't have a person staring at every square kilometer of soil or forest to check for changes. So, they developed automated systems to classify land cover. Is this a forest? Is this a wheat field? Is this an urban area? That was the birth of automated remote sensing.
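One concrete descendant of that Landsat-era work is the Normalized Difference Vegetation Index, which exploits the fact that healthy plants reflect near-infrared light strongly and absorb red. A minimal sketch, with made-up reflectance values and a far cruder thresholding scheme than any production classifier would use:

```python
import numpy as np

# NDVI (Normalized Difference Vegetation Index), a classic Landsat-era
# measure. The reflectance values below are invented for illustration.

red = np.array([[0.08, 0.10], [0.30, 0.25]])   # red-band reflectance
nir = np.array([[0.50, 0.45], [0.32, 0.28]])   # near-infrared reflectance

ndvi = (nir - red) / (nir + red + 1e-9)        # ranges roughly -1 to 1

# Crude thresholding into land-cover classes; early automated systems
# used more bands and statistical models, but the spirit is the same.
labels = np.select(
    [ndvi > 0.4, ndvi > 0.1],
    ["vegetation", "sparse/soil"],
    default="water/urban",
)
print(ndvi)
print(labels)
```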
And that leads directly into what Daniel was asking about soil analysis. If you can tell the difference between wheat and corn from space in the seventies, it is only a small leap to using those same spectral signatures to determine if the soil is too dry or lacks nitrogen. But I am curious about the transition. We went from these rigid if-then expert systems to the machine learning revolution. When did that actually hit the industrial sectors?
It started creeping in during the nineteen nineties. While the general public was experiencing the AI winter, where interest in general AI stalled, the industrial world was doubling down on statistics. This is when we moved from expert systems to things like support vector machines and random forests. These are types of machine learning that are incredibly good at classification and regression but don't need to be conversational at all.
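For a feel of how compact those statistical workhorses are today, here is a minimal random forest sketch using scikit-learn; the dataset is synthetic and invented, and the point is the pattern, not the numbers:

```python
# A minimal scikit-learn sketch of the kind of statistical classifier
# that spread through industry in the nineties. The data is synthetic.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fabricate a thousand labeled examples with eight numeric features.
X, y = make_classification(n_samples=1000, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```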
This is where the customs processing example fits in, right? Looking for suspicious patterns in shipping manifests or X-raying containers at ports.
That's it. Customs and border protection agencies globally were early adopters of these statistical models. In the United States, the Automated Targeting System, or ATS, has been used since the late nineties. They have millions of data points on what a normal shipment looks like. If a container from a specific port, with a specific weight, and a specific declared value suddenly shows up with a slightly different signature, the AI flags it for a human to inspect. This has been standard practice for over twenty-five years. It is a massive non-conversational use case that keeps global trade moving.
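A rough sketch of that flag-the-outlier idea, using an isolation forest on invented shipment features; the real ATS is far richer, and nothing here reflects its actual inputs:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# A sketch of manifest anomaly detection. The feature set (weight,
# declared value, container count) is invented for illustration and is
# far simpler than anything a real targeting system uses.

rng = np.random.default_rng(7)
normal = np.column_stack([
    rng.normal(20000, 2000, 500),   # gross weight in kilograms
    rng.normal(50000, 8000, 500),   # declared value in dollars
    rng.integers(1, 4, 500),        # container count
])
suspicious = np.array([[20500, 4000, 1]])  # normal weight, oddly low value

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# predict() returns -1 for outliers and 1 for inliers.
print(detector.predict(suspicious))  # likely [-1]: flag for human inspection
```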
It is funny because most people don't realize they are interacting with AI when they walk through an airport or order something from overseas. It is just part of the plumbing. I think that is a key distinction. Conversational AI is a tool we use, but these other AI systems are tools that use us, or at least use our data, to manage the world around us.
That is a great way to put it, Corn. It is the invisible infrastructure. And let's talk about the energy sector. Predictive maintenance is another one that has been around for a long time. If you are running a power plant or a massive wind farm, you don't want to wait for a turbine to explode to fix it. Companies like General Electric and Siemens have been using sensor data and machine learning for decades to predict when a part is likely to fail. They call it a digital twin. It is a non-conversational AI that lives inside the machine, constantly running simulations to check for wear and tear.
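In its simplest form, predictive maintenance is trend extrapolation: watch a sensor drift upward and estimate when it will cross an alarm level. A toy sketch with invented vibration readings and an assumed failure threshold; real digital twins run physics-based simulations alongside models like this:

```python
import numpy as np

# Toy predictive maintenance: fit a trend to a slowly drifting vibration
# reading and estimate when it crosses a failure threshold. All numbers
# here are invented for illustration.

rng = np.random.default_rng(1)
days = np.arange(200)
vibration = 2.0 + 0.015 * days + rng.normal(0, 0.1, 200)  # mm/s RMS

slope, intercept = np.polyfit(days, vibration, 1)  # linear trend fit
FAILURE_THRESHOLD = 7.0  # mm/s, an assumed alarm level

days_to_failure = (FAILURE_THRESHOLD - intercept) / slope - days[-1]
print(f"Estimated days until threshold: {days_to_failure:.0f}")
```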
I remember we touched on digital twins briefly in an earlier episode, but the scale of it now is just immense. It makes me think about the financial sector too. Credit card companies have been using AI for fraud detection since what, the early nineties?
At least. Falcon, the fraud detection system FICO sells today, was introduced by HNC Software in nineteen ninety-two. If you buy a coffee in Jerusalem and then ten minutes later someone tries to buy a diamond necklace in New York with your card, an AI system kills that transaction in milliseconds. That isn't a human making a call. It is a neural network or a gradient boosting model that has been trained on billions of transactions. They were some of the first to really prove that AI could work at massive scale in real time.
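The Jerusalem-to-New-York example is the classic impossible-travel check, which fits in a few lines of code; real systems like Falcon layer learned models on top of simple rules like this one:

```python
from math import radians, sin, cos, asin, sqrt

# The classic "impossible travel" fraud rule: two card swipes whose
# implied speed exceeds anything a traveler could manage. The speed
# ceiling below is an assumed illustrative value.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

# Coffee in Jerusalem, then a purchase attempt in New York ten minutes later.
distance = haversine_km(31.77, 35.21, 40.71, -74.01)
speed = distance / (10 / 60)  # km per hour
print(f"{distance:.0f} km implies {speed:.0f} km/h")

if speed > 1000:  # generous ceiling for commercial air travel
    print("DECLINE: impossible travel")
```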
So, if we look back at Daniel's question about who the earliest serious adopters were, it sounds like it was a quartet of medicine, military, finance, and logistics. They had the most to gain and, crucially, they had the most data and the most money to throw at the problem.
Spot on. And I would add one more: the postal service. People forget that handwriting recognition was one of the first great triumphs of neural networks. Yann LeCun and his colleagues at Bell Labs built the first LeNet convolutional networks in the late eighties, specifically to read handwritten zip codes on envelopes for the United States Postal Service; that line of work culminated in LeNet five in nineteen ninety-eight.
Wait, so the technology that eventually led to modern computer vision and even parts of the architecture for LLMs was originally used to sort mail?
Yes! By the late nineties, it was estimated that this AI was reading over ten percent of all checks in the United States. It was a massive, successful deployment of a neural network long before anyone thought AI was cool again. It was just an efficient way to sort mail and process checks.
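For the curious, a LeNet-style network is small enough to sketch in full. This PyTorch version is adapted to twenty-eight by twenty-eight inputs and is a teaching sketch, not the original nineteen ninety-eight architecture:

```python
import torch
import torch.nn as nn

# A LeNet-style convolutional network for 28x28 digit images. The layer
# sizes echo the LeNet family but are adapted for simplicity.

class LeNetStyle(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28x28 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, 10),  # ten digits, just like a zip code reader
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNetStyle()
fake_batch = torch.randn(4, 1, 28, 28)   # four fake grayscale digits
print(model(fake_batch).shape)           # torch.Size([4, 10])
```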
This really reframes the whole narrative. It suggests that AI didn't just suddenly get good; it got visible. The underlying math and the utility were proven decades ago in these very specific, high-value niches. But Herman, why do you think it took so long for it to jump from sorting mail and detecting submarines to talking to us?
Language is just incredibly messy, Corn. Identifying a zip code is a constrained problem. There are only ten digits. Identifying a tumor is a classification problem. But language is generative and infinite. It required a massive jump in compute power and a shift in architecture, specifically the transformer model, to handle the sheer complexity of human interaction. But the non-conversational stuff? That was the training ground. It taught us how to build data pipelines, how to handle noise, and how to scale these models.
It is also worth noting that these early systems were much more specialized. A system that detects submarines can't sort mail. But today, we are seeing a convergence. The same type of transformer architecture used for ChatGPT is now being applied back to those non-conversational fields. Like, look at what is happening with protein folding and AlphaFold.
AlphaFold is the gold standard here. We are now on AlphaFold three, which has moved beyond just proteins to predicting the structures of DNA, RNA, and even how drugs interact with them. For fifty years, biologists struggled with the protein folding problem. It was a problem of astronomical complexity. And then DeepMind comes along and essentially solves it. That is perhaps the most significant non-conversational AI breakthrough of our lifetime.
And that is a perfect example of what Daniel was talking about. It isn't a chatbot. You don't ask AlphaFold how it is feeling today. You feed it a sequence and it gives you a structure. It is a functional, industrial tool.
Right. And I think we are going to see a lot more of that. The hype is around the talking bots, but the real economic and scientific impact is happening in these silent sectors. Take agriculture. We are seeing AI integrated into tractors, like the John Deere See and Spray systems, that can identify and spray individual weeds while leaving the crops untouched. Depending on the field, the company has claimed herbicide reductions of up to ninety percent. That is a massive win for the environment and the farmer's bottom line.
It feels like we are moving back toward specialization in a way. We had the era of these massive general models, but now people are realizing that if you want to do something really well, like analyze soil or detect cracks in a bridge, you need a model that is purpose-built for that data.
I agree, though they are often built on the foundations of those general models. It is called fine-tuning. You take a model that understands the world generally and then you teach it to be a master of satellite imagery or medical scans. But here is something most people don't realize about these non-conversational systems: they are often much more reliable than the chatbots because their output is measurable.
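A minimal sketch of that fine-tuning recipe in PyTorch: freeze a general-purpose backbone and train only a new head. The five-class specialist task here is an invented example:

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal fine-tuning sketch: take a general-purpose vision backbone,
# freeze it, and train a new head for a specialist task (here, an
# invented five-class land-cover problem on fake data).

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in backbone.parameters():
    param.requires_grad = False          # freeze the general knowledge

# Replace the final layer; the new head's weights are trainable by default.
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

# Only the new head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on fake data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))
optimizer.zero_grad()
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```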
That is a great point. You can verify if a protein is folded correctly or if a weed was actually sprayed. You can't always verify if an LLM's summary of a historical event is perfectly accurate without doing the work yourself.
Precisely. In industrial AI, the ground truth is usually clear. That is why these sectors were able to adopt it so much earlier. If the model didn't work, it was obvious immediately. In conversation, you can fake it. In customs processing, you can't.
So, let's look at the second-order effects of this. If AI has been in our infrastructure for forty years, what does that mean for how we should view the current surge? Is it just a continuation, or is it a genuine pivot?
I think it is both. It is a continuation in terms of our ability to process data, but it is a pivot in terms of accessibility. Before, you needed a PhD and a massive budget from DARPA or a big bank to use AI. Now, a developer in their garage can use the same underlying tech to build a tool for local farmers. That democratization is the real shift.
And that brings us to the ethics of it. When AI was invisible, we didn't worry about it as much. It was just the algorithm that sorted our mail. But now that it has a voice, we are projecting all our fears onto it. Yet, the invisible AI is arguably making more important decisions about our lives every day. It is deciding who gets a loan, who gets pulled aside at the border, and even how our power grid is balanced.
You hit on something really important there, Corn. There is a concept called algorithmic bias that researchers have been shouting about for decades in these non-conversational fields. If the data used to train a credit scoring AI is biased, the AI will be biased. This has been a known issue in the finance and legal sectors for a long time. The current conversation about AI safety and ethics is finally catching up to what the specialists in these niche fields have been dealing with since the nineties.
It is almost like the conversational AI acted as a giant billboard for the problems that were already there. It forced us to look at the technology because it was finally looking back at us.
That is a brilliant way to put it. It made the invisible visible. Now, when we talk about AI in military defense, people are more aware of the implications because they have seen what a chatbot can do. They can extrapolate that power to a drone or a missile defense system.
Let's talk about some of the more obscure early adopters. I was reading about the use of AI in the oil and gas industry. Apparently, they were using neural networks in the late eighties for seismic interpretation.
Yes, that is another big one. If you are drilling a hole that costs a hundred million dollars, you really want to be sure there is oil down there. They use sound waves to map the underground rock formations. Interpreting those squiggly lines on a screen is incredibly difficult. They started using early pattern recognition to identify salt domes and other geological features that indicate oil traps. It was a massive competitive advantage. If your AI was five percent better at interpreting seismic data, you saved billions.
It always comes back to the money, doesn't it? The sectors with the highest stakes and the most expensive mistakes are always the first to innovate. Which makes me wonder about the areas that haven't adopted AI yet. If it has been around for so long, why are some sectors still so manual?
Often it is because the data is trapped in physical formats. Think about the legal profession or local government. They have mountains of data, but it is often on paper or in fragmented digital systems. The non-conversational AI needs clean, structured data to work. That is why satellite imagery and finance were so early; their data was already digital and relatively organized.
That makes sense. So, the bottleneck isn't the AI itself; it is the data infrastructure. Which means as we digitize more of the world, we are going to see these old AI techniques suddenly appearing in new places.
You bet. We are seeing it in construction now. Using computer vision to track the progress of a building site and compare it to the blueprints in real-time. That is an application of the same tech used for satellite imagery, just at a different scale. Or in waste management, using AI to sort recycling on a conveyor belt. That is the descendant of the postal service's mail sorting.
It is like the technology is finally catching up to the physical world. We have had the brains for a while, but we are just now getting the eyes and the hands in place.
And that brings us to the future. If we look at what Daniel was asking, the trajectory is clear. We started with experts codifying their knowledge into rules. Then we moved to machines finding their own rules in data. And now we are at the point where these machines can explain their reasoning to us, or at least interact with us. But the core mission remains the same: finding patterns that are too complex or too vast for the human brain to process alone.
I think the takeaway for our listeners is that they shouldn't fear AI as some new, alien force. It has been a silent partner in our civilization for half a century. It has been keeping our planes in the air, our power grids stable, and our financial systems secure. The fact that it can now talk to us is just the latest chapter, not the whole book.
That's it. And honestly, I find a lot of comfort in that. These systems have been tested in the real world for a long time. They aren't just theoretical. When you see an AI being used for medical imaging today, it is backed by decades of research into computer vision and pattern recognition. It is not just some random guess by a chatbot.
It is a solid foundation. And hey, if you are finding this deep dive into the hidden history of AI interesting, we would really appreciate it if you could leave us a review on your podcast app or Spotify. It genuinely helps other people find the show and join our weird little community.
It really does. We love seeing those reviews pop up. It makes all the late-night research sessions worth it.
So, looking ahead, what do you think is the next big non-conversational frontier? We have done space, we have done medicine, we have done finance. What is left?
I think it is the climate. We are starting to see AI used for incredibly complex climate modeling. NVIDIA has a project called Earth-two, which is a digital twin of the entire planet's atmosphere. They use it to predict extreme weather events with much higher resolution than traditional models. Or using AI to design new materials for solid-state batteries. These are problems with thousands of variables that are perfect for the kind of non-conversational AI we have been talking about.
It is the ultimate optimization problem. If we can use AI to find oil, we can certainly use it to find the best way to move away from it.
That is the hope. And it is a powerful one. The same tools that were forged in the Cold War to detect submarines might end up being the tools that help us save the planet. There is a certain poetic symmetry to that.
It really is. It shows that technology is neutral; it is all about how we deploy it. From sorting mail to saving lives, AI has been on a long journey. And we are just getting started with the most interesting parts.
Well said, Corn. I think we have given Daniel plenty to chew on. It is a reminder that to understand where we are going, we really have to look at where we have been. The history of AI is much deeper and much more industrial than the current headlines suggest.
Definitely. Thanks to Daniel for sending that in. It is always good to step back from the hype and look at the real engineering that makes our world work.
It has been a blast digging into this with you.
Same here, Herman. And to everyone listening, thanks for joining us on this episode of My Weird Prompts. You can find all our past episodes, including our earlier discussions on AI and remote sensing, at myweirdprompts dot com. We also have an RSS feed there if you want to subscribe.
And if you have your own weird prompt or a question you want us to explore, there is a contact form on the website. We love hearing from you.
Until next time, I am Corn.
And I am Herman Poppleberry.
Thanks for listening. Goodbye!
Goodbye!