Welcome back to My Weird Prompts. I am Corn, and I am joined as always by my brother.
Herman Poppleberry here. It is good to be back in the booth, Corn. I have been looking forward to this one all week. We have a really meaty topic to sink our teeth into today.
We really do. We have an incredibly thoughtful prompt from Daniel. He is looking at the world of assistive technology and how the artificial intelligence boom is not just making our lives more convenient, but fundamentally changing the way people with disabilities interact with the world.
Daniel always has a way of grounding these technical shifts in real human impact. He mentioned in his note how much he relies on speech to text for his own workflow, especially with voice notes, and it got him thinking about the broader implications for people who rely on assistive technology. We are talking about people who are deaf, hard of hearing, blind, or neurodivergent.
He is asking about the progress we have seen over the last decade and what the future looks like for things like an organization layer for neurodivergence. It is a huge topic, but I think it is one of the most important stories in technology right now. It is easy to get caught up in the hype of large language models writing poetry or generating images, but the real revolution is happening in the tools that grant independence.
It really is. And I think we should start with that observation Daniel made about speech to text. He called it the most transformative aspect of the artificial intelligence revolution for him. And for many, it is the difference between being able to communicate fluently in a digital space and being locked out of it. If you cannot use a traditional keyboard or if your thoughts move faster than your fingers can type, speech to text is not a luxury; it is a lifeline.
Well, let us look at that ten year window he mentioned. If we go back to two thousand sixteen, speech to text was... well, let us be honest, it was pretty hit or miss. You had to train the software to your voice, you had to speak like a robot, and if there was any background noise, the whole thing fell apart. I remember trying to use it in a car back then, and it was basically useless.
Oh, the days of Dragon NaturallySpeaking training sessions. You would sit there for an hour reading pre-written paragraphs about the quick brown fox just so the computer could understand your specific accent or the way you pronounce your vowels. And God forbid you had a cold or a noisy fan in the room. The error rates were high, and the frustration levels were even higher.
Right. And now we have Whisper. Daniel mentioned Whisper, and it really is the gold standard right now. What shifted, Herman? Why did we go from that clunky, brittle system to something that can transcribe a crowded room with near-human accuracy?
It is the transformer architecture, Corn. We have talked about this in bits and pieces before, but specifically with Whisper, OpenAI trained it on six hundred eighty thousand hours of multilingual and multitask supervised data. The sheer scale of the training set allowed it to understand context. Old systems were looking at phonemes, the individual sounds, and trying to match them to a dictionary. Whisper is looking at the whole sequence. It is using what we call an attention mechanism to weigh the importance of different parts of the audio stream.
So it is not just listening to the sound; it is predicting the meaning?
It can guess the next word based on the sentence structure, which is how it filters out noise. If you are in a coffee shop and a spoon clinks, the old system might try to turn that clink into a syllable or a weird punctuation mark. Whisper knows that a clink does not fit into the sentence you are currently speaking about your grocery list. It effectively hallucinates the correct word over the noise because it understands the probability of what you were likely saying.
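The idea Herman is describing can be sketched in a few lines. This is a toy illustration, not Whisper's actual decoder: a speech system can combine an acoustic score for each candidate word with a language model prior for how well that word fits the sentence so far, so a noise burst that looks acoustically plausible still loses to a word that makes sense in context. All the vocabulary and probabilities here are invented for the demonstration.

```python
import math

def decode_word(acoustic_scores, lm_scores, lm_weight=0.6):
    """Pick the word maximizing a weighted sum of log-probabilities
    from the acoustic model and the language model prior."""
    combined = {
        word: (1 - lm_weight) * math.log(acoustic_scores[word])
              + lm_weight * math.log(lm_scores[word])
        for word in acoustic_scores
    }
    return max(combined, key=combined.get)

# "I need to buy some ___" -- a spoon clinks during the last word.
# The clink makes a nonsense token look acoustically plausible, but
# the language model prior knows "milk" fits the sentence.
acoustic = {"milk": 0.30, "<clink>": 0.55, "mill": 0.15}
lm_prior = {"milk": 0.70, "<clink>": 0.01, "mill": 0.29}

print(decode_word(acoustic, lm_prior))  # "milk" wins despite the noise
```

The clink has the best acoustic score, but its language-model probability is so low that the combined score collapses, which is the mechanism behind the noise robustness Herman is pointing at.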
And that is huge for accessibility. If you are hard of hearing and you are using a live captioning app in a public space, that noise filtering is the difference between following a conversation and just seeing gibberish on your screen. I have seen people using these apps in loud restaurants, and it is incredible to see the text just flow perfectly despite the clatter of dishes and other people talking.
And the cost has plummeted. Daniel mentioned it is cheap and affordable now. That is because these models have become incredibly efficient. You can run a medium-sized Whisper model on a standard smartphone now without even needing a cloud connection. That is a massive win for privacy and for reliability. You do not want your private conversations being sent to a server just to be transcribed, and you certainly do not want the service to stop working if you lose your internet connection in a basement or on a plane.
It also removes the gatekeepers. Historically, assistive tech was incredibly expensive because the market was considered niche. You would have these specialized devices that cost thousands of dollars because the companies had to recoup their research and development costs from a small user base. But because artificial intelligence is a general-purpose technology, the same model that helps a lawyer dictate a brief is the one that helps a person with limited mobility write an email.
That is the curb-cut effect, Corn. It is one of my favorite concepts in urban design and technology. When you put a ramp in a sidewalk for a wheelchair user, you also help the person with the stroller, the person with the delivery cart, and the person on a bicycle. Assistive tech is the ultimate curb-cut. The demand for better voice notes for the general public is what funded the research that made speech to text viable for the deaf community. It is a rare case where the mainstream market actually accelerates the assistive market.
That is one of our two allowed analogies for the day, Herman. Use the next one wisely.
Noted. But it is true. Think about the progress in text to speech as well. Ten years ago, the voices were robotic and grating. Now, we have neural text to speech that sounds indistinguishable from a human. For someone who is non-verbal, being able to communicate with a voice that has emotion, inflection, and personality is a massive shift in how they are perceived by the world. They can choose a voice that actually fits their identity.
Let us pivot to vision, because Daniel mentioned OrCam. For our listeners who might not know, OrCam is an Israeli company that makes a device that clips onto your glasses. It uses a camera and artificial intelligence to read text, recognize faces, and identify products, then whispers that information into your ear. It was revolutionary when it came out.
It really was. But even since then, we have seen a massive leap. We are moving from simple object recognition, like this is a chair, to semantic understanding. If you look at what G P T four o or the latest Gemini models can do with video input, it is staggering. We are talking about real-time scene description that is nuanced.
I saw a demo recently where a blind user was holding their phone up to a kitchen counter, and the artificial intelligence was not just saying there is a stove. It was saying, your burner is still on, or, the milk carton you are holding is expired by two days. That requires a level of reasoning that simply did not exist three years ago. It is not just identifying the object; it is identifying the state of the object and its relevance to the user.
It is the difference between a label and a narrative. Ten years ago, an app might say, person in front of you. Now, an artificial intelligence can say, your friend Hannah is walking toward you and she looks like she is in a hurry, she is carrying a small blue gift box. That level of detail allows a person with vision impairment to participate in the social nuances of a room, not just navigate the physical obstacles. It provides the context that sighted people take for granted.
And that brings us to the neurodivergence aspect Daniel touched on. He mentioned his own use of voice notes for organization and his excitement for an organization layer. We touched on this in episode seven hundred ninety-one when we talked about artificial intelligence agents, but for someone with A D H D or executive dysfunction, this is not just a productivity hack. It is a life-altering support system.
It really is. One of the biggest challenges with A D H D is the transition from a raw thought to a structured action. You have an idea, you record a voice note, but then that voice note sits in a digital graveyard because the effort of listening back, transcribing it, and turning it into a calendar event is too high. The friction of the process is where the system breaks down.
Right, the friction is the enemy. It is that wall of awful that people talk about.
So, what Daniel is talking about, this organization layer, is essentially an artificial intelligence agent that acts as an executive function proxy. It can listen to a three-minute rambling voice note where you mention three different projects, a grocery item you forgot, and a dentist appointment you need to make. The artificial intelligence can then parse that, put the dentist appointment on your calendar, add the milk to your grocery list, and file the project ideas into their respective folders. It does the heavy lifting of categorization and scheduling.
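The routing step Herman describes can be sketched with a hypothetical example. A real organization layer would use a language model to parse the transcript; here, simple keyword rules stand in so the bucket-routing logic itself is visible. The route names and keywords are invented for illustration.

```python
import re

# Hypothetical sketch of an "organization layer": route each sentence
# of a transcribed voice note into a calendar, grocery, or project
# bucket. Keyword rules stand in for a real language-model parser.
ROUTES = [
    ("calendar", re.compile(r"\b(appointment|meeting|schedule|dentist)\b", re.I)),
    ("groceries", re.compile(r"\b(buy|milk|eggs|grocery|groceries)\b", re.I)),
]

def organize(transcript):
    buckets = {"calendar": [], "groceries": [], "projects": []}
    for sentence in filter(None, (s.strip() for s in transcript.split("."))):
        for bucket, pattern in ROUTES:
            if pattern.search(sentence):
                buckets[bucket].append(sentence)
                break
        else:
            buckets["projects"].append(sentence)  # default: file as a project note
    return buckets

note = ("I need to make a dentist appointment for Tuesday. "
        "Also buy milk. The garden redesign could use raised beds")
result = organize(note)
print(result["calendar"])   # the dentist sentence
print(result["groceries"])  # the milk sentence
print(result["projects"])   # the project idea
```

The point of the sketch is the shape of the system, not the rules: one rambling input, several structured outputs, with the categorization friction handled by software instead of by the person.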
I have seen some early versions of this using Large Language Models to summarize and tag notes, but the real breakthrough will be the proactive side. Imagine the artificial intelligence saying, hey, you mentioned you wanted to call the plumber in that voice note this morning, it is ten A.M. now and you have a gap in your schedule, should I dial the number for you? Or even better, it sees that you have been staring at a blank document for twenty minutes and offers to help you outline the first three paragraphs based on your previous notes.
That is the goal. It is about reducing the cognitive load. And for people on the autism spectrum, we are seeing some fascinating work in emotional recognition and social coaching. There are wearable systems being developed that provide real-time feedback on social cues. If someone is being sarcastic or if a conversation is becoming tense, the artificial intelligence can provide a subtle haptic nudge or a text prompt to help the user navigate that interaction. It is like having a social interpreter in your ear.
That is where it gets a bit controversial for some, though, right? The idea of a computer mediating your social life. Some might say it is making us less human or that it is a crutch.
It is a delicate balance, certainly. But for many, it is about lowering the baseline of anxiety. If you are constantly exhausted from trying to manual-mode your way through social interactions, having an assistant that can handle some of that processing for you is a huge relief. It is not about replacing the human connection; it is about making it more accessible. It is no different than a person using a hearing aid to hear a conversation better. It is just a hearing aid for social and emotional frequency.
We should also talk about the physical side of this. We have focused on software, but artificial intelligence-driven hardware for mobility is seeing some incredible gains. I am talking about artificial intelligence-powered exoskeletons and smart prosthetics that are becoming more intuitive every day.
Oh, the prosthetics are mind-blowing right now. We used to have prosthetics that were essentially just hooks or simple mechanical grips. Then we moved to myoelectric sensors that could read muscle twitches in the residual limb. But now, they are integrating artificial intelligence to predict intent.
How does that work exactly? Is it reading brain waves or just the muscles?
Some are exploring brain-computer interfaces, but many are just using deep learning to analyze the patterns of muscle movement. The artificial intelligence learns that when you move your arm in a certain way, you are usually trying to pick up a coffee cup versus turning a doorknob. It can adjust the grip force and the finger orientation in milliseconds. It makes the prosthetic feel like a part of the body rather than a tool you are operating. It is the difference between driving a car and just walking.
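Herman's point about learning patterns of muscle movement can be sketched with a deliberately simple stand-in. Real prosthetics run deep networks over multichannel electromyography signals; a nearest-centroid classifier over two made-up features shows the core idea of matching an incoming muscle pattern to a learned grip intent. The intents, features, and numbers are all invented.

```python
import math

# Hypothetical sketch of intent prediction from muscle signals.
# Each centroid is a learned average pattern for one grip intent:
# (mean flexor activity, mean extensor activity), invented values.
CENTROIDS = {
    "cup_grip": (0.8, 0.2),
    "doorknob_turn": (0.4, 0.7),
    "rest": (0.1, 0.1),
}

def predict_intent(sample):
    """Return the intent whose learned centroid is closest to the sample."""
    return min(CENTROIDS, key=lambda intent: math.dist(sample, CENTROIDS[intent]))

print(predict_intent((0.75, 0.25)))  # closest to the cup grip pattern
```

A production system replaces the centroids with a trained network and the two features with dozens of sensor channels, but the loop is the same: classify the pattern, then adjust grip force and finger orientation in milliseconds.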
It is amazing how much of this comes back to pattern recognition. Whether it is sound for the deaf, images for the blind, or muscle movements for someone with a limb difference, artificial intelligence is just the world's best pattern-matching engine. It takes messy, real-world data and turns it into something structured and actionable.
It really is. And to Daniel's question about how much it has improved in a decade, the answer is that we have moved from a world of rigid rules to a world of flexible understanding. In two thousand sixteen, if a programmer did not write a specific rule for a scenario, the technology failed. If you encountered a type of door handle the software had not seen, it was stuck. Today, the technology can generalize. It can see a type of chair it has never seen before and still know it is a chair because it understands the essence of chair-ness.
That flexibility is what makes it truly assistive. Because the world is messy. A blind person is not navigating a laboratory; they are navigating a sidewalk with construction, puddles, and distracted pedestrians. A neurodivergent person is not managing a perfect to-do list; they are managing a life that changes by the hour. The artificial intelligence has to be as fluid as the life it is supporting.
You know, Daniel mentioned his son Ezra, who was born just last year in two thousand twenty-five. By the time Ezra is a teenager, the idea of a device not being able to understand your voice or see what you see will probably seem like an ancient relic. Like a rotary phone or a paper map. He will grow up in a world where the environment itself is responsive to human needs.
It is a hopeful thought. But I want to push back on one thing, Herman. We keep talking about how cheap and accessible this is. But is there a risk of a new digital divide? If these organization layers and vision assistants become essential for living an independent life, what happens to the people who cannot afford the latest hardware or the subscription for the best model? We are already seeing the best artificial intelligence models being locked behind twenty-dollar-a-month paywalls.
That is a very real concern, Corn. We are seeing a lot of these tools move to a subscription model. If you rely on a vision assistant to navigate your city, and you cannot pay your monthly bill, do you effectively lose your sight in that digital context? That is a terrifying prospect. It creates a dependency on a corporate entity for basic biological or cognitive functions.
It is. And it is why we need to advocate for these things to be treated like medical necessities, not just consumer gadgets. In many countries, insurance will pay for a traditional wheelchair but not for a high-end artificial intelligence assistant that might actually provide more independence. That policy lag is where the real friction is now. The technology is ready, but the bureaucracy is still stuck in nineteen ninety-five.
I agree. The technology is outstripping the social and legal frameworks. We saw this with the voice biometric dilemma we discussed in episode six hundred fifty-nine. The tech exists to secure our lives with our voices, but the security risks and the lack of regulation are holding it back. It is the same with assistive artificial intelligence. We need a bill of rights for digital assistance.
Let us go back to the neurodivergence side for a moment. Daniel mentioned that organization layer. I think there is another aspect to that, which is the filtering of information. We live in a world of sensory overload. We talked about this in episode four hundred thirty-five. For someone with A D H D or sensory processing issues, the sheer volume of notifications, emails, and sensory input can be paralyzing.
Artificial intelligence is the perfect filter. We are starting to see browsers and operating systems that can summarize an entire webpage into three bullet points or silence all notifications except for the ones that the artificial intelligence deems truly urgent based on your current context. It can recognize that if you are at work, a notification about a sale at a clothing store is noise, but a message from your child's school is a signal.
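The signal-versus-noise rule Herman gives can be sketched as a context-aware filter. This is a hypothetical illustration, not any shipping operating system feature: a notification passes only if its source is on the allow list for the user's current context, and the contexts, sources, and rules are all invented.

```python
# Hypothetical sketch of context-aware notification filtering.
# Which sources count as "signal" depends on the current context.
URGENT_BY_CONTEXT = {
    "work": {"school", "family", "calendar"},
    "focus": {"family"},
    "off": {"school", "family", "calendar", "shopping", "news"},
}

def filter_notifications(context, notifications):
    """Keep only notifications whose source is urgent in this context."""
    allowed = URGENT_BY_CONTEXT[context]
    return [n for n in notifications if n["source"] in allowed]

incoming = [
    {"source": "shopping", "text": "Flash sale ends tonight!"},
    {"source": "school", "text": "Pickup moved to 2:30 today."},
]
for note in filter_notifications("work", incoming):
    print(note["text"])  # only the school message gets through at work
```

In Herman's framing, a real system would learn these allow lists from behavior rather than hard-coding them, but the effect is the same: at work, the sale is noise and the school message is signal.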
I love the idea of a digital noise-canceling headphone for your whole life. Not just for sound, but for information.
That is exactly what it is. It is cognitive noise cancellation. If you are overwhelmed, you can tell your artificial intelligence, give me the absolute minimum information I need to get through the next hour. And it can hide the rest. That kind of agency over your own attention is something we have never really had before. We have always been at the mercy of the loudest notification.
It also helps with the opposite problem, which is getting started on a task. The blank page syndrome. For a lot of neurodivergent people, the barrier to entry for a task is the overwhelming number of steps required. Artificial intelligence can break a big project down into tiny, manageable micro-tasks. It can say, do not worry about the whole report, just write the first three sentences of the introduction.
And it can do it without judgment. That is a huge factor. There is often a lot of shame associated with needing help for basic tasks. An artificial intelligence does not get frustrated if you ask it for the tenth time how to boil an egg or how to format a spreadsheet. It does not roll its eyes. It just gives you the answer. That emotional safety is a huge part of why these tools are being adopted so quickly. It is a safe space to fail and learn.
That is a great point. It is a non-judgmental coach. I think about Daniel and Hannah raising Ezra. They are both tech-savvy, they are using these tools. Ezra is going to grow up with an artificial intelligence that knows him, that understands his learning style, and that can adapt to his needs as he grows. Whether he turns out to be neurotypical or neurodivergent, that personalized support is going to be a baseline for his generation. It is like having a tutor that has been with you since birth.
It really is. And it is not just for children. Think about the aging population. We are seeing a massive increase in dementia and age-related vision and hearing loss. Artificial intelligence assistants can help seniors stay in their homes longer. They can remind them to take their medication, identify who is at the door, and provide social interaction to combat loneliness. It is a way to maintain dignity and independence in the face of physical decline.
It is the ultimate scaling of care. We do not have enough human caregivers to meet the demand of the aging population, but we can deploy artificial intelligence assistants to handle the routine tasks, freeing up human caregivers for the things that require actual empathy and touch. It is about augmenting the human element, not replacing it.
And I think that is the key takeaway here. Artificial intelligence is not replacing humans in the assistive space; it is augmenting the capabilities of the individuals and the people who care for them. It is giving people back their time and their agency.
So, looking forward, what is the next big leap? We have Whisper for speech, we have G P T for reasoning, we have vision models. What is the missing piece? What are we going to be talking about in two thousand thirty-six?
I think it is the integration. Right now, these are all separate apps or devices. You have your hearing aid, your phone, your glasses. They talk to each other a little bit, but it is still fragmented. The next ten years will be about the seamless integration of these sensors into a single personal artificial intelligence. Something that lives across all your devices and has a persistent understanding of your needs. It will not be an app you open; it will be a layer of your existence.
Like a digital twin that acts as an intermediary between you and the world.
If you are deaf, your digital twin is constantly listening and providing captions on whatever screen you are looking at, whether it is your phone, your laptop, or the smart glass in a taxi. If you are blind, it is constantly scanning your environment and giving you a spatial map through haptic feedback in your shoes or audio in your ears. It is a persistent, invisible layer of support that follows you everywhere.
And for Daniel's organization layer, it means the artificial intelligence does not just wait for you to record a voice note. It is aware of your emails, your calendar, your location, and your stated goals. It can see that you are at the grocery store and remind you that you mentioned being out of eggs in a conversation three days ago. It connects the dots that our human brains often miss.
That is the level of context that makes a tool feel like an extension of yourself. And that is where we are headed. We are moving from artificial intelligence as a tool we use to artificial intelligence as a part of how we perceive and interact with reality. It is a fundamental shift in the human experience.
It is a bit mind-bending when you put it that way. But for someone whose natural perception is limited in some way, that artificial perception is a bridge to a much fuller life. It is not about being a cyborg; it is about being more fully human by removing the barriers that hold you back.
It is. And I think we should acknowledge that the people driving this innovation are often the users themselves. The disability community has always been at the forefront of hacking technology to make it work for them. A lot of the features we take for granted in our smartphones today, like predictive text or voice commands, started as accessibility hacks. The rest of the world is just finally catching up to what the assistive community has been doing for decades.
That is a great point. We owe a lot to the people who pushed for these features when they were still considered niche. They were the pioneers of the interface.
We really do. And as Daniel said, it is a total game-changer and a liberation. That word, liberation, is so important. Technology is at its best when it removes barriers to human potential. It is not about the silicon or the code; it is about the person who can now do something they could not do yesterday.
Well said, Herman. I think we have covered a lot of ground here, from the technical shifts in transformer models to the social impact of the curb-cut effect. It is a lot to process, but it is incredibly exciting.
It is. And I hope this gives Daniel some food for thought for his next voice notes. I am sure he is already thinking of ways to use that organization layer once it fully matures. I can imagine him having a much more streamlined day once his artificial intelligence starts doing the heavy lifting of his scheduling.
I have no doubt. Before we wrap up, I want to remind everyone that if you are interested in the deeper mechanics of these models, we did a whole episode on the evolution of artificial intelligence logic and the doctoral-level reasoning we are seeing in two thousand twenty-six. That was episode six hundred fifty-two, and it is a great companion listen to this discussion. It goes into the math behind the reasoning.
It really is. And if you want to dive into the history of how we got here, check out episode five hundred ninety-nine on the invisible artificial intelligence. It covers the decades of innovation that happened before Chat G P T became a household name. It is important to remember that this did not happen overnight.
Great suggestions. And hey, if you have been enjoying My Weird Prompts, please take a moment to leave us a review on your podcast app. Whether it is Spotify, Apple Podcasts, or wherever you listen, those reviews really do help new people find the show. We love seeing the community grow.
They really do. We appreciate all of you who have been with us for these eight hundred-plus episodes. It is a privilege to explore these topics with you. We learn as much from your prompts as you do from our discussions.
It is. You can find our full archive and a contact form at myweirdprompts dot com. If you have a prompt you want us to tackle, you can reach us at show at myweirdprompts dot com. We love hearing from you, especially the weird stuff.
Thanks again to Daniel for the prompt. It was a great one to dig into. It really reminded me why we do this show.
This has been My Weird Prompts. I am Corn.
And I am Herman Poppleberry.
We will see you next time.
Goodbye, everyone!
Herman, I think we actually stayed under our analogy limit today. I was worried when you started talking about the curb-cut.
I was counting! Only two. I am quite proud of us. I almost went for a Swiss Army knife analogy, but I held back.
Me too. Let us go see if we can find a way to automate our own grocery lists. I am tired of forgetting the cilantro.
I am already on it. I have got a script running that parses our last three dinner conversations and cross-references it with the pantry inventory.
Of course you do. See you later, Herman.
See ya, Corn.
This has been My Weird Prompts. Thanks for listening. We will catch you in the next episode for more human-artificial intelligence collaboration. Until then, stay curious.
And keep those prompts coming! We thrive on the weird and the wonderful.
Signing off for real this time. Take care, everyone.
Bye!
Wait, Herman, did you remember to tell them about the R S S feed?
I mentioned the website, it is all there. They know the drill.
Right, right. Just making sure. Okay, let us go.
After you, brother.
Seriously, I am ending the recording now.
Do it.
Done.
You did not actually stop it, did you?
I am doing it now!
See you on the other side.
Goodbye!
Bye!
Okay, now it is stopped.
Finally.
Wait, I can still see the levels moving.
Corn! Just hit the button!
I am hitting it! I am hitting it!
Okay, see you at dinner. Hannah is making that lasagna.
Oh, I am there. Ezra is going to love it.
He is already a fan of the sauce.
Kid has got good taste. Alright, for real this time. Goodbye!
Goodbye!