Hey everyone, welcome back to My Weird Prompts. It is February nineteenth, twenty twenty-six, and we are sitting here in our usual spot in Jerusalem. The sun is just starting to dip behind the Judean hills, casting that long, golden light over the stone walls, and I have to say, I have had a very strange, very persistent melody stuck in my head all morning. It is driving me a little bit crazy, Herman.
Herman Poppleberry here. And let me guess, Corn, given the prompt we are looking at today, is this melody perhaps a rousing, high-production march about a specific, pungent root vegetable?
It absolutely is. I have been humming the chorus to Onion in the Pan since six AM. Today’s prompt comes from Daniel, and it is a deep dive into the rapid, almost vertical rise of artificial intelligence generated music. Specifically, Daniel pointed us toward this track he found on Suno called Onion in the Pan. It is exactly what it sounds like—a celebration of the humble onion, but it is set to this incredibly professional-sounding, cinematic marching tune. And honestly, it is catchy. It is dangerously catchy. It sounds like something out of a high-budget Pixar musical.
It really is an earworm. I spent about three hours yesterday diving into the Suno and Udio archives after Daniel sent that over, and the progress is staggering. What is fascinating to me, looking back from our vantage point in early twenty twenty-six, is how quickly we moved from those early, slightly distorted clips of twenty twenty-three that sounded like they were recorded underwater through a broken radio, to something like Onion in the Pan. We have reached a point where the production value—the mixing, the mastering, the vocal timbre—is virtually indistinguishable from a top-tier studio recording.
That is the tipping point Daniel mentioned in his message. A few years ago, Suno and its competitors were just fun curiosities. You would type in a prompt like "death metal song about a toaster" and get something that was mostly a joke—glitchy, weird, and clearly synthetic. But now, the industry has professionalized. We are seeing these apps marketed not just at people who want to make a funny song for a birthday party, but at actual working musicians. They are using them for generating hooks, for complex stem separation, and for filling out arrangements.
Exactly. And that is where the friction starts. I have been following the massive backlash to the recent ad campaigns from these companies. There is a real sense of existential dread in the creative community right now. People are asking the hard questions: if a machine can write a perfect pop hook, arrange the brass section, and produce a rousing march about an onion in thirty seconds for the cost of a few cents, what happens to the person who spent twenty years learning the craft? What happens to the session musician in Nashville or the jingle writer in London?
It is a heavy question, and it gets to the heart of what Daniel is asking. But I want to start with the technical point he raised about the blurry line. Daniel asked if we have not already been using artificial intelligence in music editing for a long time. He is right, isn't he? Think about digital audio workstations like Ableton, Logic Pro, or Pro Tools. We have been using sophisticated algorithms for pitch correction, time-stretching, and even algorithmic drum patterns for decades. So, Herman, as our resident deep-diver into the tech, where is the actual line between a tool and a replacement?
That is the million-dollar question, Corn, and the answer has shifted significantly in just the last twenty-four months. To understand it, we have to look at the evolution of how we interact with sound. In the past, tools were mostly subtractive, corrective, or assistive. If you used Auto-Tune in the early two thousands, you still had to have a human singer provide the raw material—the breath, the phrasing, the intent. The software was just nudging the frequencies to the nearest semitone. Even with something like algorithmic composition in the late nineties, a human had to set the parameters, choose the scales, and curate every single output.
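For listeners who want to see what "nudging the frequencies to the nearest semitone" actually means as arithmetic, here is a minimal sketch in Python. It is an illustration of the underlying math, not any vendor's actual pitch-correction algorithm, and the example frequency is made up.

```python
import math

A4_HZ = 440.0  # reference pitch, standard concert tuning

def snap_to_semitone(freq_hz: float) -> float:
    """Return the nearest equal-tempered pitch to a detected frequency.

    This is the arithmetic at the heart of naive pitch correction:
    map the frequency onto the 12-tone-per-octave grid, round to the
    nearest step, and map back to Hertz.
    """
    midi = 69 + 12 * math.log2(freq_hz / A4_HZ)   # continuous MIDI note number
    nearest = round(midi)                          # nearest semitone
    return A4_HZ * 2 ** ((nearest - 69) / 12)      # back to Hz

# A singer lands slightly flat of A4:
print(snap_to_semitone(436.0))  # -> 440.0
```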
Right, so the human was the architect and the software was just the high-tech power tool. You still had to swing the hammer, even if the hammer was digital.
Precisely. But what we are seeing now with generative models—especially the V-five and V-six models that dropped recently—is a fundamental shift from tools to agents. When you use a generative model like Suno, you are not providing the raw material. You are providing a high-level intent, a prompt. You say, "make me a folk song about an onion in the style of a British marching band," and the model handles the melody, the harmony, the lyrics, the vocal performance, and the entire mixing chain. That is a shift in the locus of creativity. The machine isn't just helping you build the house; it is designing the house, sourcing the materials, and laying the bricks while you just watch from the sidewalk.
But is it really that different from a high-level producer? If I am a producer and I hire a session singer, a songwriter, and an arranger, I am not playing the instruments. I am the one who had the idea, and I am steering those people to get the exact tone I want. If I steer the artificial intelligence to get Onion in the Pan exactly right, am I not still the producer?
You are definitely a director or a curator. But the music industry is struggling with the fact that the labor—the actual physical sweat and the years of practice required to play that trumpet part or hit those high notes—is being bypassed. That is why the backlash is so intense. It is not just about the idea; it is about the devaluation of the skill. When the barrier to entry for "professional-sounding" music drops to zero, the market gets flooded. We are seeing millions of tracks uploaded to streaming services every month that were generated in seconds.
I see that side of it, and it is scary for professionals. But let’s look at the other side of Daniel’s prompt. He mentioned using this for royalty-free music for YouTube or documentaries. I mean, Herman, have you ever tried to find the perfect piece of library music? It is a nightmare. You spend four hours scrolling through tracks that are almost right, but maybe the bridge is too long, or the mood is slightly too aggressive. If a small creator can generate a custom track that fits their video perfectly, without worrying about copyright strikes or paying five hundred dollars for a license, that is a massive win for democratization.
It is a massive win for the end user, but it is a direct hit to the livelihoods of the people who make that library music. There is a whole sector of the industry—composers who never wanted to be rock stars but made a great living writing "tense background music" or "happy acoustic guitar tracks"—that is being decimated. When we talk about the line of acceptability, I think we have to distinguish between different types of music. There is music as art, which is about human connection, shared history, and expression, and then there is music as a utility—the sonic wallpaper for a corporate presentation or a cooking video.
Do you think we are going to see a permanent split there? Like, a "Human-Made" certification versus "Utility" music?
I think we are already seeing the beginning of it. Some platforms are starting to experiment with "Verified Human" badges. But the line is getting blurrier because the artificial intelligence is getting better at the "art" part. It can now simulate the emotional cracks in a voice, the slight hesitation before a beat, or the imperfections in a guitar strum that we usually associate with the human soul. It is mimicking the "ghost in the machine."
That is the part that actually creeps me out a bit. I was listening to a generated blues track the other day that had this little intake of breath before the chorus. It was entirely synthesized, but it felt so human. It felt like a trick. It felt like the machine was lying to me about having lungs.
It is a statistical trick, Corn. The model has been trained on billions of hours of human audio. It has learned that humans breathe before they hit a big note because that is what the data shows. It isn't feeling the emotion; it is just predicting the next most likely sound wave based on a massive probability map. But here is the philosophical kicker: if the listener feels the emotion, does the origin of the sound wave matter? If a teenager in their bedroom cries while listening to an artificial intelligence song about heartbreak, is that grief less real?
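To make Herman's "predicting the next most likely sound" concrete, here is a toy sketch of the sampling step. Real systems work over learned audio tokens with thousands of codebook entries and enormous trained models; the five-entry vocabulary and the scores below are invented purely to show the mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend the model has just scored every possible "next audio token"
# (real systems use thousands of learned codebook entries; we use 5).
logits = np.array([2.1, 0.3, -1.0, 1.7, 0.0])

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Pick the next token from a softmax over the model's scores.

    Lower temperature -> more predictable, "safer" output;
    higher temperature -> more surprising (and more chaotic) music.
    """
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

print(sample_next_token(logits, temperature=0.8))
```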
I think it matters to the artist, and it certainly matters to the legal system. Daniel brought up rights and livelihoods. We have seen some massive legal developments in the last year, haven't we? The courts are finally starting to weigh in on this.
Oh, absolutely. The landscape in twenty twenty-six is very different from twenty twenty-four. The big one, of course, is the ongoing litigation involving the major record labels—Sony, Universal, Warner—against the generative artificial intelligence giants. The core of the argument is about the training data. These models did not learn to make music in a vacuum. They were trained on the entire history of recorded music, much of which is under copyright. The labels are saying, "You cannot use our artists' work to build a product that then competes with those very same artists and replaces them."
It feels like the same battle we saw with Napster at the turn of the millennium, just in a different form. Back then it was about how music was distributed; now it is about how music is fundamentally created.
It is Napster on steroids. With Napster, the song was still the song. You were just moving a file. Now, the song is being broken down into mathematical patterns—vectors in a high-dimensional space—and used to spawn an infinite number of new songs. The legal defense from companies like Suno and Udio is usually based on "Fair Use." They argue that the models are transformative—that they are not just copying the music, but learning the underlying principles of music theory, style, and timbre, much like a human music student would by listening to the radio.
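A quick illustration of what "vectors in a high-dimensional space" means in practice: two tracks get turned into lists of numbers, and the model compares them by the angle between those lists. The eight-dimensional vectors below are toy values; real embeddings run to hundreds or thousands of dimensions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """How close two tracks sit in the learned space (1.0 = same direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 8-dimensional embeddings; real models use far more dimensions.
track_a = np.array([0.9, 0.1, 0.4, 0.0, 0.7, 0.2, 0.1, 0.5])
track_b = np.array([0.8, 0.2, 0.5, 0.1, 0.6, 0.3, 0.0, 0.4])

print(round(cosine_similarity(track_a, track_b), 3))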
But a human student can only listen to so many songs in a lifetime. An artificial intelligence can ingest everything ever recorded, from Gregorian chants to K-pop, in a single weekend. The scale of it feels like it should change the legal category. You can't compare a kid learning a guitar riff to a server farm absorbing the entire Spotify catalog.
I agree, and the courts are starting to see it that way too. Some jurisdictions are moving toward requiring licenses for training data. This would be a huge shift. It would mean these companies have to pay into a collective fund that goes back to the artists whose work was used to train the model. It is a bit like a mechanical royalty for the machine age.
That seems like a fair middle ground, but how do you even track that? If I generate a song that sounds vaguely like a mix of Queen, Radiohead, and a little bit of Dolly Parton, how do you decide who gets paid? Is it ten percent to the Freddie Mercury estate? Five percent to Thom Yorke?
That is the technical nightmare we are currently living through. We are looking at things like digital watermarking and "audio fingerprinting" that can detect the influence of specific training sets. There is even talk of using blockchain-based royalty tracking, though that is still pretty speculative. What is more likely in the short term is a "blanket license" model. The big labels partner with the artificial intelligence companies, give them access to their catalogs, and take a massive cut of the subscription revenue.
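Once an attribution system produces influence numbers, splitting the money is the easy part. Here is a minimal sketch of that last step; the influence weights below are entirely hypothetical, because producing them reliably is exactly the unsolved problem Herman is describing.

```python
# Hypothetical influence weights for one generated track, as an attribution
# system might report them. Producing these numbers is the hard, unsolved
# part; dividing the money once you have them is simple arithmetic.
influence = {
    "Artist A catalog": 0.10,
    "Artist B catalog": 0.05,
    "Everything else in the training set": 0.85,
}

def split_royalties(pool_cents: int, weights: dict[str, float]) -> dict[str, int]:
    """Divide a royalty pool proportionally to attributed influence."""
    total = sum(weights.values())
    return {name: round(pool_cents * w / total) for name, w in weights.items()}

print(split_royalties(pool_cents=1_000, weights=influence))
```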
Which, of course, mostly benefits the big labels and the massive tech firms, and not necessarily the independent artists or the session musicians Daniel is worried about.
Exactly. The "middle class" of music is the one in the crosshairs. The people making hooks for hip-hop producers, the people doing session work for commercials, the people writing jingles for local radio. Those jobs are being automated at a terrifying rate.
I want to go back to Daniel’s point about the tools that musicians are using right now, because it isn't all doom and gloom. He mentioned stem separation and hook generation. For those who do not know, stem separation is the process of taking a finished, mixed track and splitting it back into its individual parts—the vocals, the drums, the bass, the guitar.
And that is an area where artificial intelligence has been an absolute miracle for musicians. Before these neural networks, separating a vocal from a backing track was nearly impossible without the original master tapes. It always sounded muffled, like the singer was trapped in a tin can. But now, with tools like RipX or the latest updates to Logic Pro, you can get a studio-quality vocal stem from a recording made in the nineteen fifties in about ten seconds.
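The neural separators Herman names are proprietary, but the basic idea of pulling layers out of a finished mix has an older, open cousin in classic signal processing: harmonic-percussive separation. Here is a short sketch using the librosa library; the file names are placeholders, and this is not the neural-network approach used by tools like RipX or Logic Pro, just the simplest possible taste of the concept.

```python
import librosa
import soundfile as sf

# Load a finished, mixed track (path is just an example).
y, sr = librosa.load("mixed_track.wav", sr=None)

# Split the waveform into a harmonic layer (voices, guitars, pads)
# and a percussive layer (drums, transients).
harmonic, percussive = librosa.effects.hpss(y)

sf.write("harmonic_stem.wav", harmonic, sr)
sf.write("percussive_stem.wav", percussive, sr)
```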
And musicians are using that for incredible things. They are remixing old tracks, sampling things that were previously un-sampleable, and using it as an educational tool to hear exactly how a specific bassline was played. In that sense, artificial intelligence is empowering human creativity. It is giving us new ways to interact with the history of music rather than just replacing it.
It is. And even hook generation is being used by some songwriters as a way to break through writer’s block. They might generate twenty different melodies for a chorus, find one that has a weird interval they never would have thought of, and then throw away the machine version and rewrite it, play it on a real piano, and make it their own. It is like having a co-writer who never gets tired, never complains about the coffee, and has no ego.
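The "generate twenty choruses and keep the weird one" workflow is easy to caricature in a few lines. The melody generator below is a toy random walk over a scale and the "weirdness" score is made up; the point is only the shape of the process: over-generate, filter by some taste function, then a human takes over.

```python
import random

random.seed(7)

SCALE = [0, 2, 4, 5, 7, 9, 11, 12]  # C major degrees as semitone offsets

def random_hook(length: int = 8) -> list[int]:
    """A toy 'generated melody': random choices from the scale."""
    return [random.choice(SCALE) for _ in range(length)]

def weirdness(melody: list[int]) -> int:
    """Count unusual leaps (six semitones or more) between consecutive notes."""
    return sum(1 for a, b in zip(melody, melody[1:]) if abs(b - a) >= 6)

candidates = [random_hook() for _ in range(20)]
best = max(candidates, key=weirdness)
print(best, "weird leaps:", weirdness(best))
```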
So maybe the line of acceptability is about how much of the "heavy lifting" the human is doing? If I use artificial intelligence to give me a spark, but I play the instruments and I sing the lyrics, that feels acceptable to most people. But if I just click "generate" and put my name on it, that feels like a form of creative theft, or at least creative laziness.
But here is the uncomfortable truth, Corn. The audience often does not know, and increasingly, they do not care. If you go on the major streaming platforms right now, there are thousands of tracks in those "Chill-hop" or "Lo-fi Study Beats" playlists that are almost certainly artificial intelligence generated. They are designed to be background noise. The listeners just want a vibe. They don't care if a human sat in a room and felt sad to write that piano melody. They just want something to help them focus on their spreadsheets.
That is a bit depressing, Herman. It turns music into a commodity, like electricity or water. You just turn the tap and the "vibes" come out.
It is a shift in how we consume music. We are moving from music as an identity-defining art form—where you are a "Radiohead fan" or a "Swiftie"—to music as an environmental utility. If you are just looking for something to help you sleep or help you run faster, the origin of the sound matters less than the frequency response.
I wonder if this will lead to a massive resurgence of live music. If we get to a point where recorded music is so cheap and ubiquitous that it loses its perceived value, will the only thing people are willing to pay for be the experience of seeing a human being actually perform in real time? The "I was there" factor?
I think you are onto something there. We have seen this cycle before in art history. When photography was invented in the nineteenth century, people said it would be the death of painting. Why would anyone pay for a portrait when a camera can capture a likeness perfectly in seconds? But instead, it pushed painting to become more abstract, more expressionistic, more about the "touch" of the artist. It made the physical act of painting more valuable because it could not be replicated by a machine.
So artificial intelligence might push music to become more human? More about the raw energy, the sweat, the mistakes, the things that a probability model cannot quite capture?
That is the hopeful view. We might see a move away from the hyper-polished, perfectly gridded, pitch-corrected pop music that has dominated the last twenty years, because that is the easiest thing for an artificial intelligence to mimic. Maybe we will go back to raw, live-to-tape recordings where you can hear the room, the chair squeaking, and the slight imperfections that tell you a person was there.
I love that idea. But let’s talk about the practical side for a second. Daniel mentioned that Suno is being marketed to casual users. We are seeing this trend of "democratizing creativity." Is that a bad thing? If someone who has no musical training—maybe someone with a disability that prevents them from playing an instrument—can finally express an idea they have had in their head for years, is that not a net positive for the world?
It is a double-edged sword. On one hand, yes, more people expressing themselves is wonderful. But there is also the problem of the signal-to-noise ratio. If everyone on earth is generating ten songs a day, how do we find the things that are actually meaningful? We are already being flooded with content. The human brain did not evolve to filter through a billion songs a year.
We are going to need artificial intelligence to help us filter the artificial intelligence music. It’s a loop.
We already do! That is what the recommendation algorithms are. They are artificial intelligences deciding which human or non-human music you should hear. We are already living in that hall of mirrors.
You know, I was thinking about the Jerusalem music scene while we were talking. We have so many incredible musicians here, people who play these ancient instruments like the oud or the kamancheh. Those instruments are all about the microtones—the tiny, subtle shifts in pitch that are not on a standard Western piano scale. I wonder how well these models handle that kind of cultural specificity.
Right now, honestly, not very well. Most of these models are trained on a very Western-centric diet of pop, rock, and jazz. They struggle with non-Western scales, like the Maqam system here in the Middle East, or the complex rhythmic cycles of Indian classical music. But that is changing. As more diverse data is added, the models will get better at mimicking those styles too.
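The microtone point is easy to see in numbers. Western equal temperament moves in 100-cent steps; many maqam intervals land in between, with the neutral third often described as sitting roughly halfway between the minor and major third (the exact tuning varies by tradition). A quick sketch, with illustrative values only:

```python
import math

def cents_above(freq_hz: float, cents: float) -> float:
    """Frequency of a pitch the given number of cents above a reference."""
    return freq_hz * 2 ** (cents / 1200)

c4 = 261.63  # approximate C4 in standard tuning

print(round(cents_above(c4, 300), 2))  # Western minor third
print(round(cents_above(c4, 350), 2))  # an in-between, maqam-style neutral third
print(round(cents_above(c4, 400), 2))  # Western major third
```

A model trained almost entirely on 12-tone material has simply never had to predict that middle value, which is one reason those styles come out flattened.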
And that brings up the question of cultural appropriation. If an artificial intelligence company in California trains a model on the traditional music of a specific indigenous group without their permission, and then sells a tool that lets anyone in the world generate that music for a fee, who owns that? Who gets the credit?
That is a massive ethical minefield. We are seeing a lot of discussion in twenty twenty-six about "Data Sovereignty." The idea that communities should have collective control over how their cultural heritage is used to train these models. It is not just about individual copyright; it is about the collective ownership of a tradition that took a thousand years to build. You can't just "scrape" a culture and sell it back to them.
It feels like we are at this moment where the technology is sprinting ahead of our legal and ethical frameworks. We are trying to apply nineteenth-century copyright laws to twenty-first-century neural networks. It’s like trying to regulate a fusion reactor with rules written for a steam engine.
We are definitely playing catch-up. And the pace of improvement is what really blows my mind. Daniel mentioned that Suno was just a curiosity a couple of years ago. Now, we are talking about real-time generative music. There are startups now working on music that changes based on your heart rate, your GPS location, or the weather outside.
Wait, like a dynamic soundtrack to your actual life?
Exactly. Imagine you are going for a run, and as your pace increases, the music automatically builds in intensity and tempo. Or you are walking through the Old City here in Jerusalem, and the music becomes more atmospheric and incorporates local sounds as you get closer to the market. That is a completely new kind of musical experience. It is not a static song; it is a living, breathing composition that only exists for you in that specific moment.
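A sketch of what the control layer of that kind of adaptive system might look like: sensor readings in, generation parameters out. The parameter names below are invented for illustration; a real adaptive engine would expose its own controls.

```python
def music_params(heart_rate_bpm: float) -> dict[str, float]:
    """Map a runner's heart rate onto hypothetical generation parameters."""
    # Clamp to a plausible range, then scale linearly.
    hr = max(60.0, min(heart_rate_bpm, 180.0))
    fraction = (hr - 60.0) / 120.0             # 0.0 at rest, 1.0 at full effort
    return {
        "tempo_bpm": 90 + 70 * fraction,        # 90 bpm strolling, 160 bpm sprinting
        "intensity": fraction,                  # drives layering and dynamics
        "percussion_density": 0.3 + 0.7 * fraction,
    }

print(music_params(72))    # easy walk
print(music_params(165))   # hard run
```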
That sounds incredible, but also a little bit lonely, Herman. You can’t share that experience with anyone else. It is a private concert for one, tailored by an algorithm.
That is the trade-off of the twenty-twenties. We gain personalization, but we might lose the shared cultural touchstones. Everyone used to listen to the same top forty radio stations. Now we have our own algorithmic bubbles. Generative music will just take that to the extreme. We will all be the protagonists of our own private movies, with our own private soundtracks.
So, where do we draw the line? Daniel asked where the line of acceptability is. For me, I think it comes down to transparency. I don’t necessarily mind if a track is artificial intelligence generated, but I want to know. I want it to be labeled, just like we label genetically modified food.
Labels are a good start, but they are incredibly hard to enforce. There is a lot of pressure right now for streaming platforms to require an "AI-Generated" tag. But what if a song is ninety percent human and ten percent artificial intelligence? What if the drums are generated but the vocals and guitar are live? Is that an "AI song"? Where do you set the percentage?
It is the blurry line again. Maybe we need a "Nutrition Label" for music. Percent human, percent machine, percent recycled samples.
I can see the marketing now. "One hundred percent organic, grass-fed human music. No algorithms used in the making of this record."
Jokes aside, I think for creators like Daniel, the line of acceptability is about intent. If you are using it to solve a practical problem—like finding royalty-free music for a video about onions—it is a tool. If you are using it to deceive people into thinking you have a talent or an emotional depth that you haven't actually put the work into, that is where it gets murky.
But isn’t that what all music technology has done? Before the piano, you had to have a whole orchestra to get that kind of harmonic range. Before the synthesizer, you needed a room full of musicians to get those otherworldly sounds. Every new technology makes it easier for one person to do the work of many. We are just reaching the logical conclusion of that trend.
That is true. But those technologies still required a physical interface. You had to learn to play the piano keys. You had to learn how to patch a modular synth. With generative artificial intelligence, the interface is just language. You just say what you want. It removes the physical and technical barrier between the idea and the sound.
And maybe that is what we are actually mourning. The loss of the struggle. There is something about the effort required to make music that we find inherently valuable as a species. When the effort goes to zero, the perceived value feels like it goes to zero too. If anyone can make a masterpiece by typing a sentence, is it still a masterpiece?
That is a deep point, Herman. If masterpieces are everywhere, then nothing is a masterpiece. It becomes a commodity.
And commodities are cheap. I think we are going to see a massive devaluation of recorded music, which will force the industry to find new ways to create value. Maybe that is through limited edition physical releases, or exclusive live experiences, or even some kind of digital provenance—knowing exactly who made this and why.
It’s funny you mention digital provenance. I remember the whole N F T craze a few years back. It feels like a lifetime ago now, but the underlying idea of tracking rights and ownership on a ledger... maybe that was just ahead of its time?
The hype was definitely ahead of the utility. But the underlying technology for tracking rights—that part still has potential. If we could link every artificial intelligence generated track back to the specific data it was trained on, we could actually have a functional royalty system. But that would require the tech companies to be transparent about their "black box" models, which they are currently fighting tooth and nail against.
Because they know that as soon as they admit what is in the box, the lawsuits become much easier to win. It is a high-stakes game of chicken right now between the tech giants and the entertainment industry.
While we are on the topic of the industry, I saw a report recently about how film and television composers are reacting. A lot of them are terrified. Background scores for television procedurals are one of the last reliable ways for a composer to make a middle-class living. And that is exactly the kind of thing these models are getting really good at.
So, if you are a composer in twenty twenty-six, how do you compete? You have to become more than just a provider of sound. You have to become a creative partner. You have to bring a perspective and a human story to the table that isn't just a formula.
You have to be idiosyncratic. I think that is the takeaway for any artist in this new era. Don’t try to out-compete the machine on efficiency or polish. You will lose. Compete on the things the machine doesn't have—a life, a history, a set of values, and a specific, weird point of view.
Like the onion song! The machine made a great march, but it was a human—Daniel—who had the weird, specific idea to make a song about an onion in a pan. The spark of the idea was human. The machine was just the execution.
Exactly. The prompt is the art. In a world of infinite, instant generation, the person who knows what to ask for—and why—is the one who holds the power.
So, Herman, have you actually tried making any music with these tools yet? Besides listening to the onion march on repeat?
I have, actually. I tried to use Udio to make a progressive rock epic about the history of the Hebrew language. It was... interesting. It had some cool moments, some neat synth parts, but it also felt a bit disjointed. It lacked that sense of a single mind guiding the journey over twenty minutes. It felt like a collage of ideas rather than a coherent piece of work.
Maybe that is the current limit. It can do the short, punchy stuff—the jingle, the hook, the three-minute pop song—but it struggles with long-form storytelling and complex emotional arcs.
For now. But I wouldn’t bet against it. Every time we say, "the artificial intelligence can’t do X," six months later there is a research paper showing it doing exactly X.
It’s a wild time to be alive, especially as a creator. I think for our listeners, the best thing they can do is just experiment with these tools. See what they can do, but also stay aware of where the sound is coming from. Support the human artists you love. Go to the small shows. Buy the physical merchandise. Because that human connection is what is going to matter most when the world is flooded with perfect, machine-made songs.
Well said, Corn. And if you are a creator using these tools, be transparent about it. Let your audience in on the process. People value honesty more than they value perfection.
Absolutely. Well, I think we’ve covered a lot of ground today. From the humble onion to the future of global copyright law and the "ghost in the machine." It is a lot to chew on.
It really is. And I’m probably going to have that marching tune stuck in my head for the rest of the week. Onion in the pan... it’s relentless.
Oh, I definitely will. It’s a banger, as the kids used to say back in twenty twenty-four.
Do the kids still say that? I feel like we are officially the "old guys" now, Corn.
Probably. But that is the beauty of being brothers. We can be out of date together.
Fair enough.
Before we wrap up, I want to say a huge thank you to Daniel for sending in this prompt. It really got us thinking about the bigger picture of how we value creativity. And to our listeners, if you have a weird prompt of your own—whether it’s about artificial intelligence, history, or something else entirely—we would love to hear it. You can reach us at show at my weird prompts dot com.
And if you are enjoying the show, we would really appreciate it if you could leave us a quick review on your podcast app or Spotify. It genuinely helps other people find the show and keeps us going.
Yeah, it makes a big difference. You can also find all our past episodes and a contact form at my weird prompts dot com. We are available on Spotify, Apple Podcasts, and pretty much everywhere else you get your audio.
This has been My Weird Prompts. We are coming to you from Jerusalem, and we will be back soon with another deep dive into the strange and fascinating world where technology and humanity collide.
Thanks for listening, everyone. Take care of each other, support your local musicians, and keep those prompts coming.
Goodbye, everyone!
Bye!