

Image credit: Possessed Photography via Unsplash
A few years ago, conversations about robots coming for our jobs felt like light banter for those of us in the business of making music-for-media. In 2025, shit got real.
Last year, generative AI music platforms went from novelty to scarily impressive. We all heard the viral retro-soul remakes of big hits, saw AI “bands” racking up several million streams on Spotify, and witnessed an AI song top a Billboard chart for the first time.
I don’t think anyone with a functioning set of ears would say that AI is making music with the creativity or true artistry of human beings. But the evolution of the tech last year definitely had us sitting up a bit straighter in our chairs. AI music became a strange, ubiquitous presence, and a not-so-subtle reminder that the ground beneath our feet was starting to move.
At the same time that a global trade war and an ad industry mega-merger were rattling nerves and causing some real pain across our industry, AI anxiety was reaching a fever pitch. The killer music robots were no longer theoretical. They were an impending, existential threat to our businesses. And the idea that our clients might soon start generating good-enough AI music for some of their campaigns, rather than reaching out to us to shoot for something truly great, began to feel like a near-term inevitability.
There were, though, a couple of precarious walls between us and the onslaught of the machines. Firstly, the most impressive gen AI music was definitely not “brand-safe”: the two leading platforms - Suno and Udio - had trained their models on most of the recorded music available across the entire internet, and were being sued by the three major record companies, amongst others. There were other gen AI platforms that were arguably “brand-safer”, having trained on licensed catalogues, but their music was mid-tier at best (more on this later). Secondly, from a client perspective, shaping gen AI music into something properly customized for a high-end campaign still demanded real time, taste, and skill.
Despite these barriers, the level of fear in our industry was palpable. Then the story began to shift.
Many in our industry assumed the big three record companies would eventually move in lockstep, settle with Suno and Udio at roughly the same time, agree on some form of revenue share, and allow the platforms to remain functionally intact, if perhaps brand-safer around the edges. But what unfolded in Q4 of last year reframed the conversation.
Warner, one of the big three, announced unilateral deals with both Udio and Suno that signaled a clear shift away from the broad, unlicensed training that made their output so compelling, and toward new, licensed models built from label-controlled recordings whose artists explicitly opt in.
Universal has since followed suit with its own similar deal, reinforcing that this is now the direction of travel for the category as a whole.
In plain terms, the versions of these platforms that blew everyone’s minds are on borrowed time. What comes next will be shaped by newly licensed datasets, artist opt-ins, and a reset of how these models are trained. For users who got used to the range, speed, and musical fluency of the earlier models, that transition may feel like a step back, qualitatively. That could have real implications for how these tools perform in the hands of brands, agencies, and music creators.
It’s also worth stating this plainly: training on licensed datasets does not automatically equal fully brand-safe output. For brands, “safe” means well-defined authorship, clean chain of title, and indemnification. Even in the era of major-label deals, those protections are not yet meaningfully on offer from the leading gen AI music platforms.
But beyond the legal math, these new deals raise a key qualitative question: will the “retrained” versions still have the wow factor that they sometimes served up in their “train on everything” heyday?
Suno and Udio haven’t technically been the only games in town. I mentioned above that there are other gen AI platforms that may have been brand-safer, trained only on owned or licensed catalogues - but those catalogues generally haven’t included any of the iconic artists we all know and love. Having tried many of them, here’s my blunt take: they’re not very good. That’s not a knock on their tech or their ethics. It’s a reflection of their inputs - or lack thereof. Suno and Udio, for better or for worse, essentially stood alone with respect to output quality in their “train on everything” era.
This raises another big question: just how important are genre-defining, culture-shaping recording artists to the training datasets of gen AI music platforms? Without broad exposure to the icons, scenes, and movements that shaped modern music, might even the best AI output start to feel more… beige?
Even with deals now in place with multiple major labels, access isn’t automatic. As of now, very few major artists or estates have publicly confirmed that they’ll be opting in. Within the Warner system alone, none that I’m aware of has definitively confirmed they’re game - not Charli XCX, Dua Lipa, Bruno Mars, Cardi B, Prince, Fleetwood Mac, Green Day, Madonna, Led Zeppelin, Ramones, Otis Redding, Aretha Franklin, Frank Sinatra, Ray Charles… I could go on and on.
And that’s before you get to the long list of genre-defining artists outside of those deals, or artists who may simply choose not to participate at all. Whether it’s Beyoncé, Kendrick Lamar, Bad Bunny, Billie Eilish, Jay-Z, The Beatles, Elvis Presley, Michael Jackson, Stevie Wonder - you get the idea.
The reason the “train on everything” platforms had us gasping and clutching our pearls every few weeks at how relatively dope some of the gen AI music was getting - while the brand-safer platforms paled in comparison - came down to one thing: the former trained on pretty much all of our genius, messy, virtuosic, unpredictable, genuine artists.
My hunch is that if you start removing even a chunk of that material from their training datasets, the output could feel much further removed from the best of human musicality.
You don’t get drums that swing like that, a horn section that pops like that, or vocals that soar like that by accident - or by training on anything less than the best music by the best artists the world has ever known. Those things aren’t abstractions. They’re the accumulation of very specific musical decisions, made by very extraordinary people, over time.
This isn’t to suggest that models trained on licensed datasets won’t eventually get more inclusive and qualitatively richer - they almost certainly will - but the path and timeline are fuzzy.
Whether major artists can be lured in at scale, through label pressure or financial incentives, is still an open question. And there’s no clear public information about what artists actually get in return.
Let me be clear. I’m not arguing that some grand war against the music robots has been won, leaving them forever in the “friend-zone”.
I’m writing this at a snapshot in time, about a subject that’s moving at breakneck speed. So while the general anxiety about whether gen AI music will have our heads on a pike by spring has eased a bit, the situation remains fluid, and we’re still very early in understanding the longer-term effects these platforms will have on companies like ours, and on music creators more broadly.
The dust will settle at some point on the whole Suno/Udio/labels saga and the related legal questions. New AI music platforms and tools will continue to emerge, and brands, agencies, and creators will keep finding new ways to use them. Our collective taste will evolve, and the line between human-made and AI-generated music will blur.
But even if we get to a point where gen AI music is genuinely great and fully brand-safe, I doubt that will magically solve the hardest part of our job. Music still has to be interpreted, shaped, and pushed until it actually works for a specific brand, a specific moment, and a specific audience.
That’s the part of the process we spend most of our time on, working closely with incredible artists and collaborators to turn raw ideas into something that really resonates.
It’s hard for me to imagine that kind of creative instinct coming purely from datasets and prompts. I suspect a lot of it will still come from real conversations, real ears, real taste, and real people who’ve spent years living their craft.
So, for now, I’m feeling pretty good about the idea that there will continue to be creative folks at brands and agencies who recognize and value that difference. Optimistic? We shall see.