

Image credit: OpenAI
On September 30th 2025, OpenAI dropped Sora 2, its flagship model for generating video and audio. Regardless of your role in the industry, the update was impossible to ignore – and it was a big one. Overnight, filmmakers, agencies, and curious creatives saw both the promise and the peril of what had just landed. Could this be the moment generative AI truly demands a redefinition of the creative pipeline?
Building on the foundations of its first iteration – released last year – Sora 2 promises “audio generation, more natural movements, cameo features, and improved safeguards”, solving many of the issues that plagued its predecessor. But that doesn’t guarantee it will transform things for the better. As critics warn, we could soon be bombarded by dazzling but soulless AI-generated content. Others, though, see this as a secret weapon for rapid ideation and visual storytelling at speed.
LBB’s April Summers asked a cross-section of industry insiders, from executive producers to founders of AI studios, where they see the breakthrough, the pitfalls, and whether Sora 2 can transcend being another flashy tech trend.
Orlando Wood, CEO of LA-based Koobrik Inc., was quick to share his thoughts on the new AI video generator on LinkedIn, so I decided he would be a good person to direct my first questions to. A former producer turned tech consultant, Orlando founded KoobrikLabs, a ‘consultancy and engineering studio that builds AI and IT tools for creative companies’, to help the industry transition.
Despite very much having a horse in the race, he tells me he is generally unimpressed by Sora 2. “I actually can’t believe I’m so blasé about a tool that can make video in 90 seconds from a prompt,” he admits. “The threshold for what can amaze us has changed significantly in the last two years. A year ago, this would have thrilled and excited me. Now, it just feels like par for the course.”
Orlando also admits that while Sora 2 currently appears to be churning out yet more AI slop, the quality of its video and audio generation gives the software the potential to totally transform the state of play in the world of production. “Creative development will fundamentally change and production pipelines will change for certain categories of work,” he begins. “It’s going to completely change how ideas are developed, tested, and communicated. Right now, previsualisation and concept development are expensive, time-consuming processes that rely on storyboards, animatics, or rough CG. With video models, a director, producer, or agency creative could block a scene, explore tone, or iterate on visual style in hours instead of weeks.”
He predicts that such developments could fundamentally alter timelines and lower the barrier to what he calls ‘high-fidelity ideation’. “Smaller teams will suddenly be able to work at a scale previously reserved for major studios. That democratisation won’t replace creative leadership, but it will shift where creative energy gets spent: less on logistics and more on iteration and refinement,” he says.

OpenAI CEO Sam Altman in Sora 2 promo film (image credit: OpenAI)
With this foresight in mind, Orlando believes that Sora 2 is no match for human creativity. “These video models won’t replace filmmaking just like photography didn’t kill painting,” he says. “It created its own category and, if anything, kicked off the impressionist movement. When a technology can produce still lifes better than any painter alive, the painter discovers his inspiration somewhere on the horizon. Where, we can’t yet tell. And, believe it or not, that’s good news. There will be many creative unintended consequences, but I don’t think any of them will be the cessation of human creativity.”
Rich Rama, EP and partner at US production company JANE, is similarly hesitant about throwing in the towel. After 25 years in the business, across both production and post-production, Rich has been trained to look for imperfections, and points out how Sora still has a way to go. “I would definitely catch that there was something different about it,” he says, after I share a recent example of a Sora 2-produced mock commercial with him. “Whether it’s the artefacts that AI creates, the aliasing, the ‘compression’ look, the audio being just slightly off… there are just things that bring up red flags.”
Rich knows his gaze is a lot more beady-eyed than the average consumer, and acknowledges that the newest Sora is undeniably ‘freaking amazing.’ “I keep hearing people call this a ‘tool’, and sure, it’s a tool, but it’s definitely a game changer of a tool that everybody needs to be watching and, if they have the capacity, to learn, because it's only going to get better.”
Toby Walsham, founder and CEO of AI studio Made By Humans, has also been observing the Sora 2 rollout closely. And, despite ‘living and breathing AI for years’, the latest app has had him second-guessing more often than he would like to admit. “What really impressed me with this update is the physics,” he explains. “There’s a clip floating around of a bear tumbling off a porch and the way the animal twists, catches itself, and finds its footing is astonishingly natural. In earlier tools you’d expect limbs clipping through each other or some bizarre morphing effect as the system tried to recover. Here, it looks like gravity actually matters. That’s a huge leap forward.”
Felipe Machado, studio director at Silverside AI, has also picked up on the improvement. “Physics and movement feel better than their earlier models which is a meaningful step forward,” he echoes. “Of course, there are still limitations: character and token ceilings mean long narratives require careful planning, and quality isn’t flawless yet compared to other models, as flickering and morphing can appear. And while the consumer app is 720p, the API already goes higher.”
Reflecting on its new features, Felipe’s highest praise goes to the system's ability to “prompt an entire film – with cuts, sound, everything – all in one go” which he believes is the most interesting development. “This feels like the closest step we’ve seen toward generating full, long-form narratives from a single prompt,” says Felipe. “You can now call out timing, camera, style, even sound cues, and the model mostly respects it.”
Prompt: Martial artist doing a bo-staff kata waist-deep in a koi pond (video credit: OpenAI)
The studio director explains that this capability will prove to be especially powerful at scale, for example, when producing hundreds or even thousands of assets. “The editing it delivers is impressively coherent (even if a little formulaic), which will raise the bar for general creators and amateur content,” he reveals. “But for high-profile work, where originality and emotional precision matter, we’ll still see artists crafting and refining individual shots so the human eye can decide the perfect cut and rhythm.”
When Felipe brings up the need for human input, I wonder whether this is an opinion shared by everyone I've spoken to. So, I ask them, does Sora 2 pose a genuine direct threat to human creativity? Are we writing ourselves out of the narrative by embracing these tools? The response is near-unanimous: not a chance.
“Nothing can truly threaten human creativity, it’s too resilient,” Toby reassures me. “Even in the harshest environments, artists find ways to respond, improvise, and rebel. That intangible spark isn’t something you can package into a software feature. AI is brilliant at pattern-matching and remixing, but genuine creative leaps come from instinct, emotion, lived experience, or even happy accidents – things machines don’t have.”
In the 20-minute-long Sora 2 promo film, OpenAI CEO Sam Altman tells viewers the AI-powered video generator is all about “creativity and joy”, and that the latest iteration of the app is “the most powerful imagination engine ever built.” But, as Toby told LBB editor Laura Swinton Gupta a little over a month ago, “AI is not just prompting and writing something amazing. It’s not copywriting.”
The Made By Humans founder stands strong on this belief that AI’s potency depends entirely on who is behind the keyboard. “The ‘anyone can be an artist’ pitch is just clickbait or good marketing,” he asserts. “Sure, Sora and other platforms lower the barrier to entry, and that’s powerful. But lasting work requires iteration, taste, judgment, restraint, and intention – skills built over time. In advertising, there’s the added challenge of managing feedback and keeping consistency under pressure. AI can produce a scene from a prompt, but our artists, many of whom were directors or photographers before AI, bring years of experience in problem-solving and storytelling. They’re not operators pressing buttons – they’re directors shaping the work with vision and intent.”
Clair Galea, partner and executive producer at creative agency Courage, reiterates Toby’s point, doubling down on the crucial importance of human involvement – and artistic humans at that. “When tools like Sora 2 are used without intention or curiosity, they’ll inevitably produce disposable, trend-chasing content,” she says. “But when they’re guided by a real human perspective, they can actually spark new creative possibilities. So far, Sora 2 acts as a mirror for creativity. The software reflects the input and imagination behind it. In other words, what determines whether it adds to ‘AI slop’ or to culture is who’s using it and why.
“AI doesn’t threaten creativity – indifference does,” she concludes.
A new sharing platform has been unveiled as part of the Sora 2 launch, which Felipe believes could ‘flood feeds’, but certainly won’t replace human creativity altogether. “As execution gets faster, direction, taste and cultural intelligence still decide what matters,” he points out. “I’ve already seen great artists doing surprising, playful things only days after launch. And personally, as a Brazilian, I know what happens when Latin America gets hold of new creative tools: meme culture goes wild, internet culture gets inventive fast. I expect an explosion of new formats, new humour, new visual language. So yes, there will be brain-rot content, but I’m optimistic — this is fuel for creativity, not the end of it.”
Rich and Orlando both also weigh in on the social-media-fication of it all, likening it to the early days of other social media apps. “It reminds me of Instagram back in 2012,” says Rich. “It was around then that creators started to morph into influencers within the platform. And it feels like that will happen with Sora 2, too.”
“I’m reminded of how it felt in the early days of YouTube, Facebook, Friendster, Myspace, and Instagram,” echoes Orlando. “Nobody was any good at the format and it all felt a little pointless. But we mustn’t forget that once creators came into the ecosystem they did new and unexpected things with the technology. Instagram is nearly unrecognisable from what it was in 2014. YouTube in 2008 was nothing like what it is today. And that’s down to creators who understand the medium and are able to express themselves within it. When the creators come to the platform, then we’ll know what this thing really is.”
So far, it has to be said, the output has been largely nonsensical, and I am curious whether this could dampen consumers’ appetite for content altogether. Could ‘mass slop’ result in a mass social media exodus?
“On average we scroll about 500 metres on our phones every day, that’s a lot of slop to wade through,” Toby points out. “Personally, I might stumble across one or two things of genuine interest, and the rest is just perfect doom-scrolling material for the bathroom break.
“So, I don’t think ‘mass slop’ will cause a social media exodus,” he concludes. “If anything, we’re already addicted to the noise. But what it might do is sharpen our appetite for the rare things that do feel real, surprising, or human. In a sea of sameness, authenticity becomes a premium. And that’s where both live action and thoughtful AI work will have real value – not in producing more, but with clever creative that cuts through the slop.”
Toby’s point – that this new wave of TikTok brain rot will fine-tune consumer appetites and attention spans for more highly produced, high-quality creative – is one Orlando raises too. “Sora 2 will change how attention is distributed. We’re already living in a world where AI doesn’t just generate content, it filters it as well, and that’s key. Social platforms have shifted from ‘social media’ to ‘interest media.’ Most people don’t even see their friends’ posts anymore; they see what the algorithms decide they’ll like. If audiences get bored of AI sludge, it will simply get pushed down the feed. It’ll still exist, clogging up the cloud, but it won’t dominate attention if it’s no good.”
“That said, the sheer volume of content will fragment attention further,” he adds. “When you can generate infinite, frictionless media, people scroll faster. We’ve already seen this with TikTok and Shorts – Sora 2 could accelerate that trend dramatically. Instead of a single 'mass audience' we’ll see even more 'micro-niches', each served by highly personalised algorithmic feeds. Some users may opt out entirely, but it’s more likely we’ll see increased stratification: some will retreat to curated, human-made spaces, while others immerse themselves in algorithmic streams.”
“In the end, attention follows incentives and algorithms follow attention,” Orlando warns, somewhat forebodingly.
Video credit: OpenAI
After all the big-picture theorising, I want to know how these studios actually plan to use Sora 2, both in principle and in practice. Across the board, their answers are surprisingly pragmatic. For most, it’s less about surrendering to the machine and more about reframing what “the tool” can be.
Felipe sees the update as an extension of his existing workflow, as opposed to a revolution. “Every model has its own strong suits, and we go after whatever each one does best,” he reminds me. His team is already experimenting with Sora 2’s ability to “feed an entire hand-drawn storyboard into the model and have it interpret and turn that into a film”, a leap he says could accelerate rapid mock-ups, ideation, and creative exploration.
That same sense of experimentation comes up when I check in with Stefani Kouverianos, executive producer at COMMON GOOD, who describes Sora 2 as “a new kind of paintbrush”, not a replacement for craft, but a catalyst for new kinds of thinking. “When used effectively, AI can act as a ‘second you,’ a proxy with your taste level and tone of voice, backed by the collective knowledge of the internet,” she explains. “But it’s hard to imagine it replacing humans telling human stories.”
This sentiment is then echoed by Clair, who sees Sora 2 as a chance to push boundaries, not erase them. “If implemented thoughtfully, this technology could transform the production process by accelerating early-stage visualisation, helping creatives test and iterate ideas more efficiently, and giving teams more freedom to explore,” she says.
That being said, Clair also issues a quiet warning: speed isn’t the same as substance. “Tech can make production faster or more accessible,” she reminds me, “but it doesn’t inherently make the work better.” I see her point. In a landscape where anyone can generate infinite ‘content,’ discernment becomes the real currency.
Across all these conversations, a common thread emerges: AI’s power lies in amplifying the human process. And, as Stefani puts it, this might be the defining tension of our creative era. “When future generations look back, art and culture will be our legacy – the record of who we were and what mattered to us. AI will be part of that story, a reflection of our ingenuity and curiosity. But it’s just that: a reflection.”
She pauses, then adds what feels like the real thesis of this entire conversation: “The human impulse to tell stories, to find meaning, to connect – that’s not a system you can train.”