Is video the next frontier for generative AI?

AI startups want to make inroads in Hollywood, but the technology still has its limits.

A little over a year ago, dozens of technologists and video production professionals met in a two-screen movie theater in Manhattan for the startup Runway’s first-ever film festival for videos produced with AI.

Darren Aronofsky, director of films like Black Swan, Requiem for a Dream, and The Wrestler, raved about the tech’s potential to change moviemaking, and a series of short films exhibited the power of AI visual effects.

But a lot has changed in the months between then and the upcoming second iteration of the event in May. Video-generation models like Runway’s Gen-2, Google’s Lumiere, and OpenAI’s Sora have reached new levels of realism, and the labor movement in Hollywood notched two big wins regarding AI’s role in the creative process.

“Last year, we had a really great [festival] that really sort of interrogated the future of filmmaking, looking at the tools that existed at the time and extrapolating that further, and so we’re really excited for the conversation that’ll happen this year, given how much has come to fruition already in the last 12 months,” Runway’s head of creative, Jamie Umpherson, said.

As generative AI that can produce realistic-seeming copy and imagery has rapidly evolved over the past couple of years, some in the field have come to see models that can generate video out of whole cloth as a natural next frontier. But while still in its earliest stages, the tech is already raising questions about training on copyrighted material, labor, and the potential for disinformation.

Fast-forward

A fresh crop of tools has brought video generation to new heights in the last several months. Runway, which has long offered a hub of AI-powered creative tools, released a major update to Gen-2, its text-to-video and image-to-video generation tool, last November. In January, Google unveiled a similar video AI tool, Lumiere, which it claims offers more visual consistency.

Generated with Runway Gen-2, which was given the prompt "a sailboat moving down a river."
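
For readers curious what driving one of these tools programmatically might look like, here's a minimal sketch of a text-to-video request loop. The endpoint, parameter names, and response fields are hypothetical stand-ins invented for illustration; Runway's and OpenAI's actual APIs differ, and access to some of these models remained limited at the time of writing.

```python
import time
import requests

API_URL = "https://api.example-video-ai.com/v1/generate"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

def generate_video(prompt: str, timeout_s: int = 300) -> bytes:
    """Submit a text-to-video job and poll until the clip is ready."""
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # Kick off the generation job (parameter names are illustrative).
    job = requests.post(
        API_URL,
        headers=headers,
        json={"prompt": prompt, "duration_seconds": 4, "resolution": "1280x768"},
        timeout=30,
    ).json()

    # Poll the job until it succeeds, fails, or we give up.
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = requests.get(f"{API_URL}/{job['id']}", headers=headers, timeout=30).json()
        if status["state"] == "succeeded":
            return requests.get(status["video_url"], timeout=60).content
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(5)
    raise TimeoutError("video generation did not finish in time")

if __name__ == "__main__":
    clip = generate_video("a sailboat moving down a river")
    with open("sailboat.mp4", "wb") as f:
        f.write(clip)
```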

OpenAI then amazed many experts with the quality of the video its new model, Sora, can purportedly produce, according to Gartner Distinguished VP Analyst Arun Chandrasekaran.

“[Sora] was something that really caught people by surprise,” Chandrasekaran said. “To be very candid with you, we kind of thought that what we’re seeing with Sora are the kinds of things that we would see in 2025 and beyond.”

Other startups, like Pika Labs, Stability AI, and Lightricks, have also unveiled AI video-generation tools in recent months.

Still, when asked about the technical limitations of AI video tools, Chandrasekaran said they seem to suffer from some of the same shortcomings as image generators: Rendering text within a frame is still a mess, for example; the amount of training data available without copyright issues is likely limited (many of these companies have been cagey in interviews and research papers about revealing their training data); and the models require a massive amount of processing power. Neither Google nor OpenAI has yet made its model publicly available beyond a select few users.

Headed to Hollywood

Bloomberg reported that OpenAI has scheduled a series of meetings with Hollywood power players in Los Angeles this week as the startup seeks to encourage filmmakers to use Sora in their work.

AI has become a contentious issue in the film industry, especially since last year’s Writers Guild of America and SAG-AFTRA strikes brought questions around job replacement and credit for work to the forefront.

On Monday, OpenAI published a blog post featuring some of the first videos produced by third-party creators in Sora’s pilot program. In a post on X, one former Stability AI exec responded to the announcement by accusing the company of “artistwashing,” which he defined as soliciting positive publicity from hand-picked creators while “training on people’s work without permission/payment.”

OpenAI isn’t the only AI startup looking to break into Hollywood. Runway has marketed certain tools that automate parts of the video editing and postproduction process to professional filmmakers. The company’s platform was notably used for VFX in Everything Everywhere All at Once, winner of the 2023 Academy Award for Best Picture.

Generated with Runway Gen-2, which was given the prompt "a person running on a track."

Runway is working to keep an open dialogue with artists as it forges further into video AI, Umpherson said. That means "having conversations with artists at all levels," he said, noting that the company runs a creative-partners program designed to facilitate feedback and "help us understand which directions to push."

“The entire intention here is to augment the creative process,” Umpherson said. “For us, it’s really important to have conversations and work with artists to get the tools into their hands so they can start to experiment with them, understand what they’re capable of, and what the current constraints are.”

Generated with Runway Gen-2, which was given the prompt "a cat running on the street."

What’s the use?

Given that AI video tools are still in their early stages, one might wonder what role they can actually play in the filmmaking process at this point. Lightricks, the company behind apps like Facetune and Videoleap, has pitched its suite of video AI tools as a way for creators to storyboard and ideate concepts.

Lightricks CEO Zeev Farbman said the company has clocked interest from moviemakers trying to secure funding by pitching producers on a vision, as well as from marketing agencies looking for less-expensive ways to create pitch concepts.

Farbman said that while video AI has often struggled to credibly convey, say, simple human interactions or emotions, it also tends to excel at otherwise expensive tasks like rendering the motion of waves and flames. “The surprising thing here is that some of the things that are easiest for AI are actually things that were historically very challenging for a classical 3D pipeline,” Farbman said.

Further down the line, Chandrasekaran imagines uses beyond the entertainment and media industry, like marketing and sales videos or simulations to train self-driving cars. Many autonomous vehicle companies already use algorithmically generated synthetic data to train their systems.

“You’re now able to create simulation videos by using AI, that can actually be part of the training dataset to make these analytics algorithms better,” Chandrasekaran said. “For example, trying to imagine how San Francisco would look like at 4am in a cyclone is really hard…but now you can actually envision how that [looks].”
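
To make that idea concrete, here's a small sketch of how generated clips might be mixed into a perception model's training set alongside real footage. The directory layout, file naming, and the synthetic-data cap are all invented for illustration, not drawn from any AV company's actual pipeline.

```python
import json
import random
from pathlib import Path

# Hypothetical directories: real dashcam clips and AI-generated edge cases
# (e.g., "San Francisco at 4am in a cyclone") live side by side.
REAL_DIR = Path("data/real_clips")
SYNTHETIC_DIR = Path("data/synthetic_clips")

def build_training_manifest(synthetic_fraction: float = 0.2, seed: int = 0) -> list[dict]:
    """Mix real and synthetic clips into one shuffled training manifest.

    synthetic_fraction caps how much of the final set is generated video,
    so rare simulated scenarios supplement rather than swamp real data.
    """
    real = [{"path": str(p), "source": "real"} for p in sorted(REAL_DIR.glob("*.mp4"))]
    synth = [{"path": str(p), "source": "synthetic"} for p in sorted(SYNTHETIC_DIR.glob("*.mp4"))]

    # Cap synthetic examples relative to the amount of real footage.
    max_synth = int(len(real) * synthetic_fraction / (1 - synthetic_fraction))
    rng = random.Random(seed)
    rng.shuffle(synth)
    manifest = real + synth[:max_synth]
    rng.shuffle(manifest)
    return manifest

if __name__ == "__main__":
    manifest = build_training_manifest()
    Path("train_manifest.json").write_text(json.dumps(manifest, indent=2))
```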

While Chandrasekaran emphasized that these are still early days for this kind of video generation, he predicts that experimentation on the enterprise side will start to pick up later this year.

“The maturity of the models means that maybe by late 2024, early 2025, we will kind of see, maybe, enterprise pilots around video generation,” Chandrasekaran said.
