Like many high-flying startups, Runway ML started out as a pet project of graduate students.
Now, it’s evolved into a video-editing company making waves in generative AI.
In 2021, members of Runway ML’s research team coauthored a paper on latent diffusion models that would help kickstart Stable Diffusion and generative AI models like it. The influence of that paper, plus clients like New Balance and CBS’s The Late Show with Stephen Colbert, led to even more growth. Last year, Runway’s three-year beta product graduated to an enterprise-level offering, and in December, Runway raised $50 million in funding at a $500 million valuation. The company declined to share current revenue or growth numbers.
As business interest in generative AI surges, here’s a look at how some organizations are using this particular company’s tools to automate parts of the video-editing process.
AI tools killed…the editing star?
A member of the VFX team behind at least one Oscar-nominated film, Everything Everywhere All at Once, made use of the company’s tools.
Evan Halleck, a VFX artist who worked on the film, told us he used Runway’s Green Screen tool to automate rotoscoping, the process of tracing a subject’s outline frame by frame so it can be composited onto a new background.
“I only used Runway on the tail end of Everything Everywhere All At Once,” Halleck told us in an interview, adding, “Unfortunately I found it towards the tail end, otherwise I probably would’ve used it more, but I mainly used it for the rock scenes—they were shot on a rig that was kind of pushing them, and we had to digitally remove them. So I had to cut out characters and recreate a new background…Cutting them out by hand was pretty difficult—with sand and things moving, it was just hard to get a cut that looked really nice.”
Since that film, Halleck has used Runway for multiple other projects, he said.
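Runway’s Green Screen tool is proprietary, but the underlying idea, generating a subject mask for every frame and compositing the cutout over a new plate, can be sketched in a few lines of Python. The sketch below is a minimal, conceptual version: it substitutes OpenCV’s classical MOG2 background subtractor as a naive stand-in for the learned segmentation model a tool like Runway’s would use, and the file names are hypothetical.

```python
# Conceptual sketch of automated rotoscoping: estimate a per-frame subject
# mask, then composite the cutout onto a new background plate.
# NOTE: this uses OpenCV's motion-based MOG2 subtractor as a naive stand-in
# for the learned segmentation a tool like Runway's Green Screen would use.
import cv2
import numpy as np

def composite_video(src_path: str, bg_path: str, out_path: str) -> None:
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (w, h))
    background = cv2.resize(cv2.imread(bg_path), (w, h))
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Per-frame mask: 255 where the moving subject is detected.
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                np.ones((5, 5), np.uint8))
        # Feather the mask and use it as an alpha channel for the blend.
        alpha = cv2.GaussianBlur(mask, (9, 9), 0)
        alpha = alpha.astype(np.float32)[..., None] / 255.0
        comp = (alpha * frame + (1.0 - alpha) * background).astype(np.uint8)
        writer.write(comp)

    cap.release()
    writer.release()

# Hypothetical inputs: a source shot and the replacement background plate.
composite_video("rock_scene.mp4", "new_plate.jpg", "composited.mp4")
```

A motion-based subtractor like this breaks down in exactly the conditions Halleck describes, with sand and debris moving alongside the subject; closing that gap is what the learned models behind tools like Runway’s are for.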
“[Rotoscoping] is famously a painstaking process—back in the day, before AI tools and machine learning…it would take days for a five- or six-second shot, sometimes, because you’re going through [24] frames per second and cutting things out every single frame,” Halleck said, adding, “[For] a pilot I’ve just been working on, I think [using Runway] has probably saved me, in the grand scheme of things, a week or two of time.”
Video editors on The Late Show with Stephen Colbert have also used Runway’s AI tools for rotoscoping, a process that could take the team five hours per shot by hand, compared to five minutes with the software, according to a case study published by the company.
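Some rough arithmetic shows why the manual version takes hours. At film’s standard 24 frames per second, even a short shot means more than a hundred individual frames to trace; the per-frame time in the sketch below is an illustrative assumption, not a figure from Runway or the Colbert team.

```python
# Back-of-envelope math on manual rotoscoping workload.
FPS = 24                       # standard film frame rate
shot_seconds = 6               # a short shot, per Halleck's example
minutes_per_frame_by_hand = 2  # assumed time to trace one frame manually

frames = FPS * shot_seconds                          # 144 frames
hours_by_hand = frames * minutes_per_frame_by_hand / 60
print(f"{frames} frames is roughly {hours_by_hand:.1f} hours of tracing")
```

Under those assumptions, a single six-second shot works out to 144 frames and roughly 4.8 hours of tracing, in the same ballpark as the five-hours-per-shot figure from the Colbert case study.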
Runway’s research scientists work alongside its video team, and the deployment, infrastructure, and training of its 35 tools are all handled in-house, according to Cristóbal Valenzuela, co-founder and CEO of Runway. The company’s offerings fall into two main categories: generative AI for video and automation for video editing. It sells different subscription levels, such as a Pro membership, which costs $144 per user annually, and an enterprise-level Team membership, priced at $336 per user annually.
When asked about potential competition from Big Tech companies making generative AI video tools—like Meta’s Make-A-Video and Google’s Imagen Video—Valenzuela replied that a product and a model are “not necessarily” the same, and that Runway specializes in building products.
With competition in the generative AI space at an all-time high (the sector raised $1.4 billion last year, per PitchBook), the company is leaning on research and product integration to set itself apart, according to Valenzuela.
“You don’t have researchers siloed,” Valenzuela said, adding, “[At] the same table, a research scientist is working on a novel technique for video generation, working with someone who’s been working on videos for 20 years. The level of understanding, conversations, and insights that both gain is very unique.”