Gen AI Video Limbo: Why Studios Are Still Uncertain

[Illustration: Caution tape spelling out "AI" (Variety VIP+)]

In this article

  • Major Hollywood and VFX studios remain unresolved regarding generative imagery onscreen in film and TV
  • Industry restraint is due to copyright uncertainty, lack of assurance about data security and inadequate tools
  • Early discussions are focused on creating a set of evaluation criteria and AI models and software approved for production use

While Hollywood has taken steps toward generative AI video in film and TV production — most publicly in Lionsgate’s partnership with Runway — the industry remains largely in a state of limbo, according to several VIP+ conversations with sources at prominent VFX studios. Every major Hollywood and VFX studio is still trying to figure out what is and isn’t okay to do.

Film, TV and VFX studios see the following concerns as currently preventing wider use of AI imagery:

1. Copyright ambiguity: Industrywide restraint toward AI video stems primarily from the still-unresolved legal status of generative AI.

VIP+ has referred to this uncertainty as gen AI’s copyright conundrum. First, studios still can’t be definitively sure that using an AI model or tool trained on unlicensed copyrighted material won’t itself constitute copyright infringement. Second, raw AI outputs cannot on their own be copyrighted without subsequent human editing, and it’s unclear how much editing the U.S. Copyright Office would require to consider a work copyrightable.

AI developers don’t typically publish the data sources used to train diffusion models. This opacity has made it impossible for enterprises trying to operate responsibly to evaluate models or tools for infringement risk, meaning those tools are off-limits by default.

“One of the big problems we have with generative imagery is that large diffusion models require vast datasets, and the technology companies are not divulging what those datasets are,” a VFX source told VIP+. “Various court cases still need to go through to tell us what is okay and what isn’t okay. While that’s going through, we have clients on the film side who don’t want to go anywhere near it until some clarity is brought to the landscape.”

2. Data security: Studios need any data they create or upload to video generation software to be completely siloed and secure, meaning they wouldn’t be comfortable using cloud-based services or having any studio data fed back into the model for training purposes (even though it’s likely studio content has already been used to train these models without permission).

For example, sources agreed studios would avoid Chinese video models specifically due to unknown data security risks. Any actor footage couldn’t go near any cloud-based generative video tool, a VFX source told VIP+: “Suppose we had a shot that included an A-list actor — that would be an immediate [no-go], ‘we can’t do that.’”

3. Tool performance: Despite significant improvement, video generation is still not good enough for most film and TV production use cases. Sources cited insufficient image quality and controllability.

“Even if the legal ambiguity was resolved and we could use [generative video] across the board, the lack of control in the current round of models would make it challenging in the majority of use cases,” said a VFX source. “We run tests on things, because a lot of stuff makes for a cool video on YouTube but when you try and use it in production, it doesn’t stack up for various reasons.” Even where video generation has been used on client work, it’s been sparingly, such as only to create “generic” types of shots (e.g., establishers) in a commercial or add movement to a matte painting.

Industry uncertainty toward generative image and video has been especially challenging for VFX. VFX studios have suddenly needed to build internal expertise on fast-emerging and changing AI models, tools and techniques. Some have further taken it upon themselves to “get into the weeds” and become deeply informed on U.K. and U.S. copyright law and data provenance, so they can critically evaluate the legality and data-related risks of different AI systems and appropriately guide clients.

For example, one VFX studio source has set up an AI task force of domain experts across the company to track and test new developments and determine the studio’s stance toward a given model or tool. “We bring things in front of them and say this is something people are asking to use in production, here’s the data it trained on, the licensing of the model. Is it OK, is it not OK, so we can give a rationale as to why we can or can’t use it,” a VFX supervisor told VIP+.

But VFX studios have also had to contend with “wildly” varying risk tolerance levels toward gen AI among clients, tailoring their approaches to individual client preferences across a wide range of brands or film and TV studios.

While brand and advertising clients have been more open to allowing gen AI in final content deliverables — with some even requesting it, enthusiastic about exploring its new creative possibilities — the film and episodic client side has been far more restrictive. Yet film studios aren’t monolithic either; stances would likely differ from one studio to the next.

“Every studio is trying to figure out what their line is in this new landscape,” said a VFX source. “Some studios don’t want us to go anywhere near it until the [legal ambiguity] is sorted. Other studios want us to disclose what we’re doing but don’t rule it out entirely just yet.”

The industry needs clearer guidance on which models and tools are acceptable to use, and which aren’t, to help studios responsibly navigate what one source described as a “complete minefield.”

Studios have been internally discussing their own stances. More recently, conversations have also begun among studios, VFX houses and industry groups, some of which are pushing to develop a clear, stated and agreed-upon set of criteria for designating an AI model or software tool as approved for production use.

“This stuff is going to be used, but at the moment, we’re in a bit of a gray area. We need that code of conduct, agreed across the industry about what is okay and not okay, a list of rules that we’re going to apply to things and an approved list of tools,” said the source, adding that such a list could come from an industry organization such as the Academy Software Foundation (ASWF).

Even so, producing any such list for the industry could drag or stalemate for all the same reasons gen AI has already confounded studios. Unanimous agreement seems unlikely when studios have different priorities and risk tolerances; defining what’s okay is hard when the law isn’t settled; and approving any image or video model is hard when AI developers don’t disclose where the training data came from, or when the licenses governing model use are ambiguous or hard to interpret.

Some AI already clears the bar by avoiding or minimizing these risks. For example, “clean data” AI image and video models — trained exclusively on owned or opt-in licensed data, with no unlicensed copyrighted material — whose developers fully disclose data sources and indemnify end users would minimize perceived liability risk around model training.

Some longstanding and noncontroversial machine learning techniques that have been used for years should also be deemed OK, such as image de-noising and even face-swapping, where human actor data is siloed and their performances drive the onscreen visuals. Generalized confusion and controversy about generative AI has led some clients to question even these techniques and “tar them with the same brush,” one VFX source said. The AI systems raising the most doubt are image and video generators.

“I hate the place we’re at right now, where there’s even a question of is it right or wrong to use a tool?” said Daniel Barak, VP and global executive director at R/GA. “The legalities have to be figured out. There's no world where it's just going to end. Like, ‘Oh, sorry, we couldn't solve it. You’ll never be able to use this magic.’ You can't leave that much power on the table and just basically rule it out as something you can't use.”
