A crescendo of debate has accompanied the recent rise of AI in post-production. But how is it actually being used?
In this article, we’ll cover AI’s use in the crafts of post. As workflows get quicker, storytelling gets tighter, and delivery gets smoother, it’s important that industry professionals know how to use these tools responsibly—and creatively.
One such industry professional is Eric Wilson, Senior Director of VFX and Image Pipeline at Flawless, Visual Effects Society Board Member, and a veteran of large-scale post production innovation.
Wilson joined Flawless from another company, where he oversaw a groundbreaking project to deliver the world’s fastest, highest-resolution, highest-frame-rate file ever. He was interviewed to help guide this article.
ADR (automated dialogue replacement) has long been an essential post-production technique: new or replacement lines of dialogue are added because of changes to the story or problems with on-set sound. Often, this has meant editors relying on alternative angles or cutaways so the audience doesn’t notice the change.
“It was simply too costly and too time-consuming to actually go in and re-articulate people’s mouths to say a new line,” says Wilson.
“So, what everybody did instead was cut around it—they cut to behind the shoulder, or to a place where the dialogue could be delivered off-screen. But very rarely did anyone actually go in and try to re-articulate the mouth to new dialogue.”
But recent developments in AI are changing the way ADR is done. The AI-assisted version, known as visual ADR, opens up a range of new use cases. For example:
Had it been around at the time, visual ADR for censorship editing could well have been used on Galaxy Quest (1999), when Sigourney Weaver shouts, “Well screw that!”.
Wilson calls it an evolution of the art form, not a replacement.
“We’re not replacing something that somebody already did. We’re actually opening up a new channel for people to be able to complete work.”
This new “AI dialogue polish” is entirely non-destructive, meaning it sits on top of traditional workflows rather than replacing them. As Wilson explains, “It actually makes ‘fix it in post’ even a little bit broader than it used to be.”
Now, that moment the director wishes had just one more line? It’s available—ethically, and within the existing creative process.
We’re seeing a transformation in the way motion pictures and series are dubbed, too.
As streamers release content to global audiences, studios are increasingly turning to AI-assisted translation and sync tools to preserve emotional nuance across languages and keep audiences engaged.
One example of this is the recent film Watch the Skies, which used Flawless’ TrueSync™ technology to create an English-language version of the (originally Swedish) film UFO Sweden.
If you’re a Prime member in the US or UK, you can now stream Watch the Skies for free.
Wilson’s emphasis on ethics is key here. Before joining the company, he researched the field, and that research revealed:
“Flawless was one of the only companies that said we are ethical around our treatment of AI. We are using sourced material that we’re either licensed to use, or we have actual permission from the actors themselves.”
In a year when performer consent has become a defining industry issue, this approach stands apart. The best AI localization workflows in 2025 are those that support the work of real voice artists—ensuring precision, performance, and respect.
From film scans to 8K footage, AI is restoring frames that were once considered lost causes.
Models trained for denoising, deblurring, and upscaling now allow archival material to sit seamlessly alongside modern digital footage.
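For a sense of what the upscaling piece can look like in practice, here is a minimal sketch using OpenCV’s dnn_superres module with a pretrained ESPCN model. The model file and frame names are placeholders, and real restoration pipelines rely on far more specialized, temporally aware models.

```python
# Minimal single-frame upscaling sketch using OpenCV's dnn_superres module.
# Assumes opencv-contrib-python is installed and a pretrained ESPCN_x2.pb
# model file has been downloaded separately; all file paths are placeholders.
import cv2

def upscale_frame(frame_path: str, model_path: str = "ESPCN_x2.pb") -> None:
    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel(model_path)        # load the pretrained super-resolution network
    sr.setModel("espcn", 2)         # ESPCN architecture, 2x upscale factor
    frame = cv2.imread(frame_path)
    upscaled = sr.upsample(frame)   # run the model on the frame
    cv2.imwrite("frame_2x.png", upscaled)

if __name__ == "__main__":
    upscale_frame("archival_frame.png")
```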
“Flawless is outputting ACES EXRs,” says Wilson.
“That means they’re working in a theatrical color space—which a visual effects artist will tell you is the only way to actually get the work done that we do.”
In other words, quality still matters.
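To make the color-pipeline point concrete, here is a minimal sketch, assuming the open-source colour-science package, that converts an sRGB frame into the ACES2065-1 color space and writes it out as a floating-point EXR. The filenames are placeholders, and this illustrates the general idea rather than Flawless’s actual pipeline.

```python
# Minimal sketch: convert an sRGB frame to ACES2065-1 and write a float EXR.
# Uses the open-source colour-science package; filenames are placeholders.
import colour

# Read a display-referred sRGB frame as floating-point RGB (drop alpha if present).
srgb = colour.read_image("graded_frame.png")[..., :3]

# Decode the sRGB transfer function and convert to the ACES2065-1 (AP0) space.
aces = colour.RGB_to_RGB(srgb, "sRGB", "ACES2065-1", apply_cctf_decoding=True)

# Write the scene-linear result as a 32-bit float EXR for downstream VFX work.
colour.write_image(aces, "graded_frame_aces.exr", bit_depth="float32")
```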
AI is now part of every modern NLE, speeding up everything from dialogue transcription to scene search and audio cleanup. Editors can find moments, fix mistakes, and finish faster—without losing creative control.
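As a small illustration of the transcription piece, the sketch below uses the open-source Whisper model to turn a dialogue clip into time-coded, searchable text. The clip filename is a placeholder; NLE vendors ship their own integrated equivalents.

```python
# Minimal dialogue-transcription sketch using the open-source Whisper model
# (pip install openai-whisper). The clip filename is a placeholder; this only
# illustrates the idea behind searchable, time-coded dialogue in an NLE.
import whisper

model = whisper.load_model("base")              # small general-purpose model
result = model.transcribe("dialogue_clip.wav")  # returns text plus timed segments

# Print a simple searchable log of time-coded lines.
for segment in result["segments"]:
    print(f'{segment["start"]:7.2f}s - {segment["end"]:7.2f}s  {segment["text"].strip()}')
```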
Wilson sees this as part of the “addition, not substitution” mindset.
“What we’re doing is an addition to traditional filmmaking. It’s non-destructive to the existing process.”
These tools don’t change what editors do—they remove friction, freeing time for storytelling and collaboration.
There’s a rapid evolution happening in the handoff between offline and online editing—a stage once defined by rigid sequencing.
What has always been a stop-start process—picture lock first, polish later—is now a faster, AI-assisted continuum where creative and technical decisions move in parallel.
In the offline stage, editors can explore freely within a non-destructive environment. Using DeepEditor™ review vubs (visual dubs), they can test alternate takes, re-time performances, or adjust pacing without committing to a final render. It’s the modern equivalent of creative sketching—fast, collaborative, and reversible.
Wilson describes this as “where the magic happens—the part where you can be wrong a thousand times before you’re right once.” Offline editing powered by AI transforms what used to be a series of locked cuts into living, malleable story moments.
Once creative intent is locked, the process moves to online editing—the finishing stage. Here, all assets are conformed at maximum output quality, color is finalized in ACES or theatrical EXR pipelines, and any review vubs are elevated to final theatrical-quality vubs.
AI ensures that every frame meets delivery standards while preserving the creative integrity of the offline phase.
This evolution in workflow means filmmakers can iterate more boldly during the edit—and still deliver at the highest professional fidelity once sign-off is secured.
Ever noticed a rogue frame or distracting element break the flow of a shot? AI inpainting tools can now remove unwanted details across moving footage—automatically preserving perspective, texture, and continuity.
Advanced systems like DeepEditor™ even handle occlusions, motion blur, and difficult lighting conditions, maintaining seamless realism across complex sequences.
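To show how approachable the basic idea has become, here is a heavily simplified, per-frame sketch using OpenCV’s classical Telea inpainting. It is nothing like the temporally aware neural systems described above, and the filenames and mask are placeholders.

```python
# Heavily simplified inpainting sketch using OpenCV's classical (non-neural)
# Telea algorithm on each frame independently. Production tools use temporally
# aware neural models; filenames and the mask here are placeholders.
import cv2
import numpy as np

def remove_object(frame_path: str, mask_path: str, out_path: str) -> None:
    frame = cv2.imread(frame_path)
    # The mask marks the unwanted element in white (e.g. a boom mic or rig).
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    mask = (mask > 127).astype(np.uint8) * 255
    # Classical Telea inpainting fills the masked region from surrounding pixels.
    cleaned = cv2.inpaint(frame, mask, 5, cv2.INPAINT_TELEA)
    cv2.imwrite(out_path, cleaned)

if __name__ == "__main__":
    remove_object("shot_0001.png", "mask_0001.png", "shot_0001_clean.png")
```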
What once required high-end VFX pipelines is now within reach for smaller studios and indie creators. Still, ethical application remains essential: these models should refine and restore footage, not alter performance or intent.
Sound design has also become one of AI’s most powerful use cases in post-production. Neural models can now separate stems, recreate missing ambience, and even synthesize Foley that blends seamlessly with recorded environments.
AI-assisted DAWs allow mixers to adjust the spatial feel of a room or re-balance dialogue without re-recording. Combined with post-aware models, AI can recompose background layers that follow on-screen movement, maintaining realism and emotional tone.
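As a taste of the stem-separation piece, the sketch below uses the open-source Spleeter library, whose two-stem model splits vocals from accompaniment; it stands in here for isolating dialogue from a mix, with a placeholder filename, and real post pipelines use purpose-built dialogue, music, and effects models.

```python
# Minimal stem-separation sketch using the open-source Spleeter library
# (pip install spleeter). Its 2-stem model splits vocals from accompaniment,
# standing in here for dialogue isolation; the input filename is a placeholder.
from spleeter.separator import Separator

separator = Separator("spleeter:2stems")  # pretrained vocals / accompaniment model

# Writes vocals.wav and accompaniment.wav into output/mixdown/.
separator.separate_to_file("mixdown.wav", "output/")
```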
For many in post, this is the next great leap: expanding audio editing, not replacing it.
What ties these innovations together is intention. AI in 2025 has made post-production more powerful, more precise, and more human.
As Wilson puts it, “We’re not replacing something that somebody already did. We’re actually opening up a new channel for people to be able to complete work.”
That’s the heart of AI in post-production today: a non-destructive, permission-based, creative toolset that lets filmmakers realize the story they meant to tell—all while preserving the artistry that got them there.