Flawless at SIGGRAPH 2024

19th August 2024 Blog

Estimated Reading Time: 4 minutes

This blog post highlights Flawless’ participation at SIGGRAPH, where we showcased HQ3DAvatar, a new technique for creating ultra-realistic digital human faces. This technology, which uses advanced rendering techniques, has the potential to revolutionize filmmaking by making digital characters more lifelike and expressive, saving time and money in film production.

Why SIGGRAPH Matters

In the constantly evolving world of computer graphics and interactive technologies, SIGGRAPH is the premier event where the latest advancements and innovations are unveiled. For Flawless, attending SIGGRAPH is a significant annual milestone in our journey to revolutionize filmmaking with AI.

Why SIGGRAPH Is Important for Filmmakers

SIGGRAPH is a must-attend event for filmmakers. It’s where the best minds in the industry reveal new techniques and tools that can change how movies are made. From stunning special effects to realistic characters and immersive virtual worlds, the innovations at SIGGRAPH often set the tone for the future of visual storytelling in films, video games, virtual reality, and more. It’s a place where creativity thrives, new ideas emerge, and important connections are made.

For us at Flawless, SIGGRAPH is also crucial for attracting top talent. It’s an opportunity to bring together experts from both industry and academia to work on groundbreaking projects.

Meet HQ3DAvatar

This year we were proud to present HQ3DAvatar: High Quality Controllable 3D Head Avatar [video], a paper published by Flawless scientists and world-leading researchers K. Teotia, Mallikarjun B R, X. Pan, Hyeongwoo Kim, Pablo Garrido, Mohamed Elgharib, and Christian Theobalt. This project epitomizes Flawless’ commitment to pushing the boundaries of what’s possible in generative AI.

Let’s take a deeper look at this groundbreaking science and the impact it’s making on the film and TV landscape.

Multi-view Volumetric Rendering Techniques

Creating 3D models of human heads that are both realistic and dynamic has long been a challenge. Traditional methods often struggle with capturing intricate details such as the inside of the mouth, hair, and the subtle changes in head shape during movement.

Above: An image showing poor-quality results from alternative rendering techniques.

Flawless’ approach introduces a novel solution. We use advanced multi-view volumetric rendering techniques to create digital head avatars that closely resemble real people.

Note: Multi-view volumetric rendering is a way to create and view 3D images, like a virtual object or scene, from different angles. For example, imagine watching a 3D scan of a heart in a medical app, where you can rotate and explore it from all sides as if you were holding it in your hands. That's volumetric rendering in action. We apply this technology to human heads.
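To make the idea concrete, here is a toy, self-contained Python sketch of volumetric rendering along a single camera ray. Everything in it (the soft-sphere density field, the sample count, the colors) is invented purely for illustration and is not Flawless' renderer:

```python
import numpy as np

def render_ray(origin, direction, n_samples=64, near=0.0, far=2.0):
    """Volume-render one ray by alpha-compositing color samples along it."""
    ts = np.linspace(near, far, n_samples)
    dt = ts[1] - ts[0]
    color = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed
    for t in ts:
        p = origin + t * direction
        # Hypothetical scene: a soft sphere of radius 0.5 at the origin,
        # colored by position; it stands in for a learned head model.
        density = 5.0 * np.exp(-8.0 * max(np.linalg.norm(p) - 0.5, 0.0))
        rgb = 0.5 + 0.5 * np.tanh(p)
        alpha = 1.0 - np.exp(-density * dt)   # opacity of this segment
        color += transmittance * alpha * rgb  # alpha compositing
        transmittance *= 1.0 - alpha          # light passing through
    return color, transmittance

# A ray fired from in front of the "head" straight through it.
c, T = render_ray(np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0]))
```

Because the ray passes through the dense sphere, almost no light survives (`T` is near zero), and the accumulated `c` is the color the camera would see for that pixel. Rendering a full image means repeating this for one ray per pixel, from any viewpoint you like.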

The magic lies in a special type of mathematical function known as an implicit function, which is controlled by a neural network and informed by monocular video of a subject. This function learns a canonical way to describe heads, making models easier and faster to train and enabling high-quality images.
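As a rough sketch of what an implicit function looks like in code: a small network maps a 3D point plus an expression code to a color and a density, and the renderer queries it wherever it needs a sample. The architecture, dimensions, and random weights below are hypothetical illustrations, not the published model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy implicit head function: a tiny randomly initialized MLP.
# Real systems learn these weights from video; random weights here
# only illustrate the function's inputs and outputs.
W1 = rng.normal(size=(3 + 8, 64))  # input: 3D point + 8-dim expression code
W2 = rng.normal(size=(64, 4))      # output: RGB (3 values) + density (1)

def implicit_head(point, expression):
    """Query the implicit function at one 3D point for one expression."""
    x = np.concatenate([point, expression])
    h = np.tanh(x @ W1)                   # hidden layer
    out = h @ W2
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))  # sigmoid keeps colors in (0, 1)
    density = np.log1p(np.exp(out[3]))    # softplus keeps density >= 0
    return rgb, density

point = np.array([0.1, -0.2, 0.3])  # a point near the "face" surface
smile = rng.normal(size=8)          # a made-up expression code
rgb, density = implicit_head(point, smile)
print(rgb.shape, density >= 0)      # (3,) True
```

Changing only the expression code changes the colors and densities the function returns, which is what makes the avatar controllable: one network, many expressions.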

How It Works

The process begins with regular video footage of a person’s face. A component of the neural network analyzes this footage to understand what makes that face unique. This learned representation allows the avatar to mimic facial expressions with remarkable accuracy and realism.

We’ve made significant strides towards reducing the ‘uncanny valley’ effect (the unsettling feeling people get when something looks almost, but not quite, human), and our goal is to bridge that gap entirely. Our method combines multi-view imagery with optical flow, a technique that allows us to compare images accurately at the pixel level.

This approach helps create avatars that look more natural and realistic, even when changing expressions or moving. However, it’s important to note that we’re still working on perfecting certain aspects. For instance, we’re improving eye movement consistency with the input video and refining how we model hair dynamics. We’re also continuously enhancing our ability to capture subtle nuances during expression transfer.
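The pixel-level comparison that optical flow enables can be illustrated with a toy example: if a flow field correctly describes how pixels moved between two frames, warping one frame by that flow should reproduce the other. The tiny images, integer flow, and error metric below are all made up for illustration and are far simpler than a production pipeline:

```python
import numpy as np

def photometric_error(img_a, img_b, flow):
    """Warp img_b back by an integer per-pixel flow and compare to img_a."""
    h, w = img_a.shape
    err = 0.0
    for y in range(h):
        for x in range(w):
            dx, dy = flow[y, x]
            yy = min(max(y + dy, 0), h - 1)  # clamp to image bounds
            xx = min(max(x + dx, 0), w - 1)
            err += abs(img_a[y, x] - img_b[yy, xx])
    return err / (h * w)  # mean absolute pixel difference

# A bright square that moves 2 pixels to the right between frames.
a = np.zeros((8, 8)); a[2:5, 2:5] = 1.0
b = np.zeros((8, 8)); b[2:5, 4:7] = 1.0
flow = np.full((8, 8, 2), (2, 0))       # every pixel moved by (+2, 0)
print(photometric_error(a, b, flow))    # 0.0: the flow explains the motion
```

When the flow is right, the error drops to zero; when it is wrong (try an all-zeros flow), the error is positive. Minimizing this kind of pixel-level error is what keeps a rendered avatar aligned with the real footage.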

The Benefit

Our technique excels at handling complex facial expressions and can generate images from various angles in real time without losing quality. It outperforms existing methods both visually and quantitatively, offering a significant leap forward in realism and functionality, with wide applications across the movie industry.

Impact on Filmmaking

The implications for filmmaking are profound. Filmmakers can now create digital characters that not only look incredibly real but also move and emote more naturally than ever before. This advancement has the potential to save time and money in production, and open up new ways for great stories to reach the global audience they deserve.

Our work at SIGGRAPH is a testament to our science and software. By continuing to push boundaries and explore new frontiers, we’re shaping the future of visual storytelling, one frame at a time.