Meet DigiDoug, the first digital human to give a TED talk in real time. 

DigiDoug is the virtual version of Dr. Doug Roble, senior director of Software R&D at Digital Domain, the award-winning visual effects studio behind the characters and visual effects for movies like The Curious Case of Benjamin Button, Maleficent, Disney's Beauty and the Beast and Avengers: Endgame.

Roble and Digital Domain’s Digital Human Group have presented DigiDoug at multiple events, showcasing their state-of-the-art digital human technology, which is driven by an inertial motion-capture suit and a single-camera capture for facial animation. 

But to capture and recreate emotions and actions in real time, the Los Angeles-based studio turned to two more powerful tools: machine learning and real-time rendering.

With NVIDIA RTX technology and Unreal Engine from Epic Games, Digital Domain is bringing photorealistic digital humans to life — all in real time.

The Machine (Learning) Behind Digital Humans
Creating realistic digital humans in film and television is known as the final frontier of visual effects. It typically takes thousands of hours and hundreds of artists to create a virtual human that captures all the physical details of a real-life person. 

With machine learning, Digital Domain can speed up the process while achieving an unprecedented level of photorealism in their characters.

To create DigiDoug, Digital Domain integrated machine learning into its creative process, but the technique needed enormous amounts of data to work. So the team started by taking thousands of images of Roble’s face from different angles and under different lighting conditions, capturing as much data as possible.
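
To get a feel for what a capture set like that involves, here’s a minimal sketch of how the photos might be organized for training. The file layout, column names and Python class are illustrative assumptions, not Digital Domain’s actual data format:

```python
# Hedged sketch: one way to index a face-capture photo session for training.
# The manifest layout and label names are assumptions, not the studio's format.
import csv
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset

class FaceCaptureDataset(Dataset):
    """Loads (image, camera angle, lighting setup) triples listed in a manifest CSV."""

    def __init__(self, root, manifest="captures.csv"):
        self.root = Path(root)
        with open(self.root / manifest, newline="") as f:
            # Expected columns: filename, camera_angle_deg, lighting_setup
            self.rows = list(csv.DictReader(f))

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        row = self.rows[idx]
        image = Image.open(self.root / row["filename"]).convert("RGB")
        return image, float(row["camera_angle_deg"]), row["lighting_setup"]
```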

The team built a deep neural network that would take those images and learn all the details of Roble’s face — from his range of expressions to fine physical details — so it could generate the information needed for DigiDoug to move and emote like a real person. 
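
What might that learning step look like under the hood? The sketch below shows a small PyTorch-style network that maps captured expression features to geometry offsets and a detail texture. The architecture, dimensions and placeholder data are purely illustrative assumptions, not the studio’s actual model:

```python
# Illustrative sketch (not Digital Domain's model): a network that maps
# low-dimensional expression features extracted from the capture to
# high-resolution facial detail used to render the digital double.
import torch
import torch.nn as nn

class FaceDetailNet(nn.Module):
    def __init__(self, feature_dim=128, vertex_count=5000, texture_res=64):
        super().__init__()
        self.vertex_count = vertex_count
        self.texture_res = texture_res
        # Shared trunk that embeds the captured expression features.
        self.trunk = nn.Sequential(
            nn.Linear(feature_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
        )
        # Head predicting per-vertex geometry offsets (x, y, z).
        self.geometry_head = nn.Linear(512, vertex_count * 3)
        # Head predicting a coarse RGB detail texture.
        self.texture_head = nn.Linear(512, texture_res * texture_res * 3)

    def forward(self, features):
        h = self.trunk(features)
        offsets = self.geometry_head(h).view(-1, self.vertex_count, 3)
        texture = self.texture_head(h).view(-1, 3, self.texture_res, self.texture_res)
        return offsets, texture

# One training step on placeholder data standing in for the photo sessions.
model = FaceDetailNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

features = torch.randn(8, 128)              # placeholder captured features
target_offsets = torch.randn(8, 5000, 3)    # placeholder scan geometry
target_texture = torch.randn(8, 3, 64, 64)  # placeholder scan texture

pred_offsets, pred_texture = model(features)
loss = loss_fn(pred_offsets, target_offsets) + loss_fn(pred_texture, target_texture)
loss.backward()
optimizer.step()
```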

DigiDoug is then rendered in real time using NVIDIA RTX technology and Unreal Engine, the game engine developed by Epic Games. Roble’s movements and expressions are captured with the motion-capture suit and camera, fed to the machine learning software, and applied to the rendered digital human in real time. 
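
As a rough illustration of that real-time loop, the sketch below reads a capture frame, runs a stand-in model and streams animation values to a render process. The mocap reader, the model and the UDP endpoint are hypothetical placeholders, not the actual pipeline feeding Unreal Engine:

```python
# Hedged sketch of a real-time capture-to-render loop (simplified, not the
# studio's production pipeline): read a frame, run the trained network, and
# stream the result to the renderer over a hypothetical local UDP endpoint.
import json
import socket
import time

import torch
import torch.nn as nn

RENDERER_ADDR = ("127.0.0.1", 9000)      # hypothetical port the render process listens on
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Stand-in for a trained model: captured features in, blendshape-style weights out.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 50))
model.eval()

def read_mocap_frame():
    # Placeholder for the suit/camera capture step; returns one frame of features.
    return torch.randn(1, 128)

frame_budget = 1.0 / 60.0                # aim for 60 frames per second
with torch.no_grad():
    for _ in range(600):                 # run for roughly 10 seconds in this sketch
        start = time.time()
        weights = model(read_mocap_frame())
        payload = json.dumps({"weights": weights.squeeze(0).tolist()}).encode()
        sock.sendto(payload, RENDERER_ADDR)   # renderer applies these to the face rig
        # Sleep off any remaining budget to hold the frame rate.
        time.sleep(max(0.0, frame_budget - (time.time() - start)))
```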

“Capturing all the details is what makes the digital human realistic,” said Roble. “What’s cool about machine learning is you can teach it to get better. With NVIDIA and Epic Games, Digital Domain is building the technology that allows artists to create the most photorealistic digital humans possible, and it’s all happening in real time.” 

For more information, watch the recent panel at SIGGRAPH on “Progress Towards Real-Time, Photorealistic Digital Humans,” moderated by Danielle Costa, vice president of visual effects at Marvel Studios. In addition to Roble, the talk features Simon Yuen, director of graphics at NVIDIA, and Vladimir Mastilovic, director of digital humans at Epic Games, who discuss the journey of digital humans and where the technology is headed.

Learn more about the latest NVIDIA technology by watching on-demand sessions or developer talks from SIGGRAPH.