Conference Tools of Tomorrow Digital Humans
ONLINE Raum Reutlingen Friday, May 06, 22:30

The Neural Rendering of THE CHAMPION

The Champion is the first full feature film to be neural rendered so that its actors could be converted from performing in German to performing in English. This Polish film, set during the Second World War, was filmed and finished entirely in German. Using advances in AI and machine learning, the actors' faces throughout the film have been replaced with inferred versions, visually built from the actors re-recording their dialogue in a sound studio.

The film tells the story of pre-war boxing champion Tadeusz "Teddy" Pietrzykowski, who in 1940 arrives with the first transport of prisoners to the newly created Auschwitz concentration camp. The story was filmed without any consideration of later dialogue replacement. 

This talk discusses the key production issues involved in doing professional neural rendering at such a large scale, spanning hundreds of shots. Smaller projects have been demonstrated before, but these often require hours of work per shot and massive manual intervention. Based on new technology, The Champion used only the footage already edited into the final film, combined with a robust and non-intrusive recording of the actors delivering their lines in English.

We will discuss the innovations in technology and provide insights into further advances the team is working on, including:

            • How the team found a general solution to actor re-capturing that was quick to set up and non-invasive to the actors' process. Our solution involved only five cameras, with no tracking markers, no per-shot or per-scene lighting, and no special camera calibration.

            • The workflow that was developed to remove the need to collect massive amounts of ML training data.

            • How our professional pipeline allows any film to be processed, accepting that most films would have no access to special calibration clips, additional filming, or even outtakes from the main unit. The process relies only on the finished film and the additional audio session recordings.

            • How the film was adapted rather than dubbed, with the input and cooperation of the director, the actors and the creative team. The same approach could, however, be adapted to replace dubbing for actors who are no longer available, or even alive.

The objective was to produce a robust pipeline, not merely a demo that would only work with extensive and time-consuming manual intervention.

Our production solution needed to allow for:

• vastly varying camera angles (not just the common face-swapping setup where the talent is facing the camera),

• dramatic changes in lighting, contrast and camera artifacts,

• no access to lengthy or specially shot training data of the final scenes, and

• actors with varying facial appearance over the course of the film; in our case, a boxer who at times is severely bruised and beaten up.

The talk is presented by Dr Hao Li, founder and CEO of Pinscreen in LA, with help from VFX supervisor Dr Mike Seymour. With the extensive use of visuals and clips, we will discuss both the lessons learnt and the advances in machine learning that allow the wide-scale adoption of this technology in place of traditional dubbing or subtitles.

Hao Li, Pinscreen