It is now possible to copy the expressions of one person's face from a video onto another face and produce a realistic animated video.
A team of researchers from the Technical University of Munich, the University of Bath, the Max Planck Institute for Informatics, Stanford University and Technicolor have come up with a deep learning based system that can transfer the full 3D head position, facial expression, eye gaze and even eye blinking from a source actor to a target actor.
Photo-realistic reanimation of portrait videos
To transfer expressions from a source actor to a target actor, an input video of each is needed. From these videos, the system first tracks the source and target actors using a face reconstruction approach that recovers a low-dimensional set of parameters: illumination, identity, head pose, expressions and eye gaze. These parameters are then copied from the source to the target sequence in a relative manner, i.e. as offsets with respect to a neutral reference frame. Finally, the system renders synthetic conditioning images of the target actor's face model under the modified parameters using hardware rasterisation.
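The relative transfer step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the toy 3-dimensional expression vectors, and the specific values are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch of relative parameter transfer: the source actor's
# deviation from their own neutral frame is applied on top of the target
# actor's neutral parameters. Names and shapes are illustrative only.

def transfer_relative(source_params, source_neutral, target_neutral):
    """Apply the source actor's offset from neutral to the target's neutral."""
    return target_neutral + (source_params - source_neutral)

# Toy 3-dimensional expression vectors for one frame (assumed values).
source_neutral = np.array([0.0, 0.1, 0.0])   # source actor at rest
target_neutral = np.array([0.2, 0.0, 0.1])   # target actor at rest
source_frame   = np.array([0.5, 0.1, 0.3])   # source actor mid-expression

target_frame = transfer_relative(source_frame, source_neutral, target_neutral)
print(target_frame)  # target parameters now carry the source's expression offset
```

Transferring offsets rather than absolute values is what lets the target actor keep their own identity and resting pose while taking on the source's motion.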
The realism of the videos is achieved by careful adversarial training, which produces modified target videos that mimic the behaviour of the synthetically created conditioning input. Because the system can freely combine source and target parameters, the researchers were able to demonstrate a large variety of reconstructed videos without explicitly modelling hair, body or background. For example, one can re-enact the full head using interactive user-controlled editing, or realise high-fidelity visual dubbing.
In their paper, the researchers concluded, “We have shown through experiments and a user study that our method outperforms prior work in quality and expands over their possibilities. It thus opens up a new level of capabilities in many applications, like video reenactment for virtual reality and telepresence, interactive video editing, and visual dubbing. We see our approach as a step towards highly realistic synthesis of full-frame video content under control of meaningful parameters. We hope that it will inspire future research in this very challenging field.”