Computer Vision News - May 2016

Research
Face2Face: Real-time Face Capture and Reenactment of RGB Videos

Every month, Computer Vision News reviews a research paper which has been published in our field. We do our best to choose one which you might not have read yet and which you will find worth reading about. This month we have chosen to review Face2Face: Real-time Face Capture and Reenactment of RGB Videos, a research paper presenting a novel approach for real-time facial re-enactment of a monocular video. The proposed model is able to re-render the facial expressions of the target video as driven by a source actor recorded with a webcam. The paper will be presented at the CVPR 2016 orals and you can read it here. We wish to thank the authors (Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt and Matthias Nießner) for kindly authorizing the use of their images.

Face2Face: the method

The method setup is illustrated in the figure above. The input consists of two monocular video sequences: the first is an ordinary video (e.g. from YouTube) showing the target actor to be re-enacted; the second shows the source actor, captured live with a commodity webcam. The output is a re-rendered video in which the facial expressions of the target actor are animated by the source actor in a photo-realistic fashion.

The proposed method includes four main steps (as shown in the figure below):

1. Offline reconstruction of the facial identity: the shape identity of the target actor is reconstructed with a new global non-rigid model-based bundling approach. The model parameters are shape, albedo, illumination and the camera's perspective projection. For efficiency, the parameters are estimated over k selected keyframes of the target video.
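To give a feel for the "bundling" idea in step 1 (one shared identity estimated jointly over k keyframes), here is a minimal toy sketch, not the authors' implementation: a hypothetical linear shape model whose single identity coefficient vector is fit by least squares to landmark observations stacked across several keyframes. All names and dimensions are illustrative assumptions.

```python
# Toy sketch (not the paper's method): jointly fitting one shared
# identity vector to observations from k keyframes, illustrating
# model-based bundling with a simple linear shape model.
import numpy as np

rng = np.random.default_rng(0)

n_landmarks, n_coeffs, k_frames = 30, 5, 4  # illustrative sizes

# Hypothetical linear model: landmarks = mean_shape + basis @ alpha
mean_shape = rng.normal(size=(n_landmarks * 2,))
basis = rng.normal(size=(n_landmarks * 2, n_coeffs))
true_alpha = rng.normal(size=(n_coeffs,))

# Simulated noisy landmark observations in k keyframes
observations = [
    mean_shape + basis @ true_alpha
    + 0.01 * rng.normal(size=mean_shape.shape)
    for _ in range(k_frames)
]

# Bundled least squares: stack all keyframes and solve for one
# shared alpha, since the actor's identity is constant in the video.
A = np.vstack([basis] * k_frames)
b = np.concatenate([obs - mean_shape for obs in observations])
alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
```

In the real system the unknowns also include albedo, illumination and the camera's perspective projection, and the optimization is non-linear; the stacking-across-keyframes structure is the point of the sketch.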
