Image animation consists of generating a video sequence in which an object in a source image is animated according to the motion of a driving video. We propose a novel framework that addresses this problem without using any annotation or prior information about the specific object to animate. Once trained on a set of videos depicting objects of the same category (e.g. faces, human bodies), our method can be applied to any object of that category. To achieve this, we decouple appearance and motion information using a self-supervised formulation. Our framework applies to a variety of object categories and extends to other tasks, such as unsupervised co-part segmentation for image swaps.
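The core idea of decoupling appearance from motion can be illustrated with a minimal sketch: keypoints are tracked in the driving video, and the displacement of those keypoints is used to warp the source image. The sketch below is a deliberately simplified stand-in for the method's learned dense motion network; it approximates the motion field with a single global translation (the mean keypoint displacement), and all function names are hypothetical.

```python
import numpy as np

def warp_by_keypoint_motion(source, kp_src, kp_drv):
    """Warp `source` so its keypoints move from kp_src to kp_drv.

    kp_src, kp_drv: (K, 2) arrays of (row, col) keypoint coordinates.
    A single mean translation stands in for the dense motion field
    that the actual model would predict from sparse keypoint motion.
    """
    h, w = source.shape[:2]
    # Average keypoint displacement approximates the object's motion.
    dy, dx = np.mean(kp_drv - kp_src, axis=0)
    # Backward warp: each output pixel samples the source at the
    # location it moved from, clipped to the image bounds.
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - dy).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - dx).astype(int), 0, w - 1)
    return source[src_y, src_x]

def animate(source, kp_src, kp_drv_per_frame):
    """Generate one warped frame per set of driving keypoints."""
    return [warp_by_keypoint_motion(source, kp_src, kp) for kp in kp_drv_per_frame]
```

In the actual framework, the keypoints themselves are discovered without supervision, and a generator network combines the source appearance with the predicted motion field instead of this hard pixel remapping.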
- A. Siarohin, S. Lathuilière, S. Tulyakov, E. Ricci, N. Sebe, Animating Arbitrary Objects via Deep Motion Transfer, IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019. (Spotlight Oral, 8% acceptance rate)
- A. Siarohin, S. Lathuilière, S. Tulyakov, E. Ricci, N. Sebe, First Order Motion Model for Image Animation, NeurIPS 2019.
- W. Wang, X. Alameda-Pineda, D. Xu, E. Ricci, N. Sebe, Learning How to Smile: Expression Video Generation with Conditional Adversarial Recurrent Nets, IEEE Transactions on Multimedia, 2020.