Face Reenactment
24 papers with code • 0 benchmarks • 1 dataset
Face Reenactment is an emerging conditional face synthesis task that aims to fulfill two goals simultaneously: 1) transferring the source face's shape to the target face, while 2) preserving the appearance and identity of the target face.
Source: One-shot Face Reenactment
Benchmarks
These leaderboards are used to track progress in Face Reenactment.
Most implemented papers
AnimeCeleb: Large-Scale Animation CelebHeads Dataset for Head Reenactment
We present a novel Animation CelebHeads dataset (AnimeCeleb) to address animation head reenactment.
AI-generated characters for supporting personalized learning and well-being
Advancements in machine learning have recently enabled the hyper-realistic synthesis of prose, images, audio and video data, in what is referred to as artificial intelligence (AI)-generated media.
Initiative Defense against Facial Manipulation
To this end, we first imitate the target manipulation model with a surrogate model, and then devise a poison perturbation generator to obtain the desired venom.
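The "initiative defense" idea above — attack a surrogate of the manipulation model so the real model's edits degrade — can be illustrated with a minimal FGSM-style sketch. This is a toy NumPy example under stated assumptions (a hypothetical linear surrogate and a made-up loss), not the paper's actual poison perturbation generator:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))      # toy linear stand-in for the surrogate manipulation model
x = rng.normal(size=8)           # flattened "face image" (toy scale)

def surrogate_loss(x):
    # Energy of the surrogate's output; driving this up is assumed
    # to disrupt the manipulation the surrogate imitates.
    return 0.5 * np.sum((W @ x) ** 2)

grad = W.T @ (W @ x)             # analytic gradient of the loss w.r.t. the input
eps = 0.05                       # perturbation budget (L-infinity bound)
x_protected = x + eps * np.sign(grad)   # one FGSM-style ascent step

# The perturbation stays within the budget by construction.
assert np.max(np.abs(x_protected - x)) <= eps + 1e-12
```

A real defense would iterate such steps against a learned deep surrogate and train a generator to produce the perturbation in one forward pass; the single sign-gradient step here only shows the direction of attack.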
Finding Directions in GAN's Latent Space for Neural Face Reenactment
Moreover, we show that by embedding real images in the GAN latent space, our method can be successfully used for the reenactment of real-world faces.
Thin-Plate Spline Motion Model for Image Animation
Firstly, we propose thin-plate spline motion estimation to produce a more flexible optical flow, which warps the feature maps of the source image to the feature domain of the driving image.
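The thin-plate spline motion model described above warps source features toward the driving image. The learned estimator itself is beyond a snippet, but the underlying TPS interpolation — fit a smooth 2-D deformation from matched control points, then evaluate it at query coordinates — can be sketched in plain NumPy (a minimal classical TPS solve, not the paper's network):

```python
import numpy as np

def tps_kernel(r2):
    # Radial basis U(r) = r^2 * log(r^2), with U(0) defined as 0.
    return np.where(r2 == 0, 0.0, r2 * np.log(np.maximum(r2, 1e-12)))

def fit_tps(src, dst):
    """Solve TPS coefficients mapping 2-D control points src -> dst."""
    n = src.shape[0]
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), src])      # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)               # (n+3, 2): n kernel weights + 3 affine rows

def tps_transform(coef, src, pts):
    """Apply the fitted TPS deformation to query points pts."""
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = tps_kernel(d2)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ coef[:-3] + P @ coef[-3:]
```

Evaluating the fitted transform on a dense pixel grid yields exactly the kind of flexible optical-flow field the snippet refers to; the paper learns the control-point motion rather than hand-specifying it.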
3DFaceShop: Explicitly Controllable 3D-Aware Portrait Generation
In contrast to the traditional avatar creation pipeline which is a costly process, contemporary generative approaches directly learn the data distribution from photographs.
StyleMask: Disentangling the Style Space of StyleGAN2 for Neural Face Reenactment
In this paper we address the problem of neural face reenactment: given a pair of source and target facial images, we need to transfer the target's pose (defined as the head pose and facial expressions) to the source image, while preserving the source's identity characteristics (e.g., facial shape, hairstyle), even in the challenging case where the source and target faces belong to different identities.
Audio-Visual Face Reenactment
The identity-aware generator takes the source image and the warped motion features as input to generate a high-quality output with fine-grained details.
Compressing Video Calls using Synthetic Talking Heads
We use a state-of-the-art face reenactment network to detect key points in the non-pivot frames and transmit them to the receiver.
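The bandwidth win of the scheme above — send occasional full pivot frames, and only keypoints for the frames in between — comes from simple accounting. A toy sketch with assumed numbers (hypothetical frame resolution and keypoint count, not the paper's figures):

```python
# Toy byte accounting for keypoint-based video call compression.
FRAME_BYTES = 256 * 256 * 3     # one raw RGB pivot frame (assumed resolution)
KEYPOINT_BYTES = 10 * 2 * 4     # 10 (x, y) keypoints as float32 (assumed count)

def bytes_sent(n_frames, pivot_every):
    """Bytes transmitted when every pivot_every-th frame is a full image
    and all other frames are reduced to keypoints."""
    total = 0
    for i in range(n_frames):
        total += FRAME_BYTES if i % pivot_every == 0 else KEYPOINT_BYTES
    return total

naive = 30 * FRAME_BYTES                       # one second of raw frames at 30 fps
compressed = bytes_sent(30, pivot_every=30)    # one pivot frame + 29 keypoint frames
```

The receiver then runs the reenactment network to re-synthesize the non-pivot frames from the last pivot frame and the received keypoints; the real system would additionally compress the pivot frames with a standard video codec.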
StyleAvatar: Real-time Photo-realistic Portrait Avatar from a Single Video
Results and experiments demonstrate the superiority of our method in terms of image quality, full portrait video generation, and real-time re-animation compared to existing facial reenactment methods.