Face Reenactment
24 papers with code • 0 benchmarks • 1 dataset
Face Reenactment is an emerging conditional face synthesis task that aims to fulfill two goals simultaneously: 1) transferring the source face's pose and expression to the target face, while 2) preserving the appearance and identity of the target face.
Source: One-shot Face Reenactment
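Many reenactment pipelines begin by aligning facial landmarks between the driving and target faces before any neural synthesis happens. As a minimal, self-contained sketch of that geometric step (not any specific paper's method), the snippet below estimates a least-squares similarity transform (scale, rotation, translation) between two toy landmark sets using Umeyama's method; the landmark coordinates are made up for illustration.

```python
import numpy as np

def estimate_similarity(src_pts, dst_pts):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping src_pts onto dst_pts, via Umeyama's closed-form solution."""
    src_mean = src_pts.mean(axis=0)
    dst_mean = dst_pts.mean(axis=0)
    src_c = src_pts - src_mean              # centered source landmarks
    dst_c = dst_pts - dst_mean              # centered destination landmarks
    cov = dst_c.T @ src_c / len(src_pts)    # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    # Reflection guard: force det(R) = +1 so we get a proper rotation.
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    return scale, R, t

# Toy example: the "driving" pose is the source rotated 30° and translated.
theta = np.deg2rad(30)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
dst = src @ rot.T + np.array([2.0, 3.0])

s, R, t = estimate_similarity(src, dst)
aligned = s * src @ R.T + t   # source landmarks mapped into the target frame
```

In a real system the landmarks would come from a face detector, and the recovered transform would drive warping or conditioning of a generator rather than being used directly.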
Benchmarks
These leaderboards are used to track progress in Face Reenactment.
Latest papers
AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation
In this study, we propose AniPortrait, a novel framework for generating high-quality animation driven by audio and a reference portrait image.
Deepfake Generation and Detection: A Benchmark and Survey
In addition to the advancements in deepfake generation, corresponding detection technologies need to continuously evolve to regulate the potential misuse of deepfakes, such as for privacy invasion and phishing attacks.
BakedAvatar: Baking Neural Fields for Real-Time Head Avatar Synthesis
Synthesizing photorealistic 4D human head avatars from videos is essential for VR/AR, telepresence, and video game applications.
HyperReenact: One-Shot Reenactment via Jointly Learning to Refine and Retarget Faces
In this paper, we present our method for neural face reenactment, called HyperReenact, that aims to generate realistic talking head images of a source identity, driven by a target facial pose.
ReliableSwap: Boosting General Face Swapping Via Reliable Supervision
To avoid potential artifacts and push the network's output distribution close to that of natural images, we reverse the usual setup during face-swapping training: synthetic images serve as input while real faces serve as reliable supervision.
StyleAvatar: Real-time Photo-realistic Portrait Avatar from a Single Video
Results and experiments demonstrate the superiority of our method in terms of image quality, full portrait video generation, and real-time re-animation compared to existing facial reenactment methods.
Compressing Video Calls using Synthetic Talking Heads
We use a state-of-the-art face reenactment network to detect key points in the non-pivot frames and transmit them to the receiver.
Audio-Visual Face Reenactment
The identity-aware generator takes the source image and the warped motion features as input to generate a high-quality output with fine-grained details.
StyleMask: Disentangling the Style Space of StyleGAN2 for Neural Face Reenactment
In this paper we address the problem of neural face reenactment: given a pair of source and target facial images, we transfer the target's pose (defined as the head pose and facial expressions) to the source image while preserving the source's identity characteristics (e.g., facial shape, hairstyle), even in the challenging case where the source and target faces belong to different identities.
3DFaceShop: Explicitly Controllable 3D-Aware Portrait Generation
In contrast to the traditional avatar creation pipeline which is a costly process, contemporary generative approaches directly learn the data distribution from photographs.