3D Face Animation

21 papers with code • 3 benchmarks • 6 datasets

Image: Cudeiro et al.

Most implemented papers

FaceFormer: Speech-Driven 3D Facial Animation with Transformers

EvelynFan/FaceFormer CVPR 2022

Speech-driven 3D facial animation is challenging due to the complex geometry of human faces and the limited availability of 3D audio-visual data.

3DFaceShop: Explicitly Controllable 3D-Aware Portrait Generation

junshutang/3DFaceShop 12 Sep 2022

In contrast to the traditional avatar creation pipeline which is a costly process, contemporary generative approaches directly learn the data distribution from photographs.

CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior

Doubiiu/CodeTalker CVPR 2023

In this paper, we propose to cast speech-driven facial animation as a code query task in a finite proxy space of the learned codebook, which effectively promotes the vividness of the generated motions by reducing the cross-modal mapping uncertainty.
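The core idea, casting animation as a lookup into a finite set of learned motion codes, can be sketched as a nearest-neighbor codebook query. The array shapes, codebook size, and function names below are illustrative assumptions, not CodeTalker's actual implementation:

```python
# Minimal sketch of a codebook "code query" step, the idea behind discrete
# motion priors such as CodeTalker. All names/shapes here are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned codebook: K discrete motion codes of dimension D.
K, D = 8, 4
codebook = rng.normal(size=(K, D))

def query_codes(features: np.ndarray) -> np.ndarray:
    """Map continuous features (T, D) to their nearest codebook entries.

    Restricting outputs to a finite proxy space reduces cross-modal mapping
    uncertainty: the model selects among learned motion primitives instead
    of regressing arbitrary continuous values.
    """
    # Pairwise squared distances between frames and codes: (T, K)
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = dists.argmin(axis=1)   # nearest code index per frame
    return codebook[indices]         # quantized motion features, (T, D)

# Example: quantize 5 frames of continuous audio-derived features.
frames = rng.normal(size=(5, D))
quantized = query_codes(frames)
```

In the paper's actual pipeline the codebook is learned (VQ-style) and the query is driven by speech features; this sketch only shows the quantization step itself.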

FaceXHuBERT: Text-less Speech-driven E(X)pressive 3D Facial Animation Synthesis Using Self-Supervised Speech Representation Learning

galib360/facexhubert 9 Mar 2023

This paper presents FaceXHuBERT, a text-less speech-driven 3D facial animation generation method that captures personalized and subtle cues in speech (e.g., identity, emotion, and hesitation).

MMFace4D: A Large-Scale Multi-Modal 4D Face Dataset for Audio-Driven 3D Face Animation

why986/VFA 17 Mar 2023

Building on MMFace4D, we construct a non-autoregressive framework for audio-driven 3D face animation.

Learning Landmarks Motion from Speech for Speaker-Agnostic 3D Talking Heads Generation

fedenoce/s2l-s2d 2 Jun 2023

This paper presents a novel approach for generating 3D talking heads from raw audio inputs.

SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking Faces

psyai-net/SelfTalk_release 19 Jun 2023

To enhance the visual accuracy of generated lip movements while reducing the dependence on labeled data, we propose SelfTalk, a novel framework that incorporates self-supervision in a cross-modal network system to learn 3D talking faces.

Speech-Driven 3D Face Animation with Composite and Regional Facial Movements

wuhaozhe/audio2face_mm2023 10 Aug 2023

This paper emphasizes the importance of considering both the composite and regional natures of facial movements in speech-driven 3D face animation.

FaceDiffuser: Speech-Driven 3D Facial Animation Synthesis Using Diffusion

uuembodiedsocialai/FaceDiffuser 20 Sep 2023

In addition, the majority of approaches focus on 3D vertex-based datasets, and methods compatible with existing facial animation pipelines for rigged characters are scarce.

FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models

shivangi-aneja/FaceTalk 13 Dec 2023

We propose a new latent diffusion model for this task, operating in the expression space of neural parametric head models, to synthesize audio-driven realistic head sequences.
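The mechanism, iteratively denoising a latent expression code conditioned on audio, can be illustrated with a toy DDPM-style reverse loop. Everything below (the latent dimension, noise schedule, and the stand-in denoiser) is an assumed, heavily simplified sketch, not FaceTalk's real model:

```python
# Toy sketch of DDPM-style reverse diffusion over a (hypothetical)
# expression-latent space, conditioned on an audio feature.
import numpy as np

rng = np.random.default_rng(1)

D = 16                                    # assumed expression-latent dimension
T_STEPS = 100
betas = np.linspace(1e-4, 0.02, T_STEPS)  # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(z_t, t, audio_feat):
    """Stand-in for the learned denoiser eps_theta(z_t, t, audio).

    A real model would be a neural network conditioned on audio; this
    placeholder just returns a deterministic combination for illustration.
    """
    return 0.1 * z_t + 0.01 * audio_feat

def reverse_step(z_t, t, audio_feat):
    """One reverse step: subtract the predicted noise, add schedule noise."""
    eps = predict_noise(z_t, t, audio_feat)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (z_t - coef * eps) / np.sqrt(alphas[t])
    if t > 0:
        mean = mean + np.sqrt(betas[t]) * rng.normal(size=z_t.shape)
    return mean

# Start from pure noise and denoise toward an expression latent,
# conditioned on one audio feature vector.
audio_feat = rng.normal(size=D)
z = rng.normal(size=D)
for t in reversed(range(T_STEPS)):
    z = reverse_step(z, t, audio_feat)
```

The resulting latent would then be decoded by the neural parametric head model into an animated head; that decoding step is outside this sketch.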