3D Face Animation
21 papers with code • 3 benchmarks • 6 datasets
Latest papers with no code
DiffusionTalker: Personalization and Acceleration for Speech-Driven 3D Face Diffuser
We propose DiffusionTalker, a diffusion-based method that uses contrastive learning to personalize 3D facial animation and knowledge distillation to accelerate 3D animation generation.
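The distillation idea in the snippet above can be sketched in a toy form: a "student" learns to match the output of a multi-step "teacher" sampler in a single step. This is a minimal illustration of the principle, not DiffusionTalker's actual architecture; the linear teacher/student and all names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "teacher": denoises a feature vector in two small steps.
# (Illustrative stand-in for a multi-step diffusion sampler.)
def teacher_denoise(x, steps=2):
    for _ in range(steps):
        x = 0.5 * x  # each step shrinks the noise by half
    return x

# Student: a single linear step x -> a * x, trained to reproduce the
# teacher's two-step output in one step (the distillation target).
x_batch = rng.normal(size=(256, 16))
targets = teacher_denoise(x_batch)  # teacher output after 2 steps

# Closed-form least squares for the scalar a minimizing ||a*x - target||^2.
a = np.sum(x_batch * targets) / np.sum(x_batch * x_batch)
print(round(a, 3))  # -> 0.25: the student folds two 0.5x steps into one
```

In a real diffusion distiller the student is a neural network trained with a similar regression loss against the teacher's multi-step trajectory; the least-squares scalar here only makes the step-compression idea concrete.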
DF-3DFace: One-to-Many Speech Synchronized 3D Face Animation with Diffusion
We contribute a new large-scale 3D facial mesh dataset, 3D-HDTF, to enable the synthesis of variations in the identities, poses, and facial motions of 3D face meshes.
Learning Audio-Driven Viseme Dynamics for 3D Face Animation
We show that the predicted viseme curves can be applied to different viseme-rigged characters to yield various personalized animations with realistic and natural facial motions.
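The viseme-curve idea above can be sketched as per-frame weights applied to a character's viseme blendshapes: any rig exposing the same viseme set can consume the same predicted curves. The linear blendshape model and all values below are illustrative assumptions, not the paper's rig.

```python
import numpy as np

# Neutral mesh of a toy character: 5 vertices in 3D.
neutral = np.zeros((5, 3))

# Viseme blendshapes: per-viseme vertex offsets from the neutral pose
# (here 2 visemes, e.g. a jaw-open and a lip-round shape; values are made up).
deltas = np.stack([
    np.tile([0.0, -0.1, 0.0], (5, 1)),   # jaw-open offsets
    np.tile([0.05, 0.0, 0.0], (5, 1)),   # lip-round offsets
])

# A predicted viseme curve: one weight per viseme per frame (frames x visemes).
curve = np.array([[0.0, 0.0],
                  [1.0, 0.0],
                  [0.5, 0.5]])

# Linear blendshape animation: vertices(t) = neutral + sum_k w_k(t) * delta_k
frames = neutral[None] + np.einsum('fk,kvd->fvd', curve, deltas)
print(frames.shape)  # -> (3, 5, 3): 3 animated frames of the 5-vertex mesh
```

Because the curves live in viseme space rather than vertex space, swapping in a different character only means swapping the `deltas` array.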
Sparse to Dense Dynamic 3D Facial Expression Generation
This allows us to learn how the motion of a sparse set of landmarks influences the deformation of the overall face surface, independently from the identity.
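One common way to propagate sparse landmark motion to a dense surface, which the snippet above alludes to, is scattered-data interpolation; the Gaussian-RBF scheme below is a generic sketch of that idea, not the paper's learned model, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, sigma=0.3):
    """Gaussian kernel matrix between two 2D point sets (n x 2, m x 2)."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Sparse landmarks on a 2D "face" and their per-landmark displacements.
landmarks = rng.uniform(0, 1, size=(8, 2))
disp = rng.normal(scale=0.05, size=(8, 2))

# Solve for RBF weights so the interpolant reproduces the landmark motion.
K = rbf(landmarks, landmarks)
w = np.linalg.solve(K, disp)

# Propagate the motion to a dense set of surface points.
dense = rng.uniform(0, 1, size=(500, 2))
dense_disp = rbf(dense, landmarks) @ w

# The interpolant matches the sparse motion exactly at the landmarks.
print(np.allclose(K @ w, disp))  # -> True
```

The paper learns this sparse-to-dense mapping from data so that it generalizes across identities; the fixed-kernel interpolation above only shows how few landmark displacements can drive a whole surface.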
Learning Speech-driven 3D Conversational Gestures from Video
We propose the first approach to automatically and jointly synthesize synchronous 3D conversational body and hand gestures together with 3D face and head animations of a virtual character from speech input.