Few-Shot Adversarial Learning of Realistic Neural Talking Head Models

Several recent works have shown how highly realistic human head images can be obtained by training convolutional neural networks to generate them. In order to create a personalized talking head model, these works require training on a large dataset of images of a single person. However, in many practical scenarios, such personalized talking head models need to be learned from a few image views of a person, potentially even a single image. Here, we present a system with such few-shot capability. It performs lengthy meta-learning on a large dataset of videos, and after that is able to frame few- and one-shot learning of neural talking head models of previously unseen people as adversarial training problems with high capacity generators and discriminators. Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters. We show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings.
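As a rough illustration of the fine-tuning stage described in the abstract, the sketch below adapts copies of meta-learned generator and discriminator networks to a new person from a handful of example frames. The network interfaces, the hinge adversarial loss, the L1 reconstruction term, the optimizer settings, and the `finetune_few_shot` helper are all illustrative assumptions for the sketch, not the paper's exact architecture or loss functions.

```python
# Minimal sketch of a few-shot adversarial fine-tuning stage, assuming
# meta-learned generator/discriminator weights are available. The modules,
# losses, and hyperparameters here are placeholders, not the paper's method.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def finetune_few_shot(meta_generator: nn.Module,
                      meta_discriminator: nn.Module,
                      frames: torch.Tensor,      # K target frames, (K, 3, H, W)
                      landmarks: torch.Tensor,   # matching landmark images, (K, 3, H, W)
                      steps: int = 40,
                      lr: float = 5e-5) -> nn.Module:
    """Adapt meta-learned G and D to a new person from K example frames."""
    # Person-specific initialization: start from the meta-learned weights.
    G = copy.deepcopy(meta_generator)
    D = copy.deepcopy(meta_discriminator)
    opt_g = torch.optim.Adam(G.parameters(), lr=lr)
    opt_d = torch.optim.Adam(D.parameters(), lr=lr)

    for _ in range(steps):
        # Discriminator update: real frames vs. generated frames (hinge loss).
        fake = G(landmarks).detach()
        loss_d = (F.relu(1.0 - D(frames, landmarks)).mean()
                  + F.relu(1.0 + D(fake, landmarks)).mean())
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()

        # Generator update: adversarial term plus a pixel reconstruction term.
        fake = G(landmarks)
        loss_g = -D(fake, landmarks).mean() + F.l1_loss(fake, frames)
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

    return G  # personalized talking-head generator
```

Because both networks start from meta-learned, person-agnostic weights, only a few dozen gradient steps on the K frames are needed in this kind of setup, which is what makes one- and few-shot adaptation practical.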

ICCV 2019 · PDF | Abstract
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Talking Head Generation | VoxCeleb1 - 1-shot learning | Few-shot Adversarial Model | FID | 43.0 | #1 |
| Talking Head Generation | VoxCeleb1 - 8-shot learning | Few-shot Adversarial Model | FID | 38.0 | #1 |
| Talking Head Generation | VoxCeleb1 - 32-shot learning | Few-shot Adversarial Model | FID | 29.5 | #1 |
| Talking Head Generation | VoxCeleb2 - 1-shot learning | Few-shot Adversarial Model | FID | 48.5 | #2 |
| Talking Head Generation | VoxCeleb2 - 8-shot learning | Few-shot Adversarial Model | FID | 42.2 | #2 |
| Talking Head Generation | VoxCeleb2 - 32-shot learning | Few-shot Adversarial Model | FID | 30.6 | #1 |
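The FID (Fréchet Inception Distance) values above compare the feature statistics of generated and real frames; lower is better. The sketch below shows how FID is typically computed from pre-extracted Inception activations. The `fid_from_features` helper and the assumption that features are already available as (N, 2048) arrays are illustrative; this is not the benchmark's evaluation code.

```python
# Illustrative FID computation from pre-extracted Inception features,
# assuming `real_feats` and `fake_feats` are (N, 2048) activation arrays.
import numpy as np
from scipy import linalg

def fid_from_features(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Frechet Inception Distance between two sets of feature vectors."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)

    # Matrix square root of the covariance product; small imaginary parts
    # arising from numerical error are discarded.
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```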

Methods


No methods listed for this paper.