Modality Dropout for Improved Performance-driven Talking Faces

27 May 2020 · Ahmed Hussen Abdelaziz, Barry-John Theobald, Paul Dixon, Reinhard Knothe, Nicholas Apostoloff, Sachin Kajareker

We describe our novel deep learning approach for driving animated faces using both acoustic and visual information. In particular, speech-related facial movements are generated using audiovisual information, and non-speech facial movements are generated using only visual information...
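The abstract gives no implementation details, but the core idea named in the title, modality dropout, can be sketched as follows: during training, the audio stream is randomly zeroed so the network cannot rely on it alone and must learn to drive non-speech facial movements from video. The sketch below is a hypothetical PyTorch illustration under that reading; the module name, feature dimensions, and dropout probability are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ModalityDropoutFusion(nn.Module):
    """Fuse audio and visual features, randomly dropping the audio
    stream during training (hypothetical sketch, not the paper's model)."""

    def __init__(self, audio_dim=128, visual_dim=256, out_dim=256,
                 p_drop_audio=0.5):
        super().__init__()
        self.p_drop_audio = p_drop_audio  # assumed dropout probability
        self.proj = nn.Linear(audio_dim + visual_dim, out_dim)

    def forward(self, audio_feat, visual_feat):
        # audio_feat: (batch, audio_dim), visual_feat: (batch, visual_dim)
        if self.training:
            # Per-sample Bernoulli mask: 1 keeps the audio features,
            # 0 zeroes them so that sample is effectively video-only.
            keep = (torch.rand(audio_feat.size(0), 1,
                               device=audio_feat.device)
                    > self.p_drop_audio).float()
            audio_feat = audio_feat * keep
        fused = torch.cat([audio_feat, visual_feat], dim=-1)
        return self.proj(fused)
```

Masking per sample rather than per batch means every minibatch mixes audiovisual and video-only examples, which is one plausible way to make the model robust to a missing audio modality at inference time.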
