MEAD: A Large-scale Audio-visual Dataset for Emotional Talking-face Generation

ECCV 2020  ·  Kaisiyuan Wang, Qianyi Wu, Linsen Song, Zhuoqian Yang, Wayne Wu, Chen Qian, Ran He, Yu Qiao, Chen Change Loy

The synthesis of natural emotional reactions is an essential criterion in vivid talking-face video generation. This criterion is nevertheless seldom taken into consideration in previous works due to the absence of a large-scale, high-quality emotional audio-visual dataset. To address this issue, we build the Multi-view Emotional Audio-visual Dataset (MEAD), a talking-face video corpus featuring 60 actors and actresses talking with 8 different emotions at 3 different intensity levels. High-quality audio-visual clips are captured at 7 different view angles in a strictly-controlled environment. Together with the dataset, we release an emotional talking-face generation baseline that enables the manipulation of both emotion and its intensity. Our dataset will be made public and could benefit a number of different research fields, including conditional generation, cross-modal understanding and expression recognition.
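The abstract describes a structured condition space for each clip: 8 emotions, 3 intensity levels, and 7 camera views. A minimal sketch of enumerating that label space, assuming illustrative emotion and view names (the dataset's actual category names and file layout are not specified here):

```python
from itertools import product

# Illustrative label space matching the counts in the abstract
# (8 emotions x 3 intensities x 7 views). The specific names below
# are assumptions for the sketch, not MEAD's actual naming scheme.
EMOTIONS = ["neutral", "angry", "contempt", "disgusted",
            "fear", "happy", "sad", "surprised"]
INTENSITIES = [1, 2, 3]
VIEWS = ["front", "left_30", "left_60", "right_30",
         "right_60", "up", "down"]

def clip_conditions():
    """Enumerate every (emotion, intensity, view) condition a clip can carry."""
    return list(product(EMOTIONS, INTENSITIES, VIEWS))

conditions = clip_conditions()
print(len(conditions))  # 8 * 3 * 7 = 168 conditions per speaker
```

Such an enumeration is what a conditional generation baseline would iterate over when manipulating emotion and intensity as separate control signals.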

