Data standardization for robust lip sync

13 Feb 2022 · Chun Wang

Lip sync is a fundamental audio-visual task. However, existing lip sync methods fall short of being robust to the great diversity of videos taken in the wild, much of which arises from compound distracting factors that degrade their performance. To address this issue, this paper proposes a data standardization pipeline that produces standardized, expressive images, preserving the lip motion information of the input while reducing the effects of compound distracting factors. Building on recent advances in 3D face reconstruction, we first create a model that consistently disentangles expressions, in which lip motion information is embedded. Then, to keep compound distracting factors from affecting the synthesized images, we synthesize images using only the expressions extracted from the input, intentionally fixing all other attributes at predefined values independent of the input. Trained on these synthesized images, existing lip sync methods become more data-efficient and robust, and achieve competitive performance on the active speaker detection task.
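To make the described standardization step concrete, the sketch below shows one way such a pipeline could be wired up: a reconstructor extracts disentangled 3DMM-style coefficients from a frame, only the expression coefficients are kept, and a renderer re-synthesizes the face with every other attribute held at a predefined value. The function names, coefficient layout, and dimensions are illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

# Illustrative coefficient layout for a 3DMM-style face model.
# The attribute names and dimensions below are assumptions for this sketch,
# not values specified by the paper.
PREDEFINED = {
    "identity": np.zeros(80),   # fixed (e.g. mean-face) identity shape
    "texture":  np.zeros(80),   # fixed texture
    "pose":     np.zeros(6),    # frontal pose, no rotation/translation
    "lighting": np.zeros(27),   # fixed spherical-harmonics lighting
}

def standardize_frame(frame, reconstructor, renderer):
    """Standardize a single video frame.

    `reconstructor` maps an image to per-attribute coefficients and
    `renderer` maps coefficients back to an image; both are assumed
    external components (e.g. a 3D face reconstruction network and a
    face renderer) passed in by the caller.
    """
    coeffs = reconstructor(frame)       # disentangled per-attribute coefficients
    expression = coeffs["expression"]   # keep only the lip-motion-bearing expression
    return renderer(
        identity=PREDEFINED["identity"],    # all distracting attributes fixed ...
        texture=PREDEFINED["texture"],
        pose=PREDEFINED["pose"],
        lighting=PREDEFINED["lighting"],
        expression=expression,              # ... except the input's expression
    )

def standardize_video(frames, reconstructor, renderer):
    """Apply per-frame standardization to a clip before feeding a lip sync model."""
    return [standardize_frame(f, reconstructor, renderer) for f in frames]
```

A downstream lip sync or active speaker detection model would then consume `standardize_video(frames, ...)` instead of the raw frames, so that only expression-driven lip motion varies across inputs.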
