Visual Speech Recognition for Multiple Languages in the Wild

26 Feb 2022 · Pingchuan Ma, Stavros Petridis, Maja Pantic

Visual speech recognition (VSR) aims to recognize the content of speech from lip movements, without relying on the audio stream. Advances in deep learning and the availability of large audio-visual datasets have led to VSR models that are far more accurate and robust than ever before. However, these advances are usually attributed to larger training sets rather than to model design. Here we demonstrate that designing better models is as important as using larger training sets. We propose adding prediction-based auxiliary tasks to a VSR model, and we highlight the importance of hyperparameter optimization and appropriate data augmentation. We show that such a model works across different languages and outperforms all previous methods trained on publicly available datasets by a large margin. It even outperforms models trained on non-publicly available datasets containing up to 21 times more data. We further show that using additional training data, even from other languages or with automatically generated transcriptions, yields further improvements.
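The abstract only names the idea of prediction-based auxiliary tasks; as a rough illustration of what that can look like in practice, the sketch below pairs a recognition head trained with CTC with an auxiliary head that regresses pre-computed audio features from the visual encoder's output. This is a simplified assumption on my part, not the paper's released implementation (the reported model is a hybrid CTC/Attention system, per the results below): the front-end, GRU encoder, layer sizes, choice of audio target, and the `aux_weight` are all illustrative.

```python
# Hypothetical sketch (not the paper's code): a visual encoder with a CTC
# recognition head plus a prediction-based auxiliary head that regresses
# pre-computed audio features. All names and dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VSRWithAuxiliaryTask(nn.Module):
    def __init__(self, vocab_size, feat_dim=256, audio_feat_dim=80):
        super().__init__()
        # 3D-conv front-end over grayscale mouth-region crops (B, 1, T, H, W)
        self.frontend = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.BatchNorm3d(64),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # collapse spatial dims, keep time
        )
        # Temporal encoder (a stand-in for the paper's stronger back-end)
        self.encoder = nn.GRU(64, feat_dim, num_layers=2,
                              batch_first=True, bidirectional=True)
        # Main head: per-frame token logits trained with CTC
        self.ctc_head = nn.Linear(2 * feat_dim, vocab_size)
        # Auxiliary head: predict audio features (e.g. log-mels) per frame
        self.aux_head = nn.Linear(2 * feat_dim, audio_feat_dim)

    def forward(self, video):                          # video: (B, 1, T, H, W)
        x = self.frontend(video)                       # (B, 64, T, 1, 1)
        x = x.squeeze(-1).squeeze(-1).transpose(1, 2)  # (B, T, 64)
        h, _ = self.encoder(x)                         # (B, T, 2*feat_dim)
        return self.ctc_head(h), self.aux_head(h)

def combined_loss(ctc_logits, aux_pred, targets, in_lens, tgt_lens,
                  audio_feats, aux_weight=0.1):
    # CTC loss expects (T, B, V) log-probabilities
    log_probs = F.log_softmax(ctc_logits, dim=-1).transpose(0, 1)
    ctc = F.ctc_loss(log_probs, targets, in_lens, tgt_lens,
                     blank=0, zero_infinity=True)
    # Auxiliary regression toward frame-aligned audio features
    aux = F.l1_loss(aux_pred, audio_feats)
    return ctc + aux_weight * aux
```

The auxiliary head is discarded at inference time; it only shapes the visual representation during training, which is the general motivation behind prediction-based auxiliary tasks.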


Results from the Paper


Ranked #1 on Lipreading on GRID corpus (mixed-speech), using extra training data.

| Task       | Dataset                     | Model                                | Metric                | Value | Global Rank |
|------------|-----------------------------|--------------------------------------|-----------------------|-------|-------------|
| Lipreading | CMLR                        | CTC/Attention                        | CER                   | 9.1%  | #1          |
| Lipreading | GRID corpus (mixed-speech)  | CTC/Attention                        | Word Error Rate (WER) | 1.2%  | #1          |
| Lipreading | LRS2                        | CTC/Attention                        | Word Error Rate (WER) | 32.9% | #6          |
| Lipreading | LRS2                        | CTC/Attention (LRW+LRS2/3+AVSpeech)  | Word Error Rate (WER) | 25.5% | #4          |
| Lipreading | LRS3-TED                    | CTC/Attention (LRW+LRS2/3+AVSpeech)  | Word Error Rate (WER) | 31.5% | #7          |
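The WER and CER figures above are the standard edit-distance-based error rates. As a quick reference (my own illustration, not code from the paper), WER can be computed as follows; CER is obtained the same way over characters instead of words:

```python
# Minimal word error rate (WER): Levenshtein distance over words,
# normalized by the reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: one substitution out of six words -> WER of about 0.167 (16.7%)
print(wer("set blue at c nine now", "set blue at c five now"))
```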
