Auto-AVSR: Audio-Visual Speech Recognition with Automatic Labels

Audio-visual speech recognition has received a lot of attention due to its robustness against acoustic noise. Recently, the performance of automatic, visual, and audio-visual speech recognition (ASR, VSR, and AV-ASR, respectively) has been substantially improved, mainly due to the use of larger models and training sets. However, accurate labelling of datasets is time-consuming and expensive. Hence, in this work, we investigate the use of automatically-generated transcriptions of unlabelled datasets to increase the training set size. For this purpose, we use publicly-available pre-trained ASR models to automatically transcribe unlabelled datasets such as AVSpeech and VoxCeleb2. Then, we train ASR, VSR and AV-ASR models on the augmented training set, which consists of the LRS2 and LRS3 datasets as well as the additional automatically-transcribed data. We demonstrate that increasing the size of the training set, a recent trend in the literature, leads to a reduced word error rate (WER) despite using noisy transcriptions. The proposed model achieves new state-of-the-art performance on AV-ASR on LRS2 and LRS3. In particular, it achieves a WER of 0.9% on LRS3, a relative improvement of 30% over the current state-of-the-art approach, and outperforms methods that have been trained on non-publicly available datasets with 26 times more training data.
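
The automatic-labelling step described above can be reproduced in outline with any publicly-available pre-trained ASR model. The sketch below uses OpenAI's Whisper purely as an illustrative choice; the directory layout, file names, and the tab-separated output format are assumptions for the example, not the authors' exact pipeline.

```python
# Illustrative sketch: transcribe unlabelled audio clips with a publicly-available
# pre-trained ASR model to produce automatic labels for training.
# Whisper is used only as an example model; paths below are hypothetical.
from pathlib import Path

import whisper  # pip install openai-whisper


def auto_label(audio_dir: str, out_path: str, model_name: str = "medium") -> None:
    """Write one `<relative_path>\t<transcript>` line per audio clip."""
    model = whisper.load_model(model_name)
    root = Path(audio_dir)
    with open(out_path, "w", encoding="utf-8") as f:
        for wav in sorted(root.rglob("*.wav")):
            text = model.transcribe(str(wav))["text"].strip()
            f.write(f"{wav.relative_to(root)}\t{text}\n")


if __name__ == "__main__":
    # Hypothetical location of an unlabelled corpus such as VoxCeleb2.
    auto_label("data/voxceleb2/audio", "data/voxceleb2/auto_labels.tsv")
```

The resulting pseudo-transcripts can then be combined with the human-labelled LRS2 and LRS3 transcripts to form the augmented training set used for the ASR, VSR, and AV-ASR models.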

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Audio-Visual Speech Recognition | LRS2 | CTC/Attention | Test WER | 1.5% | #1 |
| Automatic Speech Recognition (ASR) | LRS2 | CTC/Attention | Test WER | 1.5% | #1 |
| Lipreading | LRS2 | CTC/Attention | Word Error Rate (WER) | 14.6% | #1 |
| Lipreading | LRS3-TED | CTC/Attention | Word Error Rate (WER) | 19.1% | #1 |
| Visual Speech Recognition | LRS3-TED | CTC/Attention | Word Error Rate (WER) | 19.1% | #1 |
| Audio-Visual Speech Recognition | LRS3-TED | CTC/Attention | Word Error Rate (WER) | 0.9% | #1 |
| Automatic Speech Recognition (ASR) | LRS3-TED | CTC/Attention | Word Error Rate (WER) | 1.0% | #1 |
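
The model listed in the results above is a hybrid CTC/attention encoder-decoder. A common way to train such a model is to interpolate the CTC loss computed on the encoder output with the cross-entropy loss of the attention decoder. The PyTorch sketch below shows that joint objective; the tensor shapes, padding convention, and CTC weight are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of a hybrid CTC/attention training objective:
# L = w * L_ctc + (1 - w) * L_attention  (w is an assumed hyper-parameter).
import torch.nn.functional as F


def joint_ctc_attention_loss(
    ctc_log_probs,      # (T, N, vocab) log-probabilities from the encoder's CTC head
    att_logits,         # (N, U, vocab) logits from the attention decoder
    targets,            # (N, U) padded target token ids
    input_lengths,      # (N,) encoder output lengths
    target_lengths,     # (N,) target lengths
    ctc_weight: float = 0.1,  # assumed interpolation weight
    pad_id: int = 0,          # assumed padding token id
):
    ctc_loss = F.ctc_loss(
        ctc_log_probs, targets, input_lengths, target_lengths,
        blank=0, zero_infinity=True,
    )
    att_loss = F.cross_entropy(
        att_logits.transpose(1, 2), targets, ignore_index=pad_id,
    )
    return ctc_weight * ctc_loss + (1.0 - ctc_weight) * att_loss
```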
