Improved Noisy Student Training for Automatic Speech Recognition

19 May 2020 · Daniel S. Park, Yu Zhang, Ye Jia, Wei Han, Chung-Cheng Chiu, Bo Li, Yonghui Wu, Quoc V. Le

Recently, a semi-supervised learning method known as "noisy student training" has been shown to significantly improve the image classification performance of deep networks. Noisy student training is an iterative self-training method that leverages augmentation to improve network performance. In this work, we adapt and improve noisy student training for automatic speech recognition, employing (adaptive) SpecAugment as the augmentation method. We find effective methods to filter, balance, and augment the data generated between self-training iterations. By doing so, we obtain word error rates (WERs) of 4.2%/8.6% on the clean/noisy LibriSpeech test sets using only the clean 100h subset of LibriSpeech as the supervised set and the remaining 860h as the unlabeled set. Furthermore, we achieve WERs of 1.7%/3.4% on the clean/noisy LibriSpeech test sets by using the unlab-60k subset of LibriLight as the unlabeled set for LibriSpeech 960h. We thus improve upon the previous state-of-the-art clean/noisy test WERs achieved on LibriSpeech 100h (4.74%/12.20%) and LibriSpeech 960h (1.9%/4.1%).
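The loop the abstract describes (a teacher transcribes the unlabeled audio, the generated data is filtered and balanced, and a student is trained on the combined data with SpecAugment, then becomes the next teacher) can be summarized in a short sketch. The sketch below is illustrative only: the `train` and `transcribe` callables, the confidence-threshold filter, and the default values of `generations` and `min_confidence` are assumptions for exposition, not the paper's actual pipeline.

```python
from typing import Any, Callable, List, Tuple


def noisy_student_training(
    labeled: List[Tuple[Any, str]],  # (audio, transcript) pairs
    unlabeled: List[Any],            # raw audio only
    # Trains an ASR model; (adaptive) SpecAugment is assumed to be
    # applied inside this callable.
    train: Callable[[List[Tuple[Any, str]]], Any],
    # Maps (model, audio) -> (hypothesis, confidence score).
    transcribe: Callable[[Any, Any], Tuple[str, float]],
    generations: int = 4,            # illustrative value, not from the paper
    min_confidence: float = 0.9,     # illustrative filtering threshold
) -> Any:
    """Iterative self-training: each generation's student becomes the next teacher."""
    # Generation 0: train the initial teacher on the supervised set only.
    model = train(labeled)
    for _ in range(generations):
        # Teacher transcribes the unlabeled audio to produce pseudo-labels.
        pseudo = []
        for audio in unlabeled:
            hypothesis, confidence = transcribe(model, audio)
            if confidence >= min_confidence:  # filter low-confidence transcripts
                pseudo.append((audio, hypothesis))
        # The paper also balances the generated data between iterations
        # (e.g. by utterance-length distribution); omitted here for brevity.
        # The student trains on supervised plus pseudo-labeled data and
        # becomes the teacher for the next generation.
        model = train(labeled + pseudo)
    return model
```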

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Speech Recognition | LibriSpeech test-clean | ContextNet + SpecAugment-based Noisy Student Training with Libri-Light | Word Error Rate (WER) | 1.7 | #5 |
| Speech Recognition | LibriSpeech test-other | ContextNet + SpecAugment-based Noisy Student Training with Libri-Light | Word Error Rate (WER) | 3.4 | #8 |
