Robust Speech Recognition via Large-Scale Weak Supervision

We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.
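The released inference code makes it easy to try this zero-shot behavior directly. The sketch below is a minimal example, assuming the open-source `openai-whisper` Python package and ffmpeg are installed and that `audio.mp3` is a placeholder path for a local recording; it loads the multilingual Large v2 checkpoint and transcribes speech without any fine-tuning.

```python
# A minimal sketch of zero-shot transcription with the released Whisper models.
# Assumes the open-source `openai-whisper` package (pip install openai-whisper)
# and ffmpeg are available; "audio.mp3" is a placeholder path.
import whisper

# Load the multilingual Large v2 checkpoint used in the results below.
model = whisper.load_model("large-v2")

# Transcribe without fine-tuning; the language is detected automatically
# from the first 30 seconds of audio unless specified explicitly.
result = model.transcribe("audio.mp3")
print(result["language"])  # detected language code, e.g. "it"
print(result["text"])      # the predicted transcript

# The same checkpoint also covers the multitask setting, e.g. speech
# translation into English, by switching the task.
translated = model.transcribe("audio.mp3", task="translate")
print(translated["text"])
```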

Preprint, 2022

Results from the Paper


Ranked #1 on Speech Recognition on Common Voice Italian (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
| --- | --- | --- | --- | --- | --- | --- |
| Speech Recognition | Common Voice English | Whisper (Large v2) | Word Error Rate (WER) | 9.4% | #2 | Yes |
| Speech Recognition | Common Voice French | Whisper (Large v2) | Test WER | 13.9% | #8 | Yes |
| Speech Recognition | Common Voice German | Whisper (Large v2) | Test WER | 6.4% | #7 | Yes |
| Speech Recognition | Common Voice Italian | Whisper (Large v2) | Test WER | 7.1% | #1 | Yes |
| Speech Recognition | Common Voice Spanish | Whisper (Large v2) | Test WER | 5.6% | #2 | Yes |
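All metrics above are word error rates (WER): the minimum number of word substitutions, deletions, and insertions needed to turn the hypothesis into the reference transcript, divided by the number of reference words. The sketch below is a standard edit-distance implementation of that metric for illustration only; it is not code from the paper or its evaluation suite.

```python
# A minimal sketch of word error rate (WER), the metric reported in the table.
# Standard Levenshtein distance over words, not the paper's evaluation code.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: one substitution over four reference words -> WER = 0.25 (25%).
print(wer("the cat sat on", "the cat sat in"))
```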

Methods


No methods listed for this paper.