Jasper: An End-to-End Convolutional Neural Acoustic Model

In this paper, we report state-of-the-art results on LibriSpeech among end-to-end speech recognition models without any external training data. Our model, Jasper, uses only 1D convolutions, batch normalization, ReLU, dropout, and residual connections. To improve training, we further introduce a new layer-wise optimizer called NovoGrad. Through experiments, we demonstrate that the proposed deep architecture performs as well as or better than more complex choices. Our deepest Jasper variant uses 54 convolutional layers. With this architecture, on LibriSpeech test-clean we achieve 2.95% WER using a beam-search decoder with an external neural language model, and 3.86% WER with a greedy decoder. We also report competitive results on the Wall Street Journal and the Hub5'00 conversational evaluation datasets.
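
To make the architecture concrete: each Jasper sub-block chains the four operations named in the abstract, with an optional residual input joining after batch norm and before the activation. Below is a minimal PyTorch sketch of one such sub-block; the class name, channel sizes, and kernel width are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of one Jasper-style sub-block:
# Conv1d -> BatchNorm -> ReLU -> Dropout, with an optional residual
# added after batch norm, before the activation. Class name and
# default hyperparameters are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn

class JasperSubBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, dropout=0.2):
        super().__init__()
        # "same" padding for odd kernel widths keeps the time axis length
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)
        self.bn = nn.BatchNorm1d(out_ch)
        self.act = nn.ReLU()
        self.drop = nn.Dropout(dropout)

    def forward(self, x, residual=None):
        x = self.bn(self.conv(x))
        if residual is not None:
            x = x + residual  # residual joins before activation and dropout
        return self.drop(self.act(x))

# Usage: shapes are (batch, channels, time)
block = JasperSubBlock(in_ch=256, out_ch=256, kernel_size=11)
x = torch.randn(8, 256, 100)
y = block(x, residual=x)  # residual must match the output shape
```

In the dense-residual ("DR") variants reported below, each block's output is additionally connected to the inputs of all following blocks, which is what the `residual` argument would carry.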
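
NovoGrad's distinguishing feature is that its second-moment estimate is a single scalar per layer, computed from the layer's gradient norm, rather than a per-weight statistic as in Adam; the gradient is normalized by it before the momentum update, with decoupled weight decay. The NumPy sketch below follows that update rule as described in the NovoGrad paper; the function and variable names, default hyperparameters, and state initialization are illustrative assumptions.

```python
# A minimal NumPy sketch of a NovoGrad-style layer-wise update.
# Assumptions (not verified against the reference implementation):
# the caller keeps per-layer state (m, v) and seeds v with the first
# step's squared gradient norm.
import numpy as np

def novograd_step(w, g, m, v, lr=0.01, beta1=0.95, beta2=0.98,
                  eps=1e-8, weight_decay=0.0):
    """One update for a single layer: weights w, gradient g."""
    v = beta2 * v + (1.0 - beta2) * float(np.sum(g * g))  # scalar 2nd moment per layer
    scaled = g / (np.sqrt(v) + eps) + weight_decay * w    # normalized grad + decoupled decay
    m = beta1 * m + scaled                                # per-weight 1st moment
    w = w - lr * m
    return w, m, v
```

Because `v` is one scalar per layer, the optimizer's memory footprint stays close to that of SGD with momentum, roughly half of Adam's.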

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Speech Recognition | Hub5'00 SwitchBoard | Jasper DR 10x5 | WER (CallHome) | 16.2 | #3 |
| Speech Recognition | Hub5'00 SwitchBoard | Jasper DR 10x5 | WER (SwitchBoard) | 7.8 | #3 |
| Speech Recognition | LibriSpeech test-clean | Jasper DR 10x5 (+ Time/Freq Masks) | Word Error Rate (WER) | 2.84 | #37 |
| Speech Recognition | LibriSpeech test-clean | Jasper DR 10x5 | Word Error Rate (WER) | 2.95 | #38 |
| Speech Recognition | LibriSpeech test-other | Jasper DR 10x5 | Word Error Rate (WER) | 8.79 | #39 |
| Speech Recognition | LibriSpeech test-other | Jasper DR 10x5 (+ Time/Freq Masks) | Word Error Rate (WER) | 7.84 | #37 |
| Speech Recognition | WSJ eval92 | Jasper 10x3 | Word Error Rate (WER) | 6.9 | #16 |

Methods