Improved training of end-to-end attention models for speech recognition

8 May 2018 · Albert Zeyer, Kazuki Irie, Ralf Schlüter, Hermann Ney

Sequence-to-sequence attention-based models on subword units allow simple open-vocabulary end-to-end speech recognition. In this work, we show that such models can achieve competitive results on the Switchboard 300h and LibriSpeech 1000h tasks. In particular, we report the state-of-the-art word error rates (WER) of 3.54% on the dev-clean and 3.82% on the test-clean evaluation subsets of LibriSpeech. We introduce a new pretraining scheme by starting with a high time reduction factor and lowering it during training, which is crucial both for convergence and final performance. In some experiments, we also use an auxiliary CTC loss function to help the convergence. In addition, we train long short-term memory (LSTM) language models on subword units. By shallow fusion, we report up to 27% relative improvements in WER over the attention baseline without a language model.
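
As a concrete illustration of the shallow fusion mentioned above, the sketch below combines the attention model's log-probability with a weighted log-probability from an external subword LSTM language model when scoring candidate subword units during beam search. The weight value, function names, and toy scores are assumptions for illustration only, not values or code from the paper.

```python
import math

def shallow_fusion_score(log_p_am, log_p_lm, lm_weight=0.36):
    """Log-linear combination of attention-model and LM scores for one
    candidate subword unit. lm_weight is a tuned hyperparameter; the
    default here is illustrative, not taken from the paper."""
    return log_p_am + lm_weight * log_p_lm

# Toy example: re-rank two candidate subwords for the next beam-search step.
# Each entry holds (attention-model log-prob, LM log-prob).
candidates = {
    "_the": (math.log(0.40), math.log(0.30)),
    "_a":   (math.log(0.35), math.log(0.10)),
}
scores = {tok: shallow_fusion_score(am, lm) for tok, (am, lm) in candidates.items()}
best = max(scores, key=scores.get)
print(best, scores[best])
```

In practice this combined score replaces the pure attention-model score inside the beam search, so the LM influences the hypotheses without being part of the trained end-to-end model.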


Datasets

LibriSpeech, Switchboard

Results from the Paper


Task: Speech Recognition
Dataset: LibriSpeech test-clean
Model: Seq-to-seq attention
Metric: Word Error Rate (WER)
Metric Value: 3.82
Global Rank: #43
Uses Extra Training Data: Yes

Methods

Seq-to-seq attention, LSTM, CTC, shallow fusion