QuartzNet: Deep Automatic Speech Recognition with 1D Time-Channel Separable Convolutions

We propose a new end-to-end neural acoustic model for automatic speech recognition. The model is composed of multiple blocks with residual connections between them. Each block consists of one or more modules with 1D time-channel separable convolutional layers, batch normalization, and ReLU layers. It is trained with CTC loss. The proposed network achieves near state-of-the-art accuracy on LibriSpeech and Wall Street Journal, while having fewer parameters than all competing models. We also demonstrate that this model can be effectively fine-tuned on new datasets.
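The core building block described above is a 1D time-channel separable convolution: a depthwise convolution that filters each channel independently along time, followed by a pointwise (1x1) convolution that mixes channels. A minimal NumPy sketch of this factorization (a hypothetical illustration, not the authors' implementation; all function and parameter names here are invented for clarity):

```python
import numpy as np

def time_channel_separable_conv(x, depthwise_k, pointwise_w):
    """Hypothetical sketch of a 1D time-channel separable convolution.

    x:            (channels_in, time) input feature map
    depthwise_k:  (channels_in, kernel) one temporal filter per channel
    pointwise_w:  (channels_out, channels_in) 1x1 conv that mixes channels
    """
    c_in, t = x.shape
    k = depthwise_k.shape[1]
    pad = k // 2  # "same" padding for odd kernel sizes
    xp = np.pad(x, ((0, 0), (pad, pad)))
    # Depthwise step: each channel is convolved with its own K-tap filter
    # (kernel reversed so this computes cross-correlation, as in conv layers).
    dw = np.stack([np.convolve(xp[c], depthwise_k[c][::-1], mode="valid")
                   for c in range(c_in)])
    # Pointwise step: a 1x1 convolution across channels at every time step.
    return pointwise_w @ dw
```

The factorization is what keeps the parameter count low: a standard 1D convolution needs K x C_in x C_out weights, while the separable version needs only K x C_in + C_in x C_out, which for large kernels and channel counts is roughly a K-fold reduction.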


Results

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Speech Recognition | LibriSpeech test-clean | QuartzNet15x5 | Word Error Rate (WER) | 2.69 | #33 |
| Speech Recognition | LibriSpeech test-other | QuartzNet15x5 | Word Error Rate (WER) | 7.25 | #35 |
