A neural attention model for speech command recognition

This paper introduces a convolutional recurrent network with attention for speech command recognition. Attention models are powerful tools for improving performance on natural language, image captioning, and speech tasks. The proposed model establishes a new state-of-the-art accuracy of 94.1% on the Google Speech Commands dataset V1 and 94.5% on V2 (for the 20-command recognition task), while keeping a small footprint of only 202K trainable parameters. Results are compared with previous convolutional implementations on five tasks: 20-command recognition (V1 and V2), 12-command recognition (V1), 35-word recognition (V1), and left-right (V1). We present detailed performance results and demonstrate that the proposed attention mechanism not only improves accuracy but also allows inspecting which regions of the audio the network took into consideration when outputting a given category.
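The core idea can be sketched as soft-attention pooling over the per-frame outputs of a recurrent encoder: a query vector scores each time frame, the scores are softmax-normalized, and the frames are averaged with those weights. The minimal NumPy sketch below illustrates this mechanism only; the query choice (projecting from the middle frame) and dimensions are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def attention_pool(features, query_index=None):
    """Soft-attention pooling over time.

    features: (T, D) array of per-frame encoder outputs (e.g. from a
    recurrent layer). The query is taken from one frame (the middle
    frame by default -- an assumption for illustration), dot-product
    scores are softmax-normalized over time, and the attention-weighted
    sum of frames is returned together with the weights.
    """
    T, D = features.shape
    if query_index is None:
        query_index = T // 2
    query = features[query_index]             # (D,) query vector
    scores = features @ query / np.sqrt(D)    # (T,) scaled dot-product scores
    scores = scores - scores.max()            # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum()                  # softmax over time frames
    context = weights @ features              # (D,) weighted summary vector
    return context, weights

# Toy example: 8 frames of 4-dimensional features
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))
context, weights = attention_pool(feats)
```

Because the weights form a distribution over time frames, plotting them against the input audio shows which regions the model attended to for a given prediction, which is the inspection capability described in the abstract.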


Results


Task: Keyword Spotting · Dataset: Google Speech Commands · Model: Attention RNN

Benchmark                       Accuracy (%)   Global Rank
Google Speech Commands V2 20    94.5           # 3
Google Speech Commands V1 12    95.6           # 12
Google Speech Commands V2 12    96.9           # 13
Google Speech Commands V1 2     99.2           # 1
Google Speech Commands V1 20    94.1           # 1
Google Speech Commands V1 35    94.3           # 1
Google Speech Commands V2 2     99.4           # 1
Google Speech Commands V2 35    93.9           # 12
