10 Jul 2019 • Ruisen Luo, Tianran Sun, Chen Wang, Miao Du, Zuodong Tang, Kai Zhou, Xiao-Feng Gong, Xiaomei Yang
The key idea is that, in addition to the conventional attention mechanism, information from layers prior to the feature-extraction and LSTM stages is incorporated into the attention-weight calculation.
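The mechanism described above can be sketched as additive attention in which the score for each time step combines the LSTM output with the earlier (pre-LSTM) feature at that step. This is a minimal NumPy illustration, not the authors' implementation; the function name, parameter shapes, and the exact way the early features enter the score (`W h_t + U x_t`) are assumptions for the sketch.

```python
import numpy as np

def attention_with_early_features(h, x, W, U, v):
    """Hypothetical sketch: attention weights computed from both the
    LSTM outputs h (T, d_h) and pre-LSTM features x (T, d_x).
    W (d_a, d_h), U (d_a, d_x), v (d_a,) stand in for learned parameters."""
    # score per time step: v . tanh(W h_t + U x_t)
    scores = np.tanh(h @ W.T + x @ U.T) @ v   # shape (T,)
    # softmax over time steps (numerically stabilized)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # context vector: attention-weighted sum of LSTM outputs
    context = weights @ h                     # shape (d_h,)
    return weights, context

# toy usage with random data
rng = np.random.default_rng(0)
T, d_h, d_x, d_a = 5, 8, 4, 6
h = rng.standard_normal((T, d_h))
x = rng.standard_normal((T, d_x))
W = rng.standard_normal((d_a, d_h))
U = rng.standard_normal((d_a, d_x))
v = rng.standard_normal(d_a)
weights, context = attention_with_early_features(h, x, W, U, v)
```

A conventional attention layer would use only `W h_t` in the score; adding the `U x_t` term is one plausible way to let pre-extraction information influence the weights, as the abstract suggests.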
Ranked #4 on Keyword Spotting on Google Speech Commands (Google Speech Commands V2 20 metric)