Low Latency End-to-End Streaming Speech Recognition with a Scout Network

23 Mar 2020 · Chengyi Wang, Yu Wu, Shujie Liu, Jinyu Li, Liang Lu, Guoli Ye, Ming Zhou

The attention-based Transformer model has achieved promising results for speech recognition (SR) in the offline mode. However, in the streaming mode, the Transformer model usually incurs significant latency to maintain its recognition accuracy when applying a fixed-length look-ahead window in each encoder layer...
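To see why a fixed-length look-ahead window in each encoder layer incurs latency, the sketch below (an illustrative assumption, not the paper's implementation; the function names `lookahead_mask` and `effective_lookahead` are hypothetical) builds a per-layer attention mask with a fixed right context and shows how the future context, and hence the latency, accumulates linearly with encoder depth:

```python
import numpy as np

def lookahead_mask(seq_len: int, right_context: int) -> np.ndarray:
    """Boolean attention mask for one encoder layer:
    position i may attend to positions j <= i + right_context."""
    idx = np.arange(seq_len)
    return idx[None, :] <= idx[:, None] + right_context

def effective_lookahead(num_layers: int, per_layer_window: int) -> int:
    """With a fixed look-ahead of w frames per layer, each output
    frame of an L-layer stack depends on up to L * w future frames,
    so the look-ahead latency grows linearly with depth."""
    return num_layers * per_layer_window

mask = lookahead_mask(seq_len=5, right_context=1)
# With right_context=1, frame 0 may attend to frames 0 and 1 only.
```

This accumulation is the latency problem the abstract refers to: stacking many layers, each with even a small fixed look-ahead window, forces the model to wait for a long stretch of future audio before emitting an output.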
