Speech-XLNet: Unsupervised Acoustic Model Pretraining For Self-Attention Networks

23 Oct 2019 · Xingchen Song, Guangsen Wang, Zhiyong Wu, Yiheng Huang, Dan Su, Dong Yu, Helen Meng

Self-attention networks (SANs) can benefit significantly from bi-directional representation learning through unsupervised pretraining paradigms such as BERT and XLNet. In this paper, we present an XLNet-like pretraining scheme, "Speech-XLNet", for unsupervised acoustic model pretraining to learn speech representations with SANs...
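The paper's implementation details are not reproduced here, but to give a rough sense of the XLNet-style idea applied to a sequence of speech frames, below is a minimal sketch of permutation-based attention masking. The function name `permutation_mask` and all specifics are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

def permutation_mask(num_frames: int, rng: np.random.Generator) -> np.ndarray:
    """Sample a random factorization order over speech frames and build the
    corresponding attention mask, in the spirit of XLNet's permutation
    language modeling. mask[i, j] == True means frame i may attend to frame j.
    """
    order = rng.permutation(num_frames)          # random factorization order
    position = np.empty(num_frames, dtype=int)
    position[order] = np.arange(num_frames)      # position of each frame in the order
    # Frame i may attend to frame j iff j precedes i in the sampled order,
    # so each frame is predicted from a (randomly ordered) bidirectional context.
    return position[:, None] > position[None, :]

rng = np.random.default_rng(0)
print(permutation_mask(6, rng).astype(int))
```

Because a fresh factorization order is sampled per sequence, every frame is eventually conditioned on both past and future context over the course of training, which is the property the abstract attributes to XLNet-like pretraining.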
