no code implementations • RANLP 2019 • Sevinj Yolchuyeva, Géza Németh, Bálint Gyires-Tóth
Self-attention networks (SAN) have shown promising performance in various Natural Language Processing (NLP) scenarios, especially in machine translation.
1 code implementation • arXiv preprint 2019 • Sevinj Yolchuyeva, Géza Németh, Bálint Gyires-Tóth
The transformer network architecture is based entirely on attention mechanisms, and it outperforms sequence-to-sequence models in neural machine translation without recurrent or convolutional layers.
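The attention mechanism at the core of the transformer is scaled dot-product attention, where each token attends to every other token; in self-attention the queries, keys, and values all come from the same input. A minimal NumPy sketch (the function name and toy dimensions are illustrative, not from the paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise query-key similarities
    # Numerically stable row-wise softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # attention-weighted mix of the values

# Toy self-attention: 3 tokens, model dimension 4, Q = K = V = x
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4)
```

The 1/sqrt(d_k) scaling keeps the dot products from growing with dimension, which would otherwise push the softmax into regions with vanishing gradients.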
Automatic Speech Recognition (ASR) +3