no code implementations • 3 Aug 2020 • Pooyan Safari, Miquel India, Javier Hernando
On the other hand, self-attention networks based on the Transformer architecture have attracted remarkable interest due to their high parallelization capabilities and strong performance on a variety of Natural Language Processing (NLP) applications.
1 code implementation • 26 Jul 2020 • Miquel India, Pooyan Safari, Javier Hernando
In this paper we present Double Multi-Head Attention pooling, which extends our previous approach based on Self Multi-Head Attention.
Audio and Speech Processing • Sound
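For context on the attention pooling named in the entry above, here is a minimal, hypothetical sketch of self multi-head attention pooling over frame-level speaker features. The class name, the per-head scoring-vector formulation, and the tensor shapes are illustrative assumptions, not the authors' released implementation of Double Multi-Head Attention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfMultiHeadAttentionPooling(nn.Module):
    """Illustrative sketch: pools frame-level features (B, T, D) into a single
    utterance-level vector. Each head learns a scoring vector, attention weights
    are a softmax over time, and the per-head context vectors are concatenated."""

    def __init__(self, feat_dim: int, num_heads: int):
        super().__init__()
        assert feat_dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = feat_dim // num_heads
        # one learnable scoring vector per head (assumed formulation)
        self.score = nn.Parameter(torch.randn(num_heads, self.head_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        h = x.view(B, T, self.num_heads, self.head_dim)        # (B, T, H, Dh)
        # attention logits: dot product of each frame with its head's vector
        logits = torch.einsum("bthd,hd->bth", h, self.score)   # (B, T, H)
        w = F.softmax(logits, dim=1)                            # softmax over time
        pooled = torch.einsum("bth,bthd->bhd", w, h)            # (B, H, Dh)
        return pooled.reshape(B, D)                             # concatenate heads

# Example usage with made-up dimensions
pool = SelfMultiHeadAttentionPooling(feat_dim=256, num_heads=4)
frames = torch.randn(8, 300, 256)   # 8 utterances, 300 frames, 256-dim features
embedding = pool(frames)            # (8, 256) utterance-level embedding
```

The "double" variant described in the paper adds a second attention stage on top of the per-head outputs; the sketch above covers only the single self multi-head attention pooling step it extends.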
no code implementations • 24 Jun 2019 • Miquel India, Pooyan Safari, Javier Hernando
Most state-of-the-art Deep Learning (DL) approaches for speaker recognition work on a short utterance level.