no code implementations • 3 Aug 2020 • Pooyan Safari, Miquel India, Javier Hernando
On the other hand, self-attention networks based on the Transformer architecture have attracted remarkable interest due to their high parallelization capabilities and strong performance on a variety of Natural Language Processing (NLP) applications.
1 code implementation • 26 Jul 2020 • Miquel India, Pooyan Safari, Javier Hernando
In this paper we present Double Multi-Head Attention pooling, which extends our previous approach based on Self Multi-Head Attention.
Subjects: Audio and Speech Processing; Sound
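The multi-head attention pooling that this entry builds on can be illustrated with a minimal numpy sketch: frame-level features are split into heads, each head computes a softmax attention over frames, and the per-head weighted means are concatenated into a single utterance-level embedding. The function and variable names here are illustrative, not the paper's implementation, and the scoring vector `W` stands in for trainable parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention_pooling(H, W, n_heads):
    """Pool frame-level features H (T, d) into one utterance-level
    vector of size d by concatenating per-head attention-weighted
    means. W (d,) plays the role of a trainable scoring vector,
    split evenly across heads."""
    T, d = H.shape
    dh = d // n_heads
    heads = H.reshape(T, n_heads, dh)           # (T, h, d/h)
    w = W.reshape(n_heads, dh)                  # (h, d/h)
    scores = np.einsum('thd,hd->th', heads, w)  # (T, h) per-frame scores
    alpha = softmax(scores, axis=0)             # attention over frames
    pooled = np.einsum('th,thd->hd', alpha, heads)  # (h, d/h)
    return pooled.reshape(d)

rng = np.random.default_rng(0)
H = rng.standard_normal((50, 8))   # 50 frames, 8-dim features
W = rng.standard_normal(8)
emb = multi_head_attention_pooling(H, W, n_heads=4)
print(emb.shape)  # (8,)
```

The "double" variant of the paper adds a second attention stage over the per-head outputs instead of plain concatenation; the sketch above covers only the first, self multi-head attention stage.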
no code implementations • 9 Jun 2020 • Michał Krzemiński, Javier Hernando
In the proposed system, raw data from the touchscreen is fed directly, without any preprocessing, into a deep neural network that decides the identity of the user.
no code implementations • 24 Jun 2019 • Miquel India, Pooyan Safari, Javier Hernando
Most state-of-the-art Deep Learning (DL) approaches for speaker recognition operate at the short-utterance level.
no code implementations • 8 Dec 2015 • Omid Ghahabi, Javier Hernando
Given i-vectors as inputs, the authors proposed an impostor selection algorithm and a universal model adaptation process in a hybrid system based on Deep Belief Networks (DBN) and Deep Neural Networks (DNN) to discriminatively model each target speaker.