Search Results for author: Jesús Andrés-Ferrer

Found 5 papers, 0 papers with code

Contextual Density Ratio for Language Model Biasing of Sequence to Sequence ASR Systems

no code implementations • 29 Jun 2022 • Jesús Andrés-Ferrer, Dario Albesano, Puming Zhan, Paul Vozila

In this work, we propose a contextual density ratio approach for both training a context-aware E2E model and adapting the language model to named entities (see the sketch after this entry).

Language Modelling
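
At a high level, a density-ratio correction of this kind rescores a hypothesis by subtracting a source-domain LM score and adding an entity-adapted LM score, with the correction restricted to contextual spans. The following is a minimal, hypothetical Python sketch of that scoring idea; the function name, weights, and the span flag are illustrative assumptions, not the paper's exact formulation.

    def biased_score(logp_e2e, logp_source_lm, logp_context_lm,
                     lam_src=0.3, lam_ctx=0.3, in_entity_span=True):
        """Hypothetical density-ratio biasing of a partial hypothesis score.

        logp_e2e        -- log P_E2E(y | x) from the end-to-end model
        logp_source_lm  -- log P(y) under an LM matching the training data
        logp_context_lm -- log P(y) under an LM adapted to named entities
        The ratio correction is applied only inside entity spans, which is
        the "contextual" aspect, sketched here in simplified form.
        """
        if not in_entity_span:
            return logp_e2e
        return logp_e2e - lam_src * logp_source_lm + lam_ctx * logp_context_lm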

On the Prediction Network Architecture in RNN-T for ASR

no code implementations • 29 Jun 2022 • Dario Albesano, Jesús Andrés-Ferrer, Nicola Ferri, Puming Zhan

In contrast to some previous works, our results show that the Transformer does not always outperform the LSTM when used as the prediction network together with a Conformer encoder.

Conformer with dual-mode chunked attention for joint online and offline ASR

no code implementations • 22 Jun 2022 • Felix Weninger, Marco Gaudesi, Md Akmal Haidar, Nicola Ferri, Jesús Andrés-Ferrer, Puming Zhan

In the dual-mode Conformer Transducer model, layers can function in either online or offline mode while sharing parameters, and in-place knowledge distillation from offline to online mode is applied during training to improve online accuracy (see the sketch after this entry).

Knowledge Distillation
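
The in-place distillation described above can be sketched as a single training step in which the shared model is run once per mode and the offline outputs serve as a detached teacher for the online outputs. This is a hypothetical PyTorch-style sketch; `model`, `transducer_loss`, the `mode` flag, and the loss weights are assumptions for illustration, not the paper's actual interface.

    import torch.nn.functional as F

    def dual_mode_step(model, feats, targets, transducer_loss, kd_weight=1.0):
        # One set of parameters, two attention modes:
        # full-context (offline) vs. chunked (online).
        logits_offline = model(feats, mode="offline")
        logits_online = model(feats, mode="online")

        # Transducer losses for both modes.
        loss = (transducer_loss(logits_offline, targets)
                + transducer_loss(logits_online, targets))

        # In-place KD: detached offline outputs act as the teacher for online mode.
        teacher = F.softmax(logits_offline.detach(), dim=-1)
        student = F.log_softmax(logits_online, dim=-1)
        loss = loss + kd_weight * F.kl_div(student, teacher, reduction="batchmean")
        return loss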

Semi-Supervised Learning with Data Augmentation for End-to-End ASR

no code implementations • 27 Jul 2020 • Felix Weninger, Franco Mana, Roberto Gemello, Jesús Andrés-Ferrer, Puming Zhan

As a result, the Noisy Student algorithm with soft labels and consistency regularization achieves a 10.4% word error rate (WER) reduction when adding 475h of unlabeled data, corresponding to a recovery rate of 92% (see the worked example after this entry).

Data Augmentation • Image Classification • +1
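
The 92% recovery rate quoted above can be read under the definition commonly used for self-training: the WER reduction obtained with unlabeled data divided by the WER reduction obtained when the same data is fully transcribed (the supervised "oracle"). The short Python sketch below shows the arithmetic; the WER values are made up and chosen only to reproduce the quoted ratios.

    def recovery_rate(wer_baseline, wer_semi, wer_oracle):
        """Recovery rate (%): WER reduction from unlabeled data divided by the
        WER reduction obtained if the same data had been transcribed (oracle)."""
        return 100.0 * (wer_baseline - wer_semi) / (wer_baseline - wer_oracle)

    # Hypothetical numbers (NOT from the paper), chosen to match the quoted ratios:
    # a 10.0% baseline WER, a 10.4% relative reduction from Noisy Student (-> 8.96%),
    # and an oracle WER of about 8.87% give a recovery rate of roughly 92%.
    # recovery_rate(10.0, 8.96, 8.87)  ->  ~92.0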

Listen, Attend, Spell and Adapt: Speaker Adapted Sequence-to-Sequence ASR

no code implementations • 8 Jul 2019 • Felix Weninger, Jesús Andrés-Ferrer, Xinwei Li, Puming Zhan

Sequence-to-sequence (seq2seq) based ASR systems have shown state-of-the-art performance while having clear advantages in terms of simplicity.

Language Modelling
