no code implementations • 27 Mar 2024 • Rricha Jalota, Lyan Verwimp, Markus Nussbaum-Thom, Amr Mousa, Arturo Argueta, Youssef Oualil
Based on this insight and leveraging the design of our production models, we introduce a new architecture for a World English NNLM that meets the accuracy, latency, and memory constraints of our single-dialect models.
no code implementations • 5 Oct 2023 • Leonardo Emili, Thiago Fraga-Silva, Ernest Pusateri, Markus Nußbaum-Thom, Youssef Oualil
We study model pruning methods applied to Transformer-based neural network language models for automatic speech recognition.
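The abstract snippet does not say which pruning criterion the paper uses, so as a hedged illustration here is a minimal sketch of unstructured magnitude pruning (a common baseline) applied to the linear layers of a small Transformer in PyTorch; the `magnitude_prune_` helper and the layer sizes are hypothetical, not the paper's implementation:

```python
import torch

def magnitude_prune_(linear: torch.nn.Linear, sparsity: float) -> None:
    """Zero out the smallest-magnitude weights in-place (unstructured pruning)."""
    with torch.no_grad():
        w = linear.weight
        k = int(sparsity * w.numel())
        if k == 0:
            return
        # Threshold = k-th smallest absolute weight; everything at or below it is dropped.
        threshold = w.abs().flatten().kthvalue(k).values
        w.mul_((w.abs() > threshold).to(w.dtype))

# Example: prune every linear layer of a toy Transformer encoder to 50% sparsity.
layer = torch.nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
model = torch.nn.TransformerEncoder(layer, num_layers=2)
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        magnitude_prune_(module, sparsity=0.5)
```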
no code implementations • 16 May 2023 • Markus Nußbaum-Thom, Lyan Verwimp, Youssef Oualil
On-device automatic speech recognition systems face several challenges compared to server-based systems.
no code implementations • 29 Jun 2022 • Christophe Van Gysel, Mirko Hannemann, Ernest Pusateri, Youssef Oualil, Ilya Oparin
Virtual assistants make use of automatic speech recognition (ASR) to help users answer entity-centric queries.
no code implementations • 26 Aug 2019 • Ernest Pusateri, Christophe Van Gysel, Rami Botros, Sameer Badaskar, Mirko Hannemann, Youssef Oualil, Ilya Oparin
In this work, we uncover a theoretical connection between two language model interpolation techniques, count merging and Bayesian interpolation.
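Both techniques can be seen as history-dependent linear interpolation, p(w|h) = Σᵢ λᵢ(h) pᵢ(w|h), differing only in how the weights λᵢ(h) are computed. The sketch below is a rough, hedged illustration of that framing; the `prob`/`history_prob` model interface, the per-corpus count tables, and the `betas`/`priors` parameters are hypothetical stand-ins, not the paper's formulation:

```python
def interpolate(word, history, models, weights_fn):
    """History-dependent linear interpolation:
    p(w|h) = sum_i lambda_i(h) * p_i(w|h), with sum_i lambda_i(h) = 1."""
    lambdas = weights_fn(history)                      # one weight per component LM
    return sum(lam * m.prob(word, history) for lam, m in zip(lambdas, models))

def count_merging_weights(history, counts, betas):
    """Count merging: lambda_i(h) proportional to beta_i * c_i(h),
    where c_i(h) is the count of history h in corpus i."""
    raw = [b * c.get(history, 0) for b, c in zip(betas, counts)]
    total = sum(raw) or 1.0
    return [r / total for r in raw]

def bayesian_weights(history, models, priors):
    """Bayesian interpolation: lambda_i(h) proportional to rho_i * p_i(h),
    i.e. the posterior probability that model i generated history h."""
    raw = [rho * m.history_prob(history) for rho, m in zip(priors, models)]
    total = sum(raw) or 1.0
    return [r / total for r in raw]
```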
no code implementations • LREC 2018 • Volha Petukhova, Andrei Malchanau, Youssef Oualil, Dietrich Klakow, Saturnino Luz, Fasih Haider, Nick Campbell, Dimitris Koryzis, Dimitris Spiliotopoulos, Pierre Albert, Nicklas Linz, Jan Alexandersson
no code implementations • 23 Aug 2017 • Youssef Oualil, Dietrich Klakow
The performance of Neural Network (NN)-based language models is steadily improving due to the emergence of new architectures that are able to capture different characteristics of natural language.
no code implementations • EMNLP 2016 • Youssef Oualil, Mittul Singh, Clayton Greenberg, Dietrich Klakow
The goal of language modeling techniques is to capture the statistical and structural properties of natural languages from training corpora.
1 code implementation • 20 Aug 2017 • Youssef Oualil, Dietrich Klakow
Training large vocabulary Neural Network Language Models (NNLMs) is difficult because of the explicit requirement to normalize the output layer, which typically involves evaluating the full softmax function over the complete vocabulary.
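To make the bottleneck concrete, the minimal PyTorch sketch below contrasts the full-softmax cost with a generic sampled-softmax workaround; the vocabulary size, batch size, and number of noise samples are illustrative assumptions, and the sampling scheme is a simplification rather than the paper's method:

```python
import torch
import torch.nn.functional as F

vocab_size, hidden = 100_000, 512
output_layer = torch.nn.Linear(hidden, vocab_size)   # |V| x d weight matrix

h = torch.randn(32, hidden)                          # batch of context vectors
target = torch.randint(vocab_size, (32,))

# Full softmax: every step computes logits for ALL |V| words just to
# normalize the probability of the single target word -- O(|V| * d) per token.
loss_full = F.cross_entropy(output_layer(h), target)

# Sampled alternative (simplified): score only the target plus k noise words,
# so the per-token cost drops to O(k * d) with k << |V|.
k = 64
noise = torch.randint(vocab_size, (32, k))
rows = torch.cat([target.unsqueeze(1), noise], dim=1)            # (32, k+1)
logits = torch.einsum("bd,bkd->bk", h, output_layer.weight[rows])
logits = logits + output_layer.bias[rows]
# Index 0 of each row is the true word.
loss_sampled = F.cross_entropy(logits, torch.zeros(32, dtype=torch.long))
```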
no code implementations • 23 Mar 2017 • Youssef Oualil, Clayton Greenberg, Mittul Singh, Dietrich Klakow
Feedforward Neural Network (FNN)-based language models estimate the probability of the next word from a fixed history of the last N words, whereas Recurrent Neural Networks (RNNs) perform the same task using only the last word together with context information that cycles through the network.
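A minimal sketch of that contrast, with hypothetical PyTorch modules rather than the paper's models: the FNN conditions on a concatenated window of N word embeddings, while the RNN conditions on a single word plus a recurrent state.

```python
import torch

class FNNLM(torch.nn.Module):
    """Feedforward LM: conditions on a fixed window of the last N words."""
    def __init__(self, vocab, dim, n=4):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.hidden = torch.nn.Linear(n * dim, dim)
        self.out = torch.nn.Linear(dim, vocab)

    def forward(self, last_n_words):                 # (batch, n) word ids
        x = self.emb(last_n_words).flatten(1)        # concatenate the window
        return self.out(torch.tanh(self.hidden(x)))

class RNNLM(torch.nn.Module):
    """Recurrent LM: conditions on the last word plus a recurrent state
    that carries (in principle unbounded) context through the network."""
    def __init__(self, vocab, dim):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.rnn = torch.nn.RNNCell(dim, dim)
        self.out = torch.nn.Linear(dim, vocab)

    def forward(self, last_word, state):             # (batch,), (batch, dim)
        state = self.rnn(self.emb(last_word), state)
        return self.out(state), state
```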
no code implementations • COLING 2016 • Mittul Singh, Clayton Greenberg, Youssef Oualil, Dietrich Klakow
We augmented pre-trained word embeddings with these novel embeddings and evaluated on a rare word similarity task, obtaining up to a threefold improvement in correlation over the original embeddings.
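A hedged sketch of that evaluation setup, assuming embeddings are stored as word-to-vector dicts and the benchmark is a list of (word, word, gold score) triples; the helper names and the concatenation-based augmentation are illustrative assumptions, not necessarily the paper's exact procedure:

```python
import numpy as np
from scipy.stats import spearmanr

def augment(pretrained: dict, novel: dict) -> dict:
    """Concatenate a pre-trained vector with an auxiliary one per shared word."""
    return {w: np.concatenate([pretrained[w], novel[w]])
            for w in pretrained.keys() & novel.keys()}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity_correlation(embeddings, pairs):
    """Spearman correlation between model cosine similarities and human
    similarity judgments on (word1, word2, gold_score) triples."""
    scored = [(a, b, g) for a, b, g in pairs if a in embeddings and b in embeddings]
    preds = [cosine(embeddings[a], embeddings[b]) for a, b, _ in scored]
    gold = [g for _, _, g in scored]
    return spearmanr(preds, gold).correlation
```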