Search Results for author: Dongseong Hwang

Found 14 papers, 0 papers with code

TransformerFAM: Feedback attention is working memory

no code implementations • 14 Apr 2024 • Dongseong Hwang, Weiran Wang, Zhuoyuan Huo, Khe Chai Sim, Pedro Moreno Mengibar

While Transformers have revolutionized deep learning, their quadratic attention complexity hinders their ability to process infinitely long inputs.
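The title's remedy is a feedback loop in which the model attends to its own latent summary of the past. Below is a minimal, hypothetical PyTorch sketch of that idea (class and parameter names are illustrative, not the paper's): a fixed-size block of memory tokens is read and rewritten once per segment, so attention cost grows linearly with total input length.

```python
# Hypothetical sketch (not the paper's code): fixed-size "memory" tokens are
# carried across segments, keeping per-segment attention cost constant.
import torch
import torch.nn as nn

class FeedbackBlock(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_mem=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mem0 = nn.Parameter(torch.zeros(1, n_mem, d_model))

    def forward(self, segments):
        # segments: list of (batch, seg_len, d_model) chunks of a long input
        mem = self.mem0.expand(segments[0].size(0), -1, -1)
        outs = []
        for x in segments:
            ctx = torch.cat([mem, x], dim=1)   # local segment + carried memory
            y, _ = self.attn(x, ctx, ctx)      # tokens read from the memory
            mem, _ = self.attn(mem, ctx, ctx)  # feedback: memory rewrites itself
            outs.append(y)
        return torch.cat(outs, dim=1), mem
```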

Revisiting the Entropy Semiring for Neural Speech Recognition

no code implementations • 13 Dec 2023 • Oscar Chang, Dongseong Hwang, Olivier Siohan

In this work, we revisit the entropy semiring for neural speech recognition models, and show how alignment entropy can be used to supervise models through regularization or distillation.

Speech Recognition
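As a sketch of the underlying machinery: the entropy (expectation) semiring replaces each probability with a pair (p, r), where r accumulates -p·log p, so one forward pass over the alignment lattice returns both the total probability and the Shannon entropy of the alignment distribution. A toy, non-log-space illustration (function names are illustrative; real implementations work in log space for stability):

```python
import math

# Expectation ("entropy") semiring elements are pairs (p, r), with r
# accumulating -p*log(p); the forward algorithm in this semiring yields
# total alignment probability together with alignment entropy.
def sr_add(a, b):   # semiring "plus": combine alternative paths
    return (a[0] + b[0], a[1] + b[1])

def sr_mul(a, b):   # semiring "times": extend a path by one arc
    return (a[0] * b[0], a[0] * b[1] + b[0] * a[1])

def arc(p):         # lift an arc probability into the semiring
    return (p, -p * math.log(p))

# Toy lattice with two alternative single-arc alignments:
total = sr_add(arc(0.25), arc(0.75))
print(total)        # (1.0, 0.562...): total probability, entropy in nats
```

The resulting entropy can then be added to the training loss as a regularizer, or matched between a teacher and a student for distillation.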

Massive End-to-end Models for Short Search Queries

no code implementations • 22 Sep 2023 • Weiran Wang, Rohit Prabhavalkar, Dongseong Hwang, Qiujia Li, Khe Chai Sim, Bo Li, James Qin, Xingyu Cai, Adam Stooke, Zhong Meng, CJ Zheng, Yanzhang He, Tara Sainath, Pedro Moreno Mengibar

In this work, we investigate two popular end-to-end automatic speech recognition (ASR) models, namely Connectionist Temporal Classification (CTC) and RNN-Transducer (RNN-T), for offline recognition of voice search queries, with up to 2B model parameters.

Automatic Speech Recognition (ASR) +1
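For reference, the two objectives being compared differ mainly in how they score alignments. A toy sketch of both losses with PyTorch/torchaudio (shapes only, nothing like the paper's 2B-parameter setup; assumes torchaudio's rnnt_loss is available):

```python
import torch
import torchaudio.functional as TAF

B, T, U, V = 2, 50, 10, 32   # batch, frames, target length, vocab (0 = blank)
targets = torch.randint(1, V, (B, U), dtype=torch.int32)
t_lens = torch.full((B,), T, dtype=torch.int32)
u_lens = torch.full((B,), U, dtype=torch.int32)

# CTC: per-frame posteriors, alignments marginalized via a blank symbol.
log_probs = torch.randn(T, B, V).log_softmax(-1)
ctc = torch.nn.CTCLoss(blank=0)(log_probs, targets.long(),
                                t_lens.long(), u_lens.long())

# RNN-T: a joint network scores every (frame, label-position) pair.
joint = torch.randn(B, T, U + 1, V)
rnnt = TAF.rnnt_loss(joint, targets, t_lens, u_lens, blank=0)
```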

Edit Distance based RL for RNNT decoding

no code implementations • 31 May 2023 • Dongseong Hwang, Changwan Ryu, Khe Chai Sim

RNN-T is currently considered the industry standard in ASR due to its strong word error rates (WERs) on various benchmarks and its ability to support seamless streaming and long-form transcription.
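A hedged REINFORCE-style sketch of the recipe the title describes: sample hypotheses from the model, score each by negative edit distance to the reference, and reinforce high-reward samples. Here `model.sample` is a hypothetical interface, and the paper's exact estimator may differ.

```python
import torch

def edit_distance(a, b):
    # standard Levenshtein distance via dynamic programming
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return d[len(b)]

def rl_step(model, audio, ref, n_samples=4):
    # hypothetical: model.sample returns a decoded hypothesis plus the
    # log-probability the model assigned to it
    samples = [model.sample(audio) for _ in range(n_samples)]
    rewards = torch.tensor([-float(edit_distance(h, ref)) for h, _ in samples])
    baseline = rewards.mean()              # variance-reduction baseline
    loss = sum(-(r - baseline) * lp for (_, lp), r in zip(samples, rewards))
    return loss / n_samples
```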

Resource-Efficient Transfer Learning From Speech Foundation Model Using Hierarchical Feature Fusion

no code implementations • 4 Nov 2022 • Zhouyuan Huo, Khe Chai Sim, Bo Li, Dongseong Hwang, Tara N. Sainath, Trevor Strohman

Experimental results show that the proposed method achieves better performance on the speech recognition task than existing algorithms, with fewer trainable parameters, lower memory cost, and faster training.

Automatic Speech Recognition (ASR) +2
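One common realization of this kind of parameter-efficient transfer, sketched below under assumptions (the paper's exact fusion may differ): freeze the foundation model, collect hidden states from several of its layers, and train only a learned per-layer weighting plus a small task head.

```python
# Hedged sketch: fuse hidden states from a frozen foundation model with
# learned layer weights; only the fusion weights and head are trained.
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, n_layers, d_model, n_classes):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(n_layers))   # one weight per layer
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, layer_feats):
        # layer_feats: (n_layers, batch, time, d_model) from the frozen encoder
        mix = (layer_feats * self.w.softmax(0)[:, None, None, None]).sum(0)
        return self.head(mix)
```

Only the fusion weights and the head receive gradients, which is what keeps the trainable parameter count, memory cost, and training time low.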

Comparison of Soft and Hard Target RNN-T Distillation for Large-scale ASR

no code implementations • 11 Oct 2022 • Dongseong Hwang, Khe Chai Sim, Yu Zhang, Trevor Strohman

Knowledge distillation is an effective machine learning technique to transfer knowledge from a teacher model to a smaller student model, especially with unlabeled data.

Automatic Speech Recognition (ASR) +3
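The soft-vs-hard contrast in the title can be illustrated on a generic logit tensor (a hedged sketch; actual RNN-T distillation operates on the full (frame, label) lattice rather than independent frames):

```python
import torch
import torch.nn.functional as F

def soft_kd(student_logits, teacher_logits, tau=1.0):
    # soft targets: match the teacher's full output distribution
    p_t = F.log_softmax(teacher_logits / tau, dim=-1)
    p_s = F.log_softmax(student_logits / tau, dim=-1)
    return F.kl_div(p_s, p_t, log_target=True, reduction="batchmean") * tau ** 2

def hard_kd(student_logits, teacher_logits):
    # hard targets: treat the teacher's argmax as if it were the label
    pseudo = teacher_logits.argmax(-1)
    return F.cross_entropy(student_logits.flatten(0, -2), pseudo.flatten())
```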

Pseudo Label Is Better Than Human Label

no code implementations • 22 Mar 2022 • Dongseong Hwang, Khe Chai Sim, Zhouyuan Huo, Trevor Strohman

State-of-the-art automatic speech recognition (ASR) systems are trained with tens of thousands of hours of labeled speech data.

Automatic Speech Recognition (ASR) +2
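The core loop behind the title, sketched with hypothetical teacher/student interfaces (`teacher.transcribe` and `student.loss` are illustrative stand-ins, not the paper's API): a strong teacher transcribes unlabeled audio, and the student trains on those transcripts as if they were human labels.

```python
import torch

def pseudo_label_step(teacher, student, unlabeled_audio, optimizer):
    with torch.no_grad():
        pseudo_text = teacher.transcribe(unlabeled_audio)  # machine "label"
    loss = student.loss(unlabeled_audio, pseudo_text)      # train as if supervised
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```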

Large-scale ASR Domain Adaptation using Self- and Semi-supervised Learning

no code implementations • 1 Oct 2021 • Dongseong Hwang, Ananya Misra, Zhouyuan Huo, Nikhil Siddhartha, Shefali Garg, David Qiu, Khe Chai Sim, Trevor Strohman, Françoise Beaufays, Yanzhang He

Self- and semi-supervised learning methods have been actively investigated to reduce the amount of labeled training data required or to enhance model performance.

Domain Adaptation
