Search Results for author: Steven Vander Eeckt

Found 4 papers, 3 papers with code

Rehearsal-Free Online Continual Learning for Automatic Speech Recognition

1 code implementation • 19 Jun 2023 • Steven Vander Eeckt, Hugo Van hamme

Fine-tuning an Automatic Speech Recognition (ASR) model to new domains results in performance degradation on the original domains, referred to as Catastrophic Forgetting (CF).

Tasks: Automatic Speech Recognition (ASR) +2

Weight Averaging: A Simple Yet Effective Method to Overcome Catastrophic Forgetting in Automatic Speech Recognition

no code implementations • 27 Oct 2022 • Steven Vander Eeckt, Hugo Van hamme

Adapting a trained Automatic Speech Recognition (ASR) model to new tasks results in catastrophic forgetting of old tasks, limiting the model's ability to learn continually and to be extended to new speakers, dialects, languages, etc.

Tasks: Automatic Speech Recognition (ASR) +2
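
The title points to averaging model weights as the mitigation. Below is a minimal sketch of that general idea in PyTorch, assuming two state dicts with identical keys; the interpolation weight `alpha` and the function name are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def average_state_dicts(old_sd, new_sd, alpha=0.5):
    """Interpolate two state dicts with matching keys: alpha * old + (1 - alpha) * new."""
    return {k: alpha * old_sd[k] + (1.0 - alpha) * new_sd[k] for k in old_sd}

# Illustrative usage (hypothetical models, not the paper's code):
# old_model preserves performance on the original task, new_model fits the new domain.
# merged = average_state_dicts(old_model.state_dict(), new_model.state_dict(), alpha=0.5)
# model.load_state_dict(merged)
```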

Using Adapters to Overcome Catastrophic Forgetting in End-to-End Automatic Speech Recognition

1 code implementation • 30 Mar 2022 • Steven Vander Eeckt, Hugo Van hamme

In this paper, we aim to overcome CF for E2E ASR by inserting adapters, small modules with few parameters that allow a general model to be fine-tuned to a specific task, into our model.

Tasks: Automatic Speech Recognition (ASR) +1
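
As a rough illustration of the adapter idea described in the snippet above (a small bottleneck module with a residual connection inserted into a frozen general model), here is a minimal PyTorch sketch; the class name, layer sizes, and activation are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual add."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps the frozen backbone's representation intact.
        return x + self.up(self.act(self.down(x)))

# Only the adapter's few parameters are trained for the new task;
# the pre-trained general model's parameters stay frozen.
```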

Continual Learning for Monolingual End-to-End Automatic Speech Recognition

1 code implementation • 17 Dec 2021 • Steven Vander Eeckt, Hugo Van hamme

Adapting Automatic Speech Recognition (ASR) models to new domains results in a deterioration of performance on the original domain(s), a phenomenon called Catastrophic Forgetting (CF).

Tasks: Automatic Speech Recognition (ASR) +2
