Search Results for author: Umberto Cappellazzo

Found 7 papers, 5 papers with code

Evaluating and Improving Continual Learning in Spoken Language Understanding

no code implementations · 16 Feb 2024 · Muqiao Yang, Xiang Li, Umberto Cappellazzo, Shinji Watanabe, Bhiksha Raj

In this work, we propose an evaluation methodology that provides a unified evaluation on stability, plasticity, and generalizability in continual learning.

Continual Learning · Spoken Language Understanding
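
As a rough illustration of the evaluation axes named above, the snippet below computes a plasticity score and backward transfer (a common stability proxy) from a task-accuracy matrix. These are standard continual-learning metrics, not necessarily the paper's exact formulation.

    import numpy as np

    # acc[i, j] = test accuracy on task j after training on tasks 0..i (T x T).
    def cl_metrics(acc):
        T = acc.shape[0]
        plasticity = float(np.mean(np.diag(acc)))    # how well each task is learned at first
        # Backward transfer: change on earlier tasks by the end; negative = forgetting.
        bwt = float(np.mean([acc[T - 1, j] - acc[j, j] for j in range(T - 1)]))
        return {"plasticity": plasticity, "backward_transfer": bwt}

    acc = np.array([[0.90, 0.10, 0.05],    # three tasks, mild forgetting
                    [0.80, 0.88, 0.12],
                    [0.75, 0.82, 0.91]])
    print(cl_metrics(acc))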

Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters

1 code implementation · 1 Feb 2024 · Umberto Cappellazzo, Daniele Falavigna, Alessio Brutti

It exploits adapters as the experts and, leveraging the recent Soft MoE method, relies on a soft assignment between the input tokens and experts to keep the computational time limited.

Transfer Learning
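
The soft-assignment idea described above can be sketched as follows: each adapter expert processes a convex combination of the input tokens, and the expert outputs are mixed back per token. This is a minimal one-slot-per-expert reading of Soft MoE with illustrative dimensions, not the paper's implementation.

    import torch
    import torch.nn as nn

    class SoftMoEAdapters(nn.Module):
        def __init__(self, dim=768, n_experts=4, bottleneck=64):
            super().__init__()
            self.phi = nn.Parameter(torch.randn(dim, n_experts))   # slot parameters
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(dim, bottleneck), nn.GELU(),
                              nn.Linear(bottleneck, dim))
                for _ in range(n_experts)
            ])

        def forward(self, x):                      # x: (batch, tokens, dim)
            logits = x @ self.phi                  # (batch, tokens, experts)
            dispatch = logits.softmax(dim=1)       # soft assignment: tokens -> slots
            combine = logits.softmax(dim=2)        # soft assignment: slots -> tokens
            slots = torch.einsum("btd,bte->bed", x, dispatch)   # one slot per expert
            outs = torch.stack([f(slots[:, e]) for e, f in enumerate(self.experts)],
                               dim=1)              # (batch, experts, dim)
            return torch.einsum("bte,bed->btd", combine, outs)  # back to token space

    x = torch.randn(2, 10, 768)
    print(SoftMoEAdapters()(x).shape)   # torch.Size([2, 10, 768])

Because every token contributes to every expert with soft weights, no tokens are dropped, and the cost is fixed by the number of slots rather than by a hard routing decision.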

Parameter-Efficient Transfer Learning of Audio Spectrogram Transformers

1 code implementation · 6 Dec 2023 · Umberto Cappellazzo, Daniele Falavigna, Alessio Brutti, Mirco Ravanelli

The common modus operandi of fine-tuning large pre-trained Transformer models entails the adaptation of all their parameters (i.e., full fine-tuning).

Audio Classification · Few-Shot Learning +1
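
In contrast to the full fine-tuning described above, a parameter-efficient setup freezes the pre-trained backbone and trains only a small adapter and classifier head. The sketch below uses a stand-in Transformer encoder rather than an actual Audio Spectrogram Transformer.

    import torch
    import torch.nn as nn

    backbone = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
        num_layers=2)
    for p in backbone.parameters():
        p.requires_grad = False     # frozen; full fine-tuning would update these too

    adapter = nn.Sequential(nn.Linear(768, 64), nn.GELU(), nn.Linear(64, 768))
    head = nn.Linear(768, 35)       # e.g. 35 output classes (illustrative)

    # Only the adapter and head parameters are optimized, a small fraction
    # of the backbone's parameter count.
    optimizer = torch.optim.AdamW(
        list(adapter.parameters()) + list(head.parameters()), lr=1e-3)

    x = torch.randn(4, 100, 768)                   # a batch of patch/token embeddings
    h = backbone(x)                                # frozen feature extractor
    logits = head((h + adapter(h)).mean(dim=1))    # residual adapter + mean pooling
    print(logits.shape)                            # torch.Size([4, 35])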

Continual Contrastive Spoken Language Understanding

no code implementations · 4 Oct 2023 · Umberto Cappellazzo, Enrico Fini, Muqiao Yang, Daniele Falavigna, Alessio Brutti, Bhiksha Raj

In this paper, we investigate the problem of learning sequence-to-sequence models for spoken language understanding in a class-incremental learning (CIL) setting, and we propose COCONUT, a CIL method that combines experience replay and contrastive learning.

Class Incremental Learning · Contrastive Learning +2
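
A minimal sketch of the replay-plus-contrastive recipe: rehearsal samples from earlier classes are mixed into the current batch, and a supervised contrastive loss is computed on pooled embeddings. COCONUT's actual objectives are more elaborate; this only illustrates the combination.

    import torch
    import torch.nn.functional as F

    def contrastive_loss(z, labels, tau=0.1):
        z = F.normalize(z, dim=1)
        sim = z @ z.T / tau
        sim.fill_diagonal_(float("-inf"))             # exclude self-similarity
        pos = labels.unsqueeze(0) == labels.unsqueeze(1)
        pos.fill_diagonal_(False)                     # positives = same label, not self
        log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
        denom = pos.sum(1).clamp(min=1)
        return -(log_prob.masked_fill(~pos, 0.0).sum(1) / denom).mean()

    # Replay: mix current-task embeddings with a few stored from earlier classes.
    cur_z, cur_y = torch.randn(8, 128), torch.randint(5, 10, (8,))
    buf_z, buf_y = torch.randn(4, 128), torch.randint(0, 5, (4,))
    loss = contrastive_loss(torch.cat([cur_z, buf_z]), torch.cat([cur_y, buf_y]))
    print(loss.item())   # added on top of the usual seq2seq training loss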

Training dynamic models using early exits for automatic speech recognition on resource-constrained devices

1 code implementation · 18 Sep 2023 · George August Wright, Umberto Cappellazzo, Salah Zaiem, Desh Raj, Lucas Ondel Yang, Daniele Falavigna, Mohamed Nabih Ali, Alessio Brutti

In self-attention models for automatic speech recognition (ASR), early-exit architectures enable the development of dynamic models capable of adapting their size and architecture to varying levels of computational resources and ASR performance demands.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) +2
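
The early-exit mechanism can be sketched as an encoder in which every block has its own output head, and inference stops at the first layer whose frame-level confidence clears a threshold. The sizes, confidence rule, and threshold below are illustrative assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn

    class EarlyExitEncoder(nn.Module):
        def __init__(self, dim=256, n_layers=6, vocab=32, threshold=0.9):
            super().__init__()
            self.blocks = nn.ModuleList(
                nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
                for _ in range(n_layers))
            self.exits = nn.ModuleList(nn.Linear(dim, vocab) for _ in range(n_layers))
            self.threshold = threshold

        @torch.no_grad()
        def forward(self, x):                            # x: (batch, frames, dim)
            for i, (block, exit_head) in enumerate(zip(self.blocks, self.exits)):
                x = block(x)
                probs = exit_head(x).softmax(dim=-1)
                conf = probs.max(dim=-1).values.mean()   # average per-frame confidence
                if conf > self.threshold or i == len(self.blocks) - 1:
                    return probs, i                      # outputs and exit layer index

    probs, layer = EarlyExitEncoder()(torch.randn(1, 50, 256))
    print("exited at layer", layer)

Training such a model usually sums the losses of all exit heads so that every depth remains usable; the inference-side gating above is the illustrative part.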

Sequence-Level Knowledge Distillation for Class-Incremental End-to-End Spoken Language Understanding

1 code implementation · 23 May 2023 · Umberto Cappellazzo, Muqiao Yang, Daniele Falavigna, Alessio Brutti

The ability to learn new concepts sequentially is a major weakness for modern neural networks, which hinders their use in non-stationary environments.

Continual Learning · Knowledge Distillation +1
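
For context on the title's technique, the toy snippet below contrasts token-level distillation (matching the teacher's per-step distribution) with sequence-level distillation (training on the teacher's decoded hypothesis as a hard target). The greedy decoding and shapes are illustrative simplifications.

    import torch
    import torch.nn.functional as F

    # Toy shapes: batch=2, target length=5, vocab=100.
    student_logits = torch.randn(2, 5, 100)
    teacher_logits = torch.randn(2, 5, 100)

    # Token-level KD: match the teacher's per-step output distribution.
    token_kd = F.kl_div(student_logits.log_softmax(-1),
                        teacher_logits.softmax(-1), reduction="batchmean")

    # Sequence-level KD: train on the teacher's decoded hypothesis as a hard
    # target (greedy here; beam-search hypotheses are also common).
    pseudo = teacher_logits.argmax(-1)                  # (2, 5) pseudo-transcript
    seq_kd = F.cross_entropy(student_logits.reshape(-1, 100), pseudo.reshape(-1))
    print(token_kd.item(), seq_kd.item())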

An Investigation of the Combination of Rehearsal and Knowledge Distillation in Continual Learning for Spoken Language Understanding

1 code implementation · 15 Nov 2022 · Umberto Cappellazzo, Daniele Falavigna, Alessio Brutti

Continual learning refers to a dynamic framework in which a model receives a stream of non-stationary data over time and must adapt to new data while preserving previously acquired knowledge.

Class Incremental Learning · Incremental Learning +2
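
The rehearsal half of this combination is often implemented with a reservoir-sampled memory, as in the minimal sketch below; the knowledge-distillation half would add a loss between the current model and a frozen earlier copy on these buffered examples.

    import random

    class ReservoirBuffer:
        """Keeps a uniform sample of everything seen so far (up to capacity items)."""
        def __init__(self, capacity=500):
            self.capacity, self.data, self.seen = capacity, [], 0

        def add(self, example):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append(example)
            else:
                j = random.randrange(self.seen)   # replace with prob capacity/seen
                if j < self.capacity:
                    self.data[j] = example

        def sample(self, k):
            return random.sample(self.data, min(k, len(self.data)))

    buf = ReservoirBuffer(capacity=3)
    for ex in range(10):          # stream of (here, integer) training examples
        buf.add(ex)
    print(buf.sample(2))          # rehearsal examples mixed into the next batch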
