Search Results for author: Arthur Pimentel

Found 7 papers, 1 paper with code

On the Impact of Quantization and Pruning of Self-Supervised Speech Models for Downstream Speech Recognition Tasks "In-the-Wild"

no code implementations · 25 Sep 2023 · Arthur Pimentel, Heitor Guimarães, Anderson R. Avila, Mehdi Rezagholizadeh, Tiago H. Falk

Recent advances in self-supervised learning have allowed speech recognition systems to achieve state-of-the-art (SOTA) word error rates (WER) while requiring only a fraction of the labeled training data needed by their predecessors.

Data Augmentation · Model Compression · +4

RobustDistiller: Compressing Universal Speech Representations for Enhanced Environment Robustness

no code implementations · 18 Feb 2023 · Heitor R. Guimarães, Arthur Pimentel, Anderson R. Avila, Mehdi Rezagholizadeh, Boxing Chen, Tiago H. Falk

The proposed layer-wise distillation recipe is evaluated on top of three well-established universal representations and across three downstream tasks.

Knowledge Distillation · Multi-Task Learning

Improving the Robustness of DistilHuBERT to Unseen Noisy Conditions via Data Augmentation, Curriculum Learning, and Multi-Task Enhancement

no code implementations · 12 Nov 2022 · Heitor R. Guimarães, Arthur Pimentel, Anderson R. Avila, Mehdi Rezagholizadeh, Tiago H. Falk

Self-supervised speech representation learning aims to extract meaningful factors from the speech signal that can later be used across different downstream tasks, such as speech and/or emotion recognition.

Data Augmentation · Emotion Recognition · +2
