Robust Speech Recognition
22 papers with code • 0 benchmarks • 3 datasets
Latest papers
Dual-Path Style Learning for End-to-End Noise-Robust Speech Recognition
Then, we propose style learning to map the fused feature close to the clean feature, in order to learn latent speech information from the latter, i.e., the clean "speech style".
Speech-enhanced and Noise-aware Networks for Robust Speech Recognition
In this paper, a noise-aware training framework based on two cascaded neural structures is proposed to jointly optimize speech enhancement and speech recognition.
Sequential Randomized Smoothing for Adversarially Robust Speech Recognition
We apply adaptive versions of state-of-the-art attacks, such as the Imperceptible ASR attack, to our model, and show that our strongest defense is robust to all attacks that use inaudible noise and can only be broken with very high distortion.
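The core randomized-smoothing idea behind this defense can be sketched in a few lines: transcribe several Gaussian-perturbed copies of the input and return the majority transcription. This is an illustrative sketch, not the paper's sequential variant; the `asr` callable and parameter names are assumptions.

```python
import numpy as np
from collections import Counter

def smoothed_transcribe(asr, audio, sigma=0.01, n=8, seed=None):
    """Randomized-smoothing sketch: transcribe n noise-perturbed copies
    of `audio` and return the most common transcription. `asr` is any
    callable mapping a waveform array to a text string (an assumption
    here, standing in for a real ASR system)."""
    rng = np.random.default_rng(seed)
    votes = Counter(
        asr(audio + rng.normal(0.0, sigma, audio.shape))
        for _ in range(n)
    )
    return votes.most_common(1)[0][0]
```

Majority voting over noisy copies is what makes small adversarial perturbations unlikely to flip the output, since an attack must survive the added Gaussian noise to change most votes.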
Interactive Feature Fusion for End-to-End Noise-Robust Speech Recognition
Speech enhancement (SE) aims to suppress the additive noise from a noisy speech signal to improve the speech's perceptual quality and intelligibility.
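As a concrete reference point for what "suppressing additive noise" means, the classic spectral-subtraction baseline removes an estimated noise magnitude spectrum while keeping the noisy phase. This is a textbook illustration only, not the interactive fusion method of the paper; frame size and flooring constant are assumptions.

```python
import numpy as np

def spectral_subtraction(noisy, noise, frame=256, floor=0.05):
    """Minimal spectral-subtraction sketch (a classic SE baseline).
    Non-overlapping rectangular frames keep the inverse FFT exact."""
    n = len(noisy) // frame * frame
    spec = np.fft.rfft(noisy[:n].reshape(-1, frame), axis=1)
    # average magnitude spectrum of a noise-only recording
    m = len(noise) // frame * frame
    noise_mag = np.abs(np.fft.rfft(noise[:m].reshape(-1, frame), axis=1)).mean(0)
    # subtract the noise magnitude, flooring to avoid negative magnitudes
    mag = np.maximum(np.abs(spec) - noise_mag, floor * np.abs(spec))
    # resynthesize with the (unmodified) noisy phase
    enhanced = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame, axis=1)
    return enhanced.reshape(-1)
```

Learned SE front ends like the one in this paper replace the fixed subtraction rule with a network, but the goal (cleaner magnitude, improved intelligibility) is the same.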
An Investigation of End-to-End Models for Robust Speech Recognition
A systematic comparison of these two approaches for end-to-end robust ASR has not been attempted before.
Domain Adaptation Using Class Similarity for Robust Speech Recognition
Then, for each class, the posterior probabilities of that class are averaged to compute a mean vector, which we refer to as mean soft labels.
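The mean-soft-label computation described above amounts to averaging the model's probability vectors over all frames belonging to each class. A minimal sketch, assuming frame-level posteriors and hard class labels are available (function and variable names are hypothetical):

```python
import numpy as np

def mean_soft_labels(probs, labels, num_classes):
    """Average the posterior probability vectors of all frames assigned
    to each class, yielding one 'mean soft label' vector per class.
    probs: (num_frames, num_classes) softmax outputs
    labels: (num_frames,) integer class assignments"""
    means = np.zeros((num_classes, probs.shape[1]))
    for c in range(num_classes):
        means[c] = probs[labels == c].mean(axis=0)
    return means

# toy example: 4 frames, 3 classes
probs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.1, 0.7]])
labels = np.array([0, 0, 1, 2])
m = mean_soft_labels(probs, labels, 3)
```

These per-class mean vectors capture inter-class similarity in the source domain, which is what the adaptation method then exploits.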
Multi-task self-supervised learning for Robust Speech Recognition
We then propose a revised encoder that better learns short- and long-term speech dynamics with an efficient combination of recurrent and convolutional networks.
Learning Waveform-Based Acoustic Models using Deep Variational Convolutional Neural Networks
We investigate the potential of stochastic neural networks for learning effective waveform-based acoustic models.
Unsupervised Speech Domain Adaptation Based on Disentangled Representation Learning for Robust Speech Recognition
The latent variables allow us to convert the domain of speech according to its context and domain representation.
Scalable Factorized Hierarchical Variational Autoencoder Training
Deep generative models have achieved great success in unsupervised learning with the ability to capture complex nonlinear relationships between latent generating factors and observations.