Exploring the Best Loss Function for DNN-Based Low-latency Speech Enhancement with Temporal Convolutional Networks

Interspeech 2020 · Yuichiro Koyama, Tyler Vuong, Stefan Uhlich, Bhiksha Raj

Recently, deep neural networks (DNNs) have been successfully applied to speech enhancement, and DNN-based speech enhancement has become an attractive research area. While time-frequency masking based on the short-time Fourier transform (STFT) has been the dominant DNN-based approach in recent years, time-domain methods such as the time-domain audio separation network (TasNet) have also been proposed. The most suitable method depends on the scale of the dataset and the type of task. In this paper, we explore the best speech enhancement algorithm on two different datasets. For the smaller dataset, we propose an STFT-based method and a loss function based on problem-agnostic speech encoder (PASE) features to improve subjective quality. Our proposed methods are effective on the Voice Bank + DEMAND dataset and compare favorably to other state-of-the-art methods. We also implement a low-latency version of TasNet, which we submitted to the DNS Challenge and have open-sourced. Our model achieves excellent performance on the DNS Challenge dataset.
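The central idea of the proposed loss is to compare enhanced and clean speech in the embedding space of a pretrained PASE encoder rather than only on the signal itself. Below is a minimal PyTorch sketch of such a perceptual feature loss. The stand-in `dummy_encoder`, the L1 distance, the STFT magnitude term, and the weight `alpha` are illustrative assumptions, not the paper's exact formulation; a real pretrained PASE model is available at https://github.com/santi-pdp/pase.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PASEFeatureLoss(nn.Module):
    """L1 distance between encoder embeddings of enhanced and clean speech.

    `encoder` stands in for a pretrained, frozen PASE network; any module
    mapping (batch, 1, samples) waveforms to feature maps would work here.
    """
    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False  # the perceptual encoder stays frozen

    def forward(self, enhanced: torch.Tensor, clean: torch.Tensor) -> torch.Tensor:
        feat_enh = self.encoder(enhanced.unsqueeze(1))  # (B, C, frames)
        with torch.no_grad():
            feat_ref = self.encoder(clean.unsqueeze(1))
        return F.l1_loss(feat_enh, feat_ref)

def combined_loss(enhanced, clean, pase_loss, alpha=0.5, n_fft=512, hop=128):
    """Illustrative total loss: STFT magnitude L1 plus weighted feature loss."""
    win = torch.hann_window(n_fft, device=enhanced.device)
    mag_enh = torch.stft(enhanced, n_fft, hop, window=win, return_complex=True).abs()
    mag_ref = torch.stft(clean, n_fft, hop, window=win, return_complex=True).abs()
    return F.l1_loss(mag_enh, mag_ref) + alpha * pase_loss(enhanced, clean)

# Untrained stand-in with PASE-like strides; swap in real pretrained weights.
dummy_encoder = nn.Sequential(
    nn.Conv1d(1, 64, kernel_size=160, stride=80), nn.ReLU(),
    nn.Conv1d(64, 100, kernel_size=3, padding=1),
)
loss_fn = PASEFeatureLoss(dummy_encoder)
enhanced = torch.randn(4, 16000, requires_grad=True)  # dummy batch, 1 s @ 16 kHz
clean = torch.randn(4, 16000)
combined_loss(enhanced, clean, loss_fn).backward()
```

The low-latency TasNet variant implies causal processing. One standard way to make a Conv-TasNet-style separator causal is to left-pad its dilated depthwise convolutions so that no future frames are consumed; the class below is a generic sketch of that idea, not the authors' released implementation.

```python
class CausalDepthwiseConv(nn.Module):
    """Dilated depthwise Conv1d that only looks at current and past frames."""
    def __init__(self, channels: int, kernel_size: int, dilation: int = 1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation, groups=channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, T)
        return self.conv(F.pad(x, (self.pad, 0)))  # pad the past side only
```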

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Speech Dereverberation | Deep Noise Suppression (DNS) Challenge | Noisy/unprocessed | PESQ | 1.82 | #2 |
| Speech Enhancement | Deep Noise Suppression (DNS) Challenge | Conv-TasNet-SNR | PESQ-WB | 2.73 | #11 |
| Speech Dereverberation | Deep Noise Suppression (DNS) Challenge | Conv-TasNet-SNR | PESQ | 2.75 | #1 |
| Speech Dereverberation | Deep Noise Suppression (DNS) Challenge | Conv-TasNet-SNR | ΔPESQ | 0.93 | #1 |
