Channel-Attention Dense U-Net for Multichannel Speech Enhancement

30 Jan 2020 · Bahareh Tolooshams, Ritwik Giri, Andrew H. Song, Umut Isik, Arvindh Krishnaswamy

Supervised deep learning has recently gained significant attention for speech enhancement. State-of-the-art deep learning methods perform the task by learning a ratio or binary mask that is applied to the mixture in the time-frequency domain to estimate the clean speech. Despite their strong performance in the single-channel setting, these frameworks lag in the multichannel setting because most of them a) fail to fully exploit the available spatial information, and b) treat the deep architecture as a black box that may not be well suited for multichannel audio processing. This paper addresses these drawbacks, a) by utilizing complex ratio masking instead of masking the magnitude of the spectrogram, and more importantly, b) by introducing a channel-attention mechanism inside the deep architecture to mimic beamforming. We propose Channel-Attention Dense U-Net, in which the channel-attention unit is applied recursively to the feature maps at every layer of the network, enabling the network to perform non-linear beamforming. We demonstrate the superior performance of the network against state-of-the-art approaches on the CHiME-3 dataset.
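The channel-attention idea can be sketched roughly as follows. This is a hypothetical, heavily simplified real-valued version (the paper applies the unit to complex-valued feature maps inside a Dense U-Net); the projection matrices `W_q` and `W_k` and the function name are made up for illustration, not taken from the paper:

```python
import numpy as np

def channel_attention(x, W_q, W_k):
    """Re-weight C microphone-channel feature maps via attention.

    x   : (C, T, F) array, one time-frequency feature map per channel.
    W_q : (T*F, d) hypothetical query projection.
    W_k : (T*F, d) hypothetical key projection.

    The softmax over the channel axis produces a (C, C) mixing matrix,
    so each output channel is a weighted sum across input channels --
    loosely analogous to beamforming weights over microphones.
    """
    C, T, F = x.shape
    flat = x.reshape(C, -1)                        # (C, T*F)
    q = flat @ W_q                                 # (C, d) queries
    k = flat @ W_k                                 # (C, d) keys
    scores = q @ k.T / np.sqrt(k.shape[1])         # (C, C) similarities
    a = np.exp(scores - scores.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)              # softmax over channels
    return (a @ flat).reshape(C, T, F)             # re-mixed channels

# Toy usage: 4 channels, 10 frames, 8 frequency bins.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 10, 8))
W_q = rng.standard_normal((80, 16))
W_k = rng.standard_normal((80, 16))
out = channel_attention(x, W_q, W_k)
```

In the paper this re-weighting is applied recursively at every layer, so the effective "beamformer" is non-linear and learned end to end rather than a single linear spatial filter.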



Results from the Paper


Task               | Dataset | Model                     | Metric | Value  | Global Rank
-------------------|---------|---------------------------|--------|--------|------------
Speech Enhancement | CHiME-3 | CA Dense U-Net (Complex)  | SDR    | 18.635 | # 2
Speech Enhancement | CHiME-3 | CA Dense U-Net (Complex)  | PESQ   | 2.436  | # 2
Speech Enhancement | CHiME-3 | CA Dense U-Net (Complex)  | ΔPESQ  | 1.16   | # 1
Speech Enhancement | CHiME-3 | Noisy/unprocessed         | SDR    | 6.50   | # 6
Speech Enhancement | CHiME-3 | Noisy/unprocessed         | PESQ   | 1.27   | # 4
Speech Enhancement | CHiME-3 | Dense U-Net (Complex)     | SDR    | 18.402 | # 3
Speech Enhancement | CHiME-3 | Dense U-Net (Real)        | SDR    | 16.855 | # 4
Speech Enhancement | CHiME-3 | U-Net (Real)              | SDR    | 15.967 | # 5
Speech Enhancement | CHiME-3 | U-Net (Real)              | PESQ   | 2.176  | # 3
