Deformable Temporal Convolutional Networks for Monaural Noisy Reverberant Speech Separation

27 Oct 2022 · William Ravenscroft, Stefan Goetze, Thomas Hain

Speech separation models are used for isolating individual speakers in many speech processing applications. Deep learning models have been shown to lead to state-of-the-art (SOTA) results on a number of speech separation benchmarks. One such class of models, known as temporal convolutional networks (TCNs), has shown promising results for speech separation tasks. A limitation of these models is that they have a fixed receptive field (RF). Recent research in speech dereverberation has shown that the optimal RF of a TCN varies with the reverberation characteristics of the speech signal. In this work, deformable convolution is proposed as a solution to allow TCN models to have dynamic RFs that can adapt to various reverberation times for reverberant speech separation. The proposed models are capable of achieving an 11.1 dB average scale-invariant signal-to-distortion ratio (SISDR) improvement over the input signal on the WHAMR benchmark. A relatively small deformable TCN model of 1.3M parameters is proposed which gives comparable separation performance to larger and more computationally complex models.
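To make the core idea concrete, the sketch below illustrates a depthwise 1-D deformable convolution in PyTorch: a small convolution predicts a fractional offset per kernel tap and time step, and the input is sampled at those shifted positions via linear interpolation, so the effective receptive field adapts to the input. This is a minimal, assumption-laden illustration, not the authors' implementation; the class name `DeformableConv1d`, the offset-predicting layer, and all sizes are hypothetical.

```python
# Minimal sketch (illustrative only) of a depthwise 1-D deformable convolution,
# the kind of layer that could replace the fixed dilated depthwise convolutions
# in a TCN block to give it a data-dependent receptive field.
import torch
import torch.nn as nn


class DeformableConv1d(nn.Module):
    """Dilated depthwise 1-D convolution whose kernel taps are shifted by
    learned, input-dependent fractional offsets (linear interpolation)."""

    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.kernel_size = kernel_size
        self.dilation = dilation
        # Predicts one scalar offset per kernel tap and per time step.
        self.offset_net = nn.Conv1d(
            channels, kernel_size, kernel_size,
            padding=(kernel_size - 1) // 2 * dilation, dilation=dilation)
        # Depthwise weights: one kernel per channel (hypothetical init).
        self.weight = nn.Parameter(0.1 * torch.randn(channels, kernel_size))

    def forward(self, x):
        # x: (batch, channels, time)
        b, c, t = x.shape
        offsets = self.offset_net(x)                       # (b, k, t)
        base = torch.arange(t, device=x.device).float().view(1, 1, t)
        taps = (torch.arange(self.kernel_size, device=x.device).float()
                - (self.kernel_size - 1) / 2) * self.dilation
        # Fractional sampling position of every tap at every time step.
        pos = (base + taps.view(1, -1, 1) + offsets).clamp(0, t - 1)
        lo = pos.floor().long()
        hi = (lo + 1).clamp(max=t - 1)
        frac = pos - lo.float()
        out = x.new_zeros(b, c, t)
        for k in range(self.kernel_size):
            # Linearly interpolate the input at the k-th tap's positions.
            idx_lo = lo[:, k, :].unsqueeze(1).expand(b, c, t)
            idx_hi = hi[:, k, :].unsqueeze(1).expand(b, c, t)
            f = frac[:, k, :].unsqueeze(1)
            sample = (1 - f) * x.gather(2, idx_lo) + f * x.gather(2, idx_hi)
            out = out + self.weight[:, k].view(1, c, 1) * sample
        return out


if __name__ == "__main__":
    layer = DeformableConv1d(channels=64, kernel_size=3, dilation=4)
    y = layer(torch.randn(2, 64, 1000))
    print(y.shape)  # torch.Size([2, 64, 1000])
```

In a full separation network, one would presumably stack such layers with exponentially increasing dilation, as in a standard Conv-TasNet-style TCN, so the learned offsets let each block stretch or shrink its receptive field per input rather than keeping it fixed.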


Results from the Paper


Task                Dataset     Model                                               Metric Name                 Metric Value   Global Rank
Speech Separation   WHAMR!      Deformable TCN + Dynamic Mixing                     SI-SDRi                     11.1           #12
                                                                                    SDRi                        10.3           #1
                                                                                    Number of parameters (M)    3.6            #2
                                                                                    MACs (G)                    3.7            #1
Speech Separation   WHAMR!      Deformable TCN + Shared Weights + Dynamic Mixing    SI-SDRi                     10.1           #14
                                                                                    SDRi                        9.5            #2
                                                                                    Number of parameters (M)    1.3            #1
                                                                                    MACs (G)                    3.7            #1
Speech Separation   WSJ0-2mix   Deformable TCN + Shared Weights + Dynamic Mixing    SI-SDRi                     16.1           #25
                                                                                    SDRi                        16.3           #5
                                                                                    Number of parameters (M)    1.3            #1
                                                                                    MACs (G)                    3.7            #1
Speech Separation   WSJ0-2mix   Deformable TCN + Dynamic Mixing                     SI-SDRi                     17.2           #23
                                                                                    SDRi                        17.4           #4
                                                                                    Number of parameters (M)    3.6            #2
                                                                                    MACs (G)                    3.7            #1
