Hierarchic Temporal Convolutional Network With Cross-Domain Encoder for Music Source Separation

Recently, time-domain methods for audio source separation, i.e., methods that model the raw waveform directly, have shown tremendous potential. In this paper, we propose a model that combines complex-spectrogram-domain and time-domain features through a cross-domain encoder (CDE) and adopts a hierarchic temporal convolutional network (HTCN) to separate multiple music sources. The CDE is designed to let the network encode the interactive information of the time-domain and complex-spectrogram-domain features, while the HTCN enables it to learn long-range temporal dependencies effectively. We also design a feature calibration unit (FCU) applied within the HTCN and adopt a multi-stage training strategy. An ablation study demonstrates the effectiveness of each designed component. We conduct experiments on the MUSDB18 dataset. The results indicate that our proposed CDE-HTCN model outperforms top-of-the-line methods; compared with the state-of-the-art DEMUCS, it improves the average SDR score by 0.61 dB. Notably, the SDR improvement for the bass source reaches a sizable margin of 0.91 dB.
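The paper does not publish code, but the cross-domain encoding idea is easy to illustrate. The sketch below, assuming PyTorch, pairs a strided 1-D convolution over the raw waveform with a convolution over the stacked real and imaginary STFT bins, then fuses the two branches frame by frame. The layer sizes, the pointwise-convolution fusion, and all names (CrossDomainEncoder, channels, etc.) are illustrative assumptions, not the paper's published architecture.

```python
# Minimal sketch of a cross-domain encoder: NOT the paper's code.
# All hyperparameters and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn

class CrossDomainEncoder(nn.Module):
    """Fuses a raw-waveform branch with a complex-spectrogram branch."""

    def __init__(self, n_fft=1024, hop=256, channels=64):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        # Time-domain branch: strided 1-D convolution over the raw waveform.
        self.time_enc = nn.Conv1d(1, channels, kernel_size=n_fft,
                                  stride=hop, padding=n_fft // 2)
        # Spectrogram branch: convolution over stacked real/imag STFT bins.
        self.spec_enc = nn.Conv1d(2 * (n_fft // 2 + 1), channels,
                                  kernel_size=3, padding=1)
        # Fusion: a pointwise convolution mixing the two domains per frame.
        self.fuse = nn.Conv1d(2 * channels, channels, kernel_size=1)

    def forward(self, wav):                      # wav: (batch, samples)
        t = self.time_enc(wav.unsqueeze(1))      # (batch, C, frames)
        spec = torch.stft(wav, self.n_fft, self.hop,
                          window=torch.hann_window(self.n_fft,
                                                   device=wav.device),
                          return_complex=True)   # (batch, bins, frames)
        s = torch.cat([spec.real, spec.imag], dim=1)
        s = self.spec_enc(s)                     # (batch, C, frames)
        # Align frame counts before fusing; framing may differ by one.
        n = min(t.shape[-1], s.shape[-1])
        return self.fuse(torch.cat([t[..., :n], s[..., :n]], dim=1))

enc = CrossDomainEncoder()
print(enc(torch.randn(2, 44100)).shape)  # torch.Size([2, 64, 173])
```

In this sketch each domain contributes the same number of channels and the interaction is a single learned pointwise mix; the paper's CDE codes richer interactive information between the domains, so a gated or attention-based fusion would be a natural extension of this toy version.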

Results from the Paper


Task: Music Source Separation · Dataset: MUSDB18 · Model: CDE-HTCN

Metric          Value (dB)   Global Rank
SDR (vocals)    7.37         #11
SDR (drums)     7.33         #10
SDR (other)     4.92         #11
SDR (bass)      7.92         #6
SDR (avg)       6.89         #8