Target Speech Extraction Based on Blind Source Separation and X-vector-based Speaker Selection Trained with Data Augmentation

16 May 2020 · Zhaoyi Gu, Lele Liao, Kai Chen, Jing Lu

Extracting the desired speech from a mixture is a meaningful and challenging task. End-to-end DNN-based methods, though attractive, face the problem of generalization. In this paper, we explore a sequential approach to target speech extraction that combines blind source separation (BSS) with an x-vector-based speaker recognition (SR) module. Two promising BSS methods built on the source-independence assumption, independent low-rank matrix analysis (ILRMA) and the multi-channel variational autoencoder (MVAE), are utilized and compared. ILRMA employs nonnegative matrix factorization (NMF) to capture the spectral structures of the source signals, while MVAE exploits the strong modeling power of deep neural networks (DNNs). However, prior investigations of MVAE have been limited to training with very few speakers, and the speech signals of the test speakers are usually included in the training set. We extend the training of MVAE to clean speech signals from 500 speakers in order to evaluate its generalization to unseen speakers. To improve the correct extraction rate, two data augmentation strategies are implemented to train the SR module. The performance of the proposed cascaded approach is evaluated on test data constructed with real room impulse responses under varied acoustic environments.
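To make the cascaded structure concrete, the sketch below shows how the speaker-selection stage can follow the BSS stage: each separated output is scored against an enrollment x-vector of the target speaker, and the best-scoring source is kept. This is a minimal illustration, not the paper's implementation; the function names (`extract_xvector`, the BSS interface), the 512-dimensional embedding size, and plain cosine scoring are all assumptions standing in for the trained SR module's actual front end and backend.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def select_target_source(separated_sources, enrollment_xvector, extract_xvector):
    """Speaker-selection stage of the cascade.

    separated_sources  -- list of 1-D waveforms produced by the BSS stage
                          (ILRMA or MVAE in the paper); interface assumed here.
    enrollment_xvector -- embedding computed from the target speaker's
                          enrollment utterance.
    extract_xvector    -- callable mapping a waveform to an x-vector; stands
                          in for the trained (and augmented) SR module.
    """
    scores = [cosine_similarity(extract_xvector(src), enrollment_xvector)
              for src in separated_sources]
    best = int(np.argmax(scores))  # index of the most target-like source
    return separated_sources[best], scores

# Toy run with random stand-ins (no real audio or trained networks involved):
rng = np.random.default_rng(0)
fake_xvector = lambda wav: rng.standard_normal(512)       # hypothetical 512-d embedding
sources = [rng.standard_normal(16000) for _ in range(2)]  # two separated outputs
enrollment = rng.standard_normal(512)
target, scores = select_target_source(sources, enrollment, fake_xvector)
```

In the paper, training the SR module with data augmentation serves exactly this step: keeping the scoring reliable on BSS outputs that still carry residual reverberation and interference. Cosine scoring is used above only for illustration; the abstract does not specify the scoring backend (PLDA, for instance, is common in x-vector systems).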
