Multi-Speaker Source Separation

6 papers with code • 0 benchmarks • 0 datasets

Multi-speaker source separation is the task of recovering the individual speech signals of two or more speakers from a recording of their mixture.

Most implemented papers

Directional Sparse Filtering using Weighted Lehmer Mean for Blind Separation of Unbalanced Speech Mixtures

karnwatcharasupat/directional-sparse-filtering-tf 30 Jan 2021

In blind source separation of speech signals, the inherent imbalance in the source spectrum poses a challenge for methods that rely on single-source dominance for the estimation of the mixing matrix.
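
The weighted Lehmer mean named in the title is a smooth, tunable interpolation between an arithmetic mean and a maximum, which is what makes it useful when no single source dominates a time-frequency bin. The sketch below (plain NumPy, with illustrative variable names; it is not the paper's full directional sparse filtering objective) shows the mean itself and how increasing the parameter p shifts the emphasis toward the dominant direction.

```python
import numpy as np

def weighted_lehmer_mean(x, w, p):
    """Weighted Lehmer mean of positive values x with weights w.

    L_p(x; w) = sum(w * x**p) / sum(w * x**(p - 1)).
    For p -> +inf it approaches max(x); for p = 1 it is the weighted
    arithmetic mean, so it acts as a smooth, tunable relaxation of the
    max operator over per-source similarities.
    """
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    return np.sum(w * x ** p) / np.sum(w * x ** (p - 1))

# Example: emphasise the dominant direction more as p grows.
sims = np.array([0.1, 0.3, 0.9])      # similarity of one TF bin to each source direction
weights = np.array([1.0, 1.0, 1.0])   # uniform weights; the paper adapts these
for p in (1, 2, 8):
    print(p, weighted_lehmer_mean(sims, weights, p))
```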

Memory Time Span in LSTMs for Multi-Speaker Source Separation

JeroenZegers/Nabu-MSSS 24 Aug 2018

With deep learning approaches becoming state-of-the-art in many speech (as well as non-speech) related machine learning tasks, efforts are being made to look inside these neural networks, which are often regarded as black boxes.
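
One simple way to make the "memory time span" idea concrete, though not necessarily the probing methodology used in this paper, is to perturb the input of a recurrent model at increasing lags and measure how much the current output changes. The PyTorch sketch below does this with an untrained LSTM standing in for a separation network's recurrent core.

```python
import torch

torch.manual_seed(0)

# A toy (untrained) LSTM stands in for a separation network's recurrent core.
lstm = torch.nn.LSTM(input_size=40, hidden_size=64, batch_first=True)

T = 200                      # number of frames
x = torch.randn(1, T, 40)    # e.g. a log-mel spectrogram of a mixture
base, _ = lstm(x)

# Perturb one frame at increasing lags before the last frame and see how much
# the output at the last frame changes; the lag at which the effect vanishes
# gives a rough notion of the memory time span.
for lag in (1, 5, 25, 100):
    x_pert = x.clone()
    x_pert[0, T - 1 - lag] += torch.randn(40)
    out, _ = lstm(x_pert)
    delta = (out[0, -1] - base[0, -1]).norm().item()
    print(f"lag {lag:4d} frames -> output change {delta:.4f}")
```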

Multi-scenario deep learning for multi-speaker source separation

JeroenZegers/Nabu-MSSS 24 Aug 2018

Furthermore, it is concluded that a single model trained on different scenarios is capable of matching the performance of scenario-specific models.

Unsupervised Deep Clustering for Source Separation: Direct Learning from Mixtures using Spatial Information

etzinis/unsupervised_spatial_dc 5 Nov 2018

We present a monophonic source separation system that is trained by only observing mixtures with no ground truth separation information.
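
The idea of supervising a separation network with spatial information alone can be illustrated by clustering the time-frequency bins of a two-channel mixture by their inter-channel phase difference and using the resulting cluster assignments as pseudo-masks. The sketch below is a heavily simplified stand-in for such a pipeline; the function name, STFT settings, and k-means clustering are assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.signal import stft
from sklearn.cluster import KMeans

def spatial_pseudo_masks(left, right, fs, n_sources=2, nperseg=512):
    """Cluster time-frequency bins of a stereo mixture by a simple spatial
    feature (inter-channel phase difference) to obtain pseudo separation
    masks without any ground-truth sources."""
    _, _, L = stft(left, fs=fs, nperseg=nperseg)
    _, _, R = stft(right, fs=fs, nperseg=nperseg)
    ipd = np.angle(L * np.conj(R))                 # inter-channel phase difference
    # cos/sin encoding avoids the phase-wrapping discontinuity at +/- pi
    feats = np.stack([np.cos(ipd), np.sin(ipd)], axis=-1).reshape(-1, 2)
    labels = KMeans(n_clusters=n_sources, n_init=10).fit_predict(feats)
    masks = np.eye(n_sources)[labels].reshape(*L.shape, n_sources)
    return masks                                   # binary masks, one per source

# Usage with a random stereo signal (stands in for a real two-speaker mixture):
fs = 16000
left, right = np.random.randn(fs), np.random.randn(fs)
masks = spatial_pseudo_masks(left, right, fs)
print(masks.shape)   # (freq_bins, frames, n_sources)
```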

CNN-LSTM models for Multi-Speaker Source Separation using Bayesian Hyper Parameter Optimization

JeroenZegers/Nabu-MSSS 19 Dec 2019

In this paper we propose a novel network for source separation using an encoder-decoder CNN and LSTM in parallel.
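
A minimal PyTorch sketch of such a parallel arrangement is given below: an encoder-decoder CNN branch and a bidirectional LSTM branch both process the mixture spectrogram, and their outputs are merged into one mask per speaker. All layer sizes are placeholders rather than the configuration reported in the paper (which was selected via Bayesian hyperparameter optimization).

```python
import torch
import torch.nn as nn

class ParallelCNNLSTMSeparator(nn.Module):
    """Illustrative sketch: an encoder-decoder CNN branch and an LSTM branch
    process the mixture spectrogram in parallel; their outputs are merged to
    predict one mask per speaker."""
    def __init__(self, n_freq=257, n_speakers=2, hidden=128):
        super().__init__()
        # Encoder-decoder CNN branch over the (freq, time) plane.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )
        # LSTM branch over time, treating frequency bins as features.
        self.lstm = nn.LSTM(n_freq, hidden, batch_first=True, bidirectional=True)
        self.lstm_proj = nn.Linear(2 * hidden, n_freq)
        # Merge both branches into per-speaker masks.
        self.mask = nn.Conv2d(2, n_speakers, kernel_size=1)

    def forward(self, spec):                                      # spec: (B, freq, time)
        cnn_out = self.decoder(self.encoder(spec.unsqueeze(1)))   # (B, 1, F', T')
        cnn_out = nn.functional.interpolate(cnn_out, size=spec.shape[1:])
        lstm_out, _ = self.lstm(spec.transpose(1, 2))             # (B, T, 2*hidden)
        lstm_out = self.lstm_proj(lstm_out).transpose(1, 2).unsqueeze(1)
        masks = torch.sigmoid(self.mask(torch.cat([cnn_out, lstm_out], dim=1)))
        return masks                                              # (B, n_speakers, freq, time)

model = ParallelCNNLSTMSeparator()
mixture_spec = torch.rand(4, 257, 100)           # batch of magnitude spectrograms
print(model(mixture_spec).shape)                 # torch.Size([4, 2, 257, 100])
```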