(Image credit: SigSep)
We present and release Spleeter, a new tool for music source separation with pre-trained models. Spleeter was designed with ease of use, separation performance, and speed in mind.
Ranked #2 on Music Source Separation on MUSDB18 (using extra training data)
We study the problem of source separation for music using deep learning with four known sources: drums, bass, vocals and other accompaniments.
Ranked #1 on Music Source Separation on MUSDB18 (using extra training data)
The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms.
Ranked #3 on Music Source Separation on MUSDB18
Music source separation is the task of decomposing music into its constitutive components, e.g., yielding separated stems for the vocals, bass, and drums.
Ranked #6 on Music Source Separation on MUSDB18
Models for audio source separation usually operate on the magnitude spectrum, which ignores phase information and makes separation performance dependent on hyper-parameters for the spectral front-end.
Ranked #9 on Music Source Separation on MUSDB18
Most of the currently successful source separation techniques use the magnitude spectrogram as input, and are therefore by default omitting part of the signal: the phase.
Ranked #8 on Music Source Separation on MUSDB18
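The recurring complaint above can be made concrete: a magnitude-domain method estimates a mask on the spectrogram and then reuses the mixture's phase for reconstruction. The sketch below is a minimal, self-contained illustration of that pipeline (an oracle Wiener-style mask on a two-tone mixture, with a naive STFT written from scratch); it is not any of the listed papers' models, and all names and parameters are illustrative.

```python
import numpy as np

def stft(x, n_fft=512, hop=128):
    """Naive STFT: Hann-windowed frames -> complex spectrogram (frames x bins)."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.array([np.fft.rfft(f) for f in frames])

# A toy mixture of two "sources": a low tone (stand-in for bass) and a high tone.
sr = 8000
t = np.arange(sr) / sr
bass = np.sin(2 * np.pi * 100 * t)
treble = np.sin(2 * np.pi * 1500 * t)
mix = bass + treble

B, T, M = stft(bass), stft(treble), stft(mix)

# Oracle Wiener-style soft mask computed from the source magnitudes.
mask = np.abs(B) ** 2 / (np.abs(B) ** 2 + np.abs(T) ** 2 + 1e-10)

# The separated magnitude is recombined with the *mixture* phase -- exactly
# the approximation that the waveform-domain papers above try to avoid.
bass_est = mask * np.abs(M) * np.exp(1j * np.angle(M))

# Energy of the bass estimate should concentrate near the 100 Hz bin.
freqs = np.fft.rfftfreq(512, 1 / sr)
peak_hz = freqs[np.abs(bass_est).mean(axis=0).argmax()]
```

In a learned system the mask would come from a network rather than the oracle above, but the phase reuse at reconstruction time is the same, which is precisely what motivates the waveform- and phase-aware methods in this list.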
This paper deals with the problem of audio source separation.
We propose a hierarchical meta-learning-inspired model for music source separation (Meta-TasNet) in which a generator model is used to predict the weights of individual extractor models.
Ranked #5 on Music Source Separation on MUSDB18
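The generator/extractor split described for Meta-TasNet can be sketched as a tiny hypernetwork: a generator maps a per-source embedding to the weights of an extractor that is then applied to the mixture. Everything below (linear extractor, sizes, names) is an illustrative assumption, not Meta-TasNet's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

n_feat, n_sources = 16, 4
source_emb = np.eye(n_sources)  # one embedding per stem (illustrative one-hot)
# Generator parameters: map an embedding to a flattened extractor weight matrix.
G = rng.standard_normal((n_sources, n_feat * n_feat)) * 0.1

def extract(mixture, source_id):
    # The generator predicts this source's extractor weights...
    W = (source_emb[source_id] @ G).reshape(n_feat, n_feat)
    # ...and the predicted extractor is applied to the mixture features.
    return mixture @ W

mix = rng.standard_normal((8, n_feat))  # dummy mixture features
stems = [extract(mix, s) for s in range(n_sources)]
```

The point of the construction is that one shared generator produces a distinct set of extractor weights per source, so the extractors can specialize without being trained as four independent models.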
Based on this idea, we drive the separator towards outputs deemed realistic by discriminator networks that are trained to distinguish real samples from the separator's outputs.
We study the problem of semi-supervised singing voice separation, in which the training data contains a set of samples of mixed music (singing and instrumental) and an unmatched set of instrumental music.