no code implementations • 4 Aug 2020 • Antoine Caillon, Adrien Bitton, Brice Gatinet, Philippe Esling
Recent studies show the ability of unsupervised models to learn invertible audio representations using auto-encoders.
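As a signal-level intuition for what "invertible representation" means here, the toy below trains a linear auto-encoder so that decoding an encoded frame approximately reconstructs it. This is only a minimal sketch with made-up shapes; the papers use deep convolutional models on audio spectrograms, not a linear layer on random data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 64))        # toy "frames": 256 frames, 64 spectral bins

def train_linear_ae(X, latent=16, lr=0.05, steps=1000):
    """Fit encoder/decoder weights by gradient descent on reconstruction MSE."""
    d = X.shape[1]
    We = rng.standard_normal((d, latent)) * 0.1   # encoder weights
    Wd = rng.standard_normal((latent, d)) * 0.1   # decoder weights
    n = len(X)
    for _ in range(steps):
        Z = X @ We                   # encode
        Xh = Z @ Wd                  # decode
        err = Xh - X                 # reconstruction error
        gWd = Z.T @ err / n          # gradient of MSE w.r.t. decoder
        gWe = X.T @ (err @ Wd.T) / n # gradient of MSE w.r.t. encoder
        We -= lr * gWe
        Wd -= lr * gWd
    return We, Wd

We, Wd = train_linear_ae(X)
recon = (X @ We) @ Wd                # decode(encode(x)): approximate inverse
mse = float(np.mean((recon - X) ** 2))
print(mse)                           # well below the variance of X
```

The "invertibility" the abstract refers to is exactly this round trip: any point in the latent space decodes back to a valid signal representation.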
no code implementations • 4 Aug 2020 • Adrien Bitton, Philippe Esling, Tatsuya Harada
In this setting the learned grain space is invertible, meaning that we can continuously synthesize sound when traversing its dimensions.
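Traversing an invertible grain space can be sketched as interpolating between two latent codes and decoding every intermediate point into an audio grain. The linear `decode` below is a hypothetical stand-in for the paper's trained grain decoder, and plain concatenation replaces the overlap-add the paper would use.

```python
import numpy as np

rng = np.random.default_rng(1)
Wd = rng.standard_normal((8, 512))       # toy linear decoder: 8-d latent -> 512-sample grain

def decode(z):
    """Hypothetical grain decoder: latent code -> bounded waveform grain."""
    return np.tanh(z @ Wd)

z_a = rng.standard_normal(8)             # latent code of grain A
z_b = rng.standard_normal(8)             # latent code of grain B
steps = np.linspace(0.0, 1.0, 16)
grains = [decode((1 - t) * z_a + t * z_b) for t in steps]
signal = np.concatenate(grains)          # naive concatenation of decoded grains
print(signal.shape)                      # (8192,)
```

Because every interpolated code decodes to a valid grain, the synthesized sound morphs continuously from A to B as the traversal proceeds.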
1 code implementation • 31 Jul 2020 • Philippe Esling, Theis Bazin, Adrien Bitton, Tristan Carsault, Ninon Devis
We show that our proposal can remove up to 90% of the model parameters without loss of accuracy, leading to ultra-light deep MIR models.
1 code implementation • 31 Jul 2020 • Philippe Esling, Ninon Devis, Adrien Bitton, Antoine Caillon, Axel Chemla--Romeu-Santos, Constance Douwes
This hypothesis states that extremely efficient small sub-networks exist within deep models and, if trained in isolation, would provide higher accuracy than the larger models.
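The pruning criterion underlying lottery-ticket experiments can be sketched as a magnitude mask: keep only the largest-magnitude weights (here 10%, matching the ~90% removal figure above) and zero the rest. The full lottery-ticket procedure then rewinds the surviving weights to their initial values and retrains; that loop is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))      # toy weight matrix

def magnitude_mask(W, keep=0.10):
    """Binary mask keeping the `keep` fraction of largest-magnitude weights."""
    k = int(W.size * keep)
    # k-th largest absolute value acts as the pruning threshold
    thresh = np.partition(np.abs(W).ravel(), -k)[-k]
    return (np.abs(W) >= thresh).astype(W.dtype)

mask = magnitude_mask(W)
sparse_W = W * mask                      # the pruned "ticket"
print(mask.mean())                       # fraction of weights kept, ~0.10
```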
1 code implementation • 13 Jul 2020 • Adrien Bitton, Philippe Esling, Tatsuya Harada
Although its definition is usually elusive, it can be seen from a signal processing viewpoint as all the spectral features that are perceived independently from pitch and loudness.
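One concrete instance of a spectral feature that is perceived independently of loudness is the spectral centroid: as a normalized first moment of the magnitude spectrum, it is unchanged when the signal is scaled. The snippet below only illustrates this signal-processing viewpoint; the paper learns far richer timbre representations than a single descriptor.

```python
import numpy as np

def spectral_centroid(x, sr=16000):
    """Amplitude-weighted mean frequency of the magnitude spectrum (Hz)."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return float((freqs * mag).sum() / mag.sum())

sr = 16000
t = np.arange(sr) / sr
# a simple two-partial tone standing in for an instrument note
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

c_loud = spectral_centroid(2.0 * x)   # same sound, louder
c_soft = spectral_centroid(0.1 * x)   # same sound, quieter
print(c_loud, c_soft)                 # same centroid either way
```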
3 code implementations • 12 Apr 2019 • Adrien Bitton, Philippe Esling, Antoine Caillon, Martin Fouilleul
Its training data subsets can be directly visualized in the 3D latent representation.
no code implementations • ICLR 2019 • Adrien Bitton, Philippe Esling, Axel Chemla--Romeu-Santos
We define timbre transfer as applying parts of the auditory properties of a musical instrument onto another.
Sound · Audio and Speech Processing
1 code implementation • Conference 2018 • Philippe Esling, Axel Chemla--Romeu-Santos, Adrien Bitton
Based on this, we introduce a method for descriptor-based synthesis and show that we can control the descriptors of an instrument while keeping its timbre structure.
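A hypothetical sketch of descriptor-based control: estimate the latent direction most correlated with a target descriptor by least squares, then shift a sound's latent code along it before decoding. This is only an illustration of the idea of steering descriptors in a latent space; the paper instead regularizes the space so that descriptor topology is built in.

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.standard_normal((500, 16))        # toy latent codes for a sound corpus
# toy descriptor values, correlated with the latent space by construction
d = Z @ rng.standard_normal(16) + 0.1 * rng.standard_normal(500)

# least-squares fit of a linear map latent -> descriptor
w, *_ = np.linalg.lstsq(Z, d, rcond=None)
direction = w / np.linalg.norm(w)          # unit latent direction for the descriptor

z = Z[0]                                   # code of one sound
z_shifted = z + 2.0 * direction            # push the descriptor upward
print(float(z_shifted @ w - z @ w) > 0)    # True: predicted descriptor rises
```

Decoding `z_shifted` (with the model's decoder, not shown) would then yield a sound whose descriptor has increased while the rest of its timbre structure is preserved.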
Sound · Audio and Speech Processing