An Integrated System Architecture for Generative Audio Modeling

29 Sep 2021 · Lonce Wyse, Purnima Kamath, Chitralekha Gupta

We introduce a new system for data-driven audio sound model design built around two different neural network architectures, a Generative Adversarial Network (GAN) and a Recurrent Neural Network (RNN), that exploits the complementary strengths of each to achieve objectives that neither can address alone. The system's objective is to generate interactively controllable sound models given (a) a range of sounds the model should be able to synthesize, and (b) a specification of the parametric controls for navigating that space of sounds. The range of sounds is defined by a dataset provided by the designer, while the means of navigation is defined by a combination of data labels and the selection of a sub-manifold from the latent space learned by the GAN. The system takes advantage of the GAN's rich latent space, which contains sounds that fill out the spaces "between" the real training sounds. This GAN-generated augmented data is then used to train an RNN, which offers immediate parameter response and can generate audio over unlimited periods of time. Furthermore, we develop a self-organizing map technique for "smoothing" the GAN's latent space, which yields perceptually smooth interpolation between audio timbres; we validate this process through user studies. Our system advances the state of the art in generative sound model design with a system configuration and components that improve interpolation, and it extends audio modeling beyond musical pitch and percussive instrument sounds into the more complex space of audio textures.

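To make the GAN-based data augmentation concrete, here is a minimal sketch of decoding sounds "between" two training examples by interpolating their latent codes. The generator `G`, the constant `LATENT_DIM`, and the helper name `augment_between` are hypothetical placeholders, not names from the paper; the paper's actual conditioning and sub-manifold selection are not reproduced here.

```python
import numpy as np
import torch

# Assumption: G is a trained GAN generator mapping a latent vector to a
# fixed-length audio clip. LATENT_DIM is an illustrative value.
LATENT_DIM = 128

def augment_between(G, z_a, z_b, n_steps=8):
    """Decode sounds lying 'between' two real examples by walking a
    straight line between their latent codes and running each point
    through the generator."""
    clips = []
    for t in np.linspace(0.0, 1.0, n_steps):
        z = (1.0 - float(t)) * z_a + float(t) * z_b  # linear latent path
        with torch.no_grad():
            audio = G(z.unsqueeze(0))                # -> (1, n_samples)
        # Keep the interpolation position as a control label for later
        # RNN training on the augmented data.
        clips.append((float(t), audio.squeeze(0)))
    return clips
```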
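The RNN side of the architecture can be pictured as a parameter-conditioned autoregressive model: at every time step it consumes the previous audio sample together with the current control vector, so a control change affects the very next output sample, and generation can run indefinitely. The following sketch is an assumption-laden illustration (the class `ControlledRNNSynth`, the GRU size, and the `controls` callback are all hypothetical), not the paper's actual network.

```python
import torch
import torch.nn as nn

class ControlledRNNSynth(nn.Module):
    """Sketch of a control-conditioned autoregressive RNN synthesizer."""
    def __init__(self, n_controls, hidden=256):
        super().__init__()
        # Input at each step: previous sample (1) + control parameters.
        self.rnn = nn.GRU(1 + n_controls, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, prev_samples, controls, h=None):
        # prev_samples: (B, T, 1); controls: (B, T, n_controls)
        y, h = self.rnn(torch.cat([prev_samples, controls], dim=-1), h)
        return torch.tanh(self.out(y)), h

def generate(model, controls, n_steps):
    """Run one sample at a time; `controls` is a hypothetical callback
    returning the control vector at step t, so parameters can change at
    any moment and take effect on the next sample."""
    x, h, samples = torch.zeros(1, 1, 1), None, []
    with torch.no_grad():
        for t in range(n_steps):
            c = controls(t).view(1, 1, -1)
            x, h = model(x, c, h)
            samples.append(x.item())
    return samples
```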
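Finally, the latent-space "smoothing" can be illustrated with a self-organizing map fitted over GAN latent vectors: after training, neighboring grid nodes hold nearby latent codes, so stepping across the grid (and decoding each node with the generator) traverses timbre more evenly than a raw straight line in latent space. This is a minimal from-scratch SOM sketch under assumed hyperparameters (grid size, decay schedules); the paper's exact procedure may differ.

```python
import numpy as np

def train_som(latents, grid=(8, 8), n_iters=5000, lr0=0.5, sigma0=3.0, seed=0):
    """Fit a 2-D self-organizing map to GAN latent vectors.

    latents: (N, latent_dim) array of latent codes sampled from the GAN.
    Returns a (gx, gy, latent_dim) grid of codes forming a smoothed
    2-D sub-manifold of the latent space.
    """
    rng = np.random.default_rng(seed)
    gx, gy = grid
    d = latents.shape[1]
    weights = rng.standard_normal((gx, gy, d)) * latents.std()
    coords = np.stack(
        np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"), axis=-1)
    for it in range(n_iters):
        frac = it / n_iters
        lr = lr0 * (1.0 - frac)               # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 1e-3  # shrinking neighborhood
        z = latents[rng.integers(len(latents))]
        # Best-matching unit: grid node whose code is closest to z.
        dists = np.linalg.norm(weights - z, axis=-1)
        bmu = np.unravel_index(dists.argmin(), dists.shape)
        # Pull the BMU and its grid neighbors toward z, weighted by a
        # Gaussian over grid distance.
        g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                   / (2.0 * sigma ** 2))
        weights += lr * g[..., None] * (z - weights)
    return weights
```

Decoding each node of the returned grid through the generator gives the discretized sub-manifold whose perceptual smoothness the paper evaluates via user studies.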