Search Results for author: Lonce Wyse

Found 9 papers, 0 papers with code

Example-Based Framework for Perceptually Guided Audio Texture Generation

no code implementations23 Aug 2023 Purnima Kamath, Chitralekha Gupta, Lonce Wyse, Suranga Nanayakkara

By using a few synthetic examples to indicate the presence or absence of a semantic attribute, we infer the guidance vectors in the latent space of the StyleGAN to control that attribute during generation.

Attribute, Texture Synthesis
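
The guidance-vector idea above can be illustrated with a simple difference-of-means sketch in StyleGAN's latent (w) space. Everything below (the latent codes, the `steer` helper, the commented-out generator) is a hypothetical stand-in, not the authors' inference method:

```python
# Minimal sketch: estimate a latent "guidance" direction from a few
# labelled examples, then steer generation along it. The latent codes
# and the generator referenced at the end are placeholders.
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 512

# Hypothetical StyleGAN latent codes for a few synthetic examples that
# do / do not exhibit the semantic attribute of interest.
w_with_attr = rng.normal(size=(4, latent_dim))     # attribute present
w_without_attr = rng.normal(size=(4, latent_dim))  # attribute absent

# Difference-of-means estimate of the attribute direction.
direction = w_with_attr.mean(axis=0) - w_without_attr.mean(axis=0)
direction /= np.linalg.norm(direction)

def steer(w, strength):
    """Move a latent code along the attribute direction."""
    return w + strength * direction

w = rng.normal(size=latent_dim)
w_more = steer(w, strength=2.0)   # emphasize the attribute
w_less = steer(w, strength=-2.0)  # suppress it
# audio = generator.synthesize(w_more)  # generator is assumed, not shown
```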

Towards Controllable Audio Texture Morphing

no code implementations23 Apr 2023 Chitralekha Gupta, Purnima Kamath, Yize Wei, Zhuoyao Li, Suranga Nanayakkara, Lonce Wyse

In this paper, we propose a data-driven approach to train a Generative Adversarial Network (GAN) conditioned on "soft-labels" distilled from the penultimate layer of an audio classifier trained on a target set of audio texture classes.

Generative Adversarial Network
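
A minimal sketch of the soft-label conditioning described above, assuming PyTorch; the classifier architecture and all dimensions are illustrative placeholders rather than the paper's setup:

```python
# Distill "soft-labels" from the penultimate layer of a trained audio
# classifier and feed them to a conditional generator.
import torch
import torch.nn as nn

class AudioClassifier(nn.Module):
    def __init__(self, n_feats=128, n_hidden=64, n_classes=5):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_feats, n_hidden), nn.ReLU())
        self.head = nn.Linear(n_hidden, n_classes)

    def forward(self, x):
        h = self.body(x)          # penultimate-layer activations
        return self.head(h), h

classifier = AudioClassifier()
classifier.eval()

feats = torch.randn(8, 128)             # stand-in audio features
with torch.no_grad():
    _, soft_labels = classifier(feats)  # (8, 64) conditioning vectors

# A conditional generator consumes noise concatenated with the soft
# label; interpolating between two soft labels morphs between textures.
z = torch.randn(8, 100)
cond_input = torch.cat([z, soft_labels], dim=1)  # -> generator(cond_input)
```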

Parameter Sensitivity of Deep-Feature based Evaluation Metrics for Audio Textures

no code implementations23 Aug 2022 Chitralekha Gupta, Yize Wei, Zequn Gong, Purnima Kamath, Zhuoyao Li, Lonce Wyse

These metrics use deep features that summarize the statistics of any given audio texture, thus being inherently sensitive to variations in the statistical parameters that define an audio texture.

Texture Synthesis
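
As one concrete instance of a deep-feature metric of this kind, a Fréchet-style distance between Gaussians fit to two embedding sets can be computed as below. The embedding model is assumed, and this generic recipe is not necessarily the exact metric studied in the paper:

```python
# Frechet-style distance between two sets of audio-texture embeddings.
import numpy as np
from scipy import linalg

def frechet_distance(emb_a, emb_b):
    """Frechet distance between Gaussians fit to two embedding sets."""
    mu_a, mu_b = emb_a.mean(axis=0), emb_b.mean(axis=0)
    cov_a = np.cov(emb_a, rowvar=False)
    cov_b = np.cov(emb_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    return diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean)

rng = np.random.default_rng(0)
real_emb = rng.normal(size=(200, 128))           # embeddings of real textures
fake_emb = rng.normal(loc=0.1, size=(200, 128))  # embeddings of generated ones
print(frechet_distance(real_emb, fake_emb))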

Sound Model Factory: An Integrated System Architecture for Generative Audio Modelling

no code implementations27 Jun 2022 Lonce Wyse, Purnima Kamath, Chitralekha Gupta

We introduce a new system for data-driven audio sound model design built around two different neural network architectures, a Generative Adversarial Network (GAN) and a Recurrent Neural Network (RNN), that takes advantage of the unique characteristics of each to achieve the system objectives that neither is capable of addressing alone.

Generative Adversarial Network
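
A rough, stubbed-out sketch of the data flow such a two-stage design implies (the GAN learns a navigable sound space, then an RNN is trained on GAN output for continuous parametric control). Every function body here is a placeholder marking the handoff, not the authors' implementation:

```python
import numpy as np

def train_gan(dataset):
    # Stand-in: returns a "generator" mapping latent z -> audio clip.
    return lambda z: np.tanh(z.cumsum())  # placeholder synthesis

def train_conditional_rnn(clips, params):
    # Stand-in for training an RNN that predicts samples given params.
    return lambda p: clips[int(p * (len(clips) - 1))]

rng = np.random.default_rng(0)
dataset = rng.normal(size=(16, 1024))
generator = train_gan(dataset)

# Stage 1 -> 2 handoff: sample an organized path through latent space
# and keep (parameter, clip) pairs as the RNN's training data.
params = np.linspace(0.0, 1.0, 8)
clips = [generator(rng.normal(size=1024)) for _ in params]
rnn = train_conditional_rnn(clips, params)
audio = rnn(0.5)  # continuous control parameter at synthesis time
```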

An Integrated System Architecture for Generative Audio Modeling

no code implementations29 Sep 2021 Lonce Wyse, Purnima Kamath, Chitralekha Gupta

We introduce a new system for data-driven audio sound model design built around two different neural network architectures, a Generative Adversarial Network (GAN) and a Recurrent Neural Network (RNN), that takes advantage of the unique characteristics of each to achieve the system objectives that neither is capable of addressing alone.

Generative Adversarial Network

Signal Representations for Synthesizing Audio Textures with Generative Adversarial Networks

no code implementations12 Mar 2021 Chitralekha Gupta, Purnima Kamath, Lonce Wyse

Generative Adversarial Networks (GANs) currently achieve state-of-the-art sound synthesis quality for pitched musical instruments using a 2-channel spectrogram representation consisting of log magnitude and instantaneous frequency (the "IFSpectrogram").

Audio Synthesis
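
The "IFSpectrogram" is a two-channel representation: log STFT magnitude plus instantaneous frequency obtained from the unwrapped phase. A minimal sketch of computing it, with illustrative parameter values:

```python
import numpy as np
from scipy.signal import stft

fs = 16000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # stand-in audio: 1 s at 440 Hz

_, _, Z = stft(x, fs=fs, nperseg=512)
log_mag = np.log(np.abs(Z) + 1e-6)                # channel 1: log magnitude

phase = np.unwrap(np.angle(Z), axis=1)            # unwrap phase along time
inst_freq = np.diff(phase, axis=1, prepend=phase[:, :1])  # channel 2: phase derivative

if_spectrogram = np.stack([log_mag, inst_freq])   # shape: (2, freq, time)
print(if_spectrogram.shape)
```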

Mechanisms of Artistic Creativity in Deep Learning Neural Networks

no code implementations30 Jun 2019 Lonce Wyse

The generative capabilities of deep learning neural networks (DNNs) have been attracting increasing attention, both for the remarkable artifacts they produce and for the vast conceptual difference between how they are programmed and what they do.

Conditioning a Recurrent Neural Network to synthesize musical instrument transients

no code implementations26 Mar 2019 Lonce Wyse, Muhammad Huzaifah

A Recurrent Neural Network (RNN) is trained to predict sound samples based on audio input augmented by control parameter information for pitch, volume, and instrument identification.
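
A sketch, assuming PyTorch, of the conditioning scheme described: each input audio sample is augmented with pitch, volume, and a one-hot instrument id before entering the RNN. All dimensions and names are illustrative:

```python
import torch
import torch.nn as nn

n_instruments = 4
rnn = nn.GRU(input_size=1 + 2 + n_instruments, hidden_size=64, batch_first=True)
readout = nn.Linear(64, 1)  # predict the next audio sample

T = 256
audio = torch.randn(1, T, 1)            # past samples
pitch = torch.full((1, T, 1), 0.5)      # normalized control parameters
volume = torch.full((1, T, 1), 0.8)
instrument = torch.zeros(1, T, n_instruments)
instrument[..., 2] = 1.0                # one-hot instrument id

x = torch.cat([audio, pitch, volume, instrument], dim=-1)
h, _ = rnn(x)
next_sample = readout(h[:, -1])         # train against the true next sample
```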

Real-valued parametric conditioning of an RNN for interactive sound synthesis

no code implementations28 May 2018 Lonce Wyse

A Recurrent Neural Network (RNN) for audio synthesis is trained by augmenting the audio input with information about signal characteristics such as pitch, amplitude, and instrument.

Audio Synthesis
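
The interactive use case implies a sample-by-sample synthesis loop in which real-valued parameters can change at every step. The tiny untrained GRU below is purely illustrative of that loop, not the paper's model:

```python
import torch
import torch.nn as nn

rnn = nn.GRUCell(input_size=1 + 2, hidden_size=32)  # sample + (pitch, amplitude)
readout = nn.Linear(32, 1)

h = torch.zeros(1, 32)
sample = torch.zeros(1, 1)
out = []
for t in range(1000):
    pitch = torch.tensor([[0.2 + 0.6 * t / 1000]])  # e.g. a user-driven sweep
    amp = torch.tensor([[0.8]])
    h = rnn(torch.cat([sample, pitch, amp], dim=-1), h)
    sample = torch.tanh(readout(h))                 # next predicted sample
    out.append(sample.item())
# `out` now holds the generated waveform under continuous control.
```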
