Search Results for author: Curtis Hawthorne

Found 16 papers, 13 papers with code

Continuous diffusion for categorical data

no code implementations 28 Nov 2022 Sander Dieleman, Laurent Sartran, Arman Roshannai, Nikolay Savinov, Yaroslav Ganin, Pierre H. Richemond, Arnaud Doucet, Robin Strudel, Chris Dyer, Conor Durkan, Curtis Hawthorne, Rémi Leblond, Will Grathwohl, Jonas Adler

Diffusion models have quickly become the go-to paradigm for generative modelling of perceptual signals (such as images and sound) through iterative refinement.

Language Modelling

The Chamber Ensemble Generator: Limitless High-Quality MIR Data via Generative Modeling

1 code implementation 28 Sep 2022 Yusong Wu, Josh Gardner, Ethan Manilow, Ian Simon, Curtis Hawthorne, Jesse Engel

We call this system the Chamber Ensemble Generator (CEG), and use it to generate a large dataset of chorales from four different chamber ensembles (CocoChorales).

Information Retrieval · Music Information Retrieval +2

Multi-instrument Music Synthesis with Spectrogram Diffusion

1 code implementation 11 Jun 2022 Curtis Hawthorne, Ian Simon, Adam Roberts, Neil Zeghidour, Josh Gardner, Ethan Manilow, Jesse Engel

An ideal music synthesizer should be both interactive and expressive, generating high-fidelity audio in realtime for arbitrary combinations of instruments and notes.

Generative Adversarial Network · Music Generation

Sequence-to-Sequence Piano Transcription with Transformers

2 code implementations 19 Jul 2021 Curtis Hawthorne, Ian Simon, Rigel Swavely, Ethan Manilow, Jesse Engel

Automatic Music Transcription has seen significant progress in recent years by training custom deep neural networks on large datasets.

Information Retrieval · Music Information Retrieval +2
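The title above frames transcription as a generic sequence-to-sequence problem: spectrogram frames in, a stream of MIDI-like note-event tokens out, handled by a standard encoder-decoder Transformer. Below is a minimal PyTorch sketch of that framing; the layer sizes, vocabulary size, and module names are illustrative assumptions rather than the published implementation, and positional encodings are omitted for brevity.

```python
import torch
import torch.nn as nn

class Seq2SeqTranscriber(nn.Module):
    """Toy encoder-decoder that maps spectrogram frames to note-event tokens.

    Illustrative only: dimensions, vocabulary, and architecture details are
    placeholders, not the published model."""
    def __init__(self, n_mels=229, d_model=256, vocab_size=1000):
        super().__init__()
        self.input_proj = nn.Linear(n_mels, d_model)      # spectrogram frame -> model dim
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=3, num_decoder_layers=3,
            batch_first=True,
        )
        self.output_head = nn.Linear(d_model, vocab_size)

    def forward(self, spectrogram, target_tokens):
        # spectrogram: (batch, frames, n_mels); target_tokens: (batch, seq)
        src = self.input_proj(spectrogram)
        tgt = self.token_emb(target_tokens)
        causal_mask = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        decoded = self.transformer(src, tgt, tgt_mask=causal_mask)
        return self.output_head(decoded)                  # logits over the event vocabulary

model = Seq2SeqTranscriber()
logits = model(torch.randn(2, 100, 229), torch.randint(0, 1000, (2, 50)))
print(logits.shape)  # torch.Size([2, 50, 1000])
```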

Symbolic Music Generation with Diffusion Models

1 code implementation 30 Mar 2021 Gautam Mittal, Jesse Engel, Curtis Hawthorne, Ian Simon

Score-based generative models and diffusion probabilistic models have been successful at generating high-quality samples in continuous domains such as images and audio.

Music Generation

Improving Perceptual Quality of Drum Transcription with the Expanded Groove MIDI Dataset

1 code implementation 1 Apr 2020 Lee Callender, Curtis Hawthorne, Jesse Engel

We introduce the Expanded Groove MIDI dataset (E-GMD), an automatic drum transcription (ADT) dataset that contains 444 hours of audio from 43 drum kits, making it an order of magnitude larger than similar datasets, and the first with human-performed velocity annotations.

Drum Transcription

Encoding Musical Style with Transformer Autoencoders

no code implementations ICML 2020 Kristy Choi, Curtis Hawthorne, Ian Simon, Monica Dinculescu, Jesse Engel

We consider the problem of learning high-level controls over the global structure of generated sequences, particularly in the context of symbolic music generation with complex language models.

Music Generation

The Bach Doodle: Approachable music composition with machine learning at scale

no code implementations14 Jul 2019 Cheng-Zhi Anna Huang, Curtis Hawthorne, Adam Roberts, Monica Dinculescu, James Wexler, Leon Hong, Jacob Howcroft

To make music composition more approachable, we designed the first AI-powered Google Doodle, the Bach Doodle, where users can create their own melody and have it harmonized by a machine learning model Coconet (Huang et al., 2017) in the style of Bach.

BIG-bench Machine Learning · Quantization

Music Transformer

12 code implementations ICLR 2019 Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Ian Simon, Curtis Hawthorne, Andrew M. Dai, Matthew D. Hoffman, Monica Dinculescu, Douglas Eck

Relative attention is impractical for long sequences such as musical compositions, since its memory requirement for intermediate relative information is quadratic in the sequence length.

Music Generation · Music Modeling
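The memory issue mentioned in the entry above is what Music Transformer addresses: a "skewing" rearrangement computes relative-position logits directly from the query and relative-embedding matrices, avoiding the large per-pair intermediate tensor that a naive implementation of relative attention materializes. A rough PyTorch sketch of that rearrangement, under my reading of the procedure (shapes and function names are assumptions):

```python
import torch
import torch.nn.functional as F

def relative_logits(q, rel_emb):
    """Compute relative attention logits via the 'skewing' rearrangement.

    q:       (L, d) queries
    rel_emb: (L, d) embeddings for relative distances -(L-1)..0
    Returns  (L, L) where out[i, j] holds the logit for relative distance j - i;
             entries with j > i are padding artifacts that a causal attention
             mask would discard anyway.
    """
    L = q.size(0)
    qe = q @ rel_emb.t()               # (L, L): avoids an (L, L, d) per-pair tensor
    padded = F.pad(qe, (1, 0))         # prepend a zero column -> (L, L+1)
    skewed = padded.reshape(L + 1, L)  # reinterpret the same elements -> (L+1, L)
    return skewed[1:, :]               # drop the first row -> (L, L)

q = torch.randn(4, 8)
e = torch.randn(4, 8)
print(relative_logits(q, e).shape)     # torch.Size([4, 4])
```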

Learning a Latent Space of Multitrack Measures

1 code implementation 1 Jun 2018 Ian Simon, Adam Roberts, Colin Raffel, Jesse Engel, Curtis Hawthorne, Douglas Eck

Discovering and exploring the underlying structure of multi-instrumental music using learning-based approaches remains an open problem.

A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music

7 code implementations ICML 2018 Adam Roberts, Jesse Engel, Colin Raffel, Curtis Hawthorne, Douglas Eck

The Variational Autoencoder (VAE) has proven to be an effective model for producing semantically meaningful latent representations for natural data.

Onsets and Frames: Dual-Objective Piano Transcription

1 code implementation 30 Oct 2017 Curtis Hawthorne, Erich Elsen, Jialin Song, Adam Roberts, Ian Simon, Colin Raffel, Jesse Engel, Sageev Oore, Douglas Eck

We advance the state of the art in polyphonic piano music transcription by using a deep convolutional and recurrent neural network which is trained to jointly predict onsets and frames.

Music Transcription
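The entry above describes a convolutional-recurrent network with two jointly trained objectives: note-onset detection and frame-wise pitch activation. A toy PyTorch sketch of that dual-objective setup follows; the layer sizes and the simple shared trunk are illustrative assumptions, and the published model is larger and conditions its frame predictions on the onset predictions.

```python
import torch
import torch.nn as nn

class DualObjectiveTranscriber(nn.Module):
    """Toy onsets-and-frames style model: a shared conv trunk feeding a
    recurrent layer, with separate onset and frame heads. Sizes are
    placeholders, not the published architecture."""
    def __init__(self, n_mels=229, n_pitches=88, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.rnn = nn.LSTM(16 * n_mels, hidden, batch_first=True, bidirectional=True)
        self.onset_head = nn.Linear(2 * hidden, n_pitches)
        self.frame_head = nn.Linear(2 * hidden, n_pitches)

    def forward(self, mel):                   # mel: (batch, frames, n_mels)
        x = self.conv(mel.unsqueeze(1))       # (batch, 16, frames, n_mels)
        x = x.permute(0, 2, 1, 3).flatten(2)  # (batch, frames, 16 * n_mels)
        x, _ = self.rnn(x)
        return self.onset_head(x), self.frame_head(x)  # per-pitch logits

def joint_loss(onset_logits, frame_logits, onset_labels, frame_labels):
    # Train both objectives together with per-pitch binary cross-entropy.
    bce = nn.functional.binary_cross_entropy_with_logits
    return bce(onset_logits, onset_labels) + bce(frame_logits, frame_labels)
```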
