Learning to Groove with Inverse Sequence Transformations

14 May 2019 · Jon Gillick, Adam Roberts, Jesse Engel, Douglas Eck, David Bamman

We explore models for translating abstract musical ideas (scores, rhythms) into expressive performances using Seq2Seq and recurrent Variational Information Bottleneck (VIB) models. Though Seq2Seq models usually require painstakingly aligned corpora, we show that it is possible to adapt an approach from the Generative Adversarial Network (GAN) literature (e.g. Pix2Pix (Isola et al., 2017) and Vid2Vid (Wang et al., 2018a)) to sequences, creating large volumes of paired data by performing simple transformations and training generative models to plausibly invert these transformations...
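
The paired-data idea from the abstract can be sketched in a few lines: apply a simple, well-understood transformation (such as quantizing note timings) to real performances, then use the (transformed, original) pairs as aligned training data for a model that learns to invert the transformation. The sketch below is illustrative only; the array layout (onset time in quarter notes, drum id, velocity) and the 16th-note grid are assumptions, not the paper's actual data format.

    # Minimal sketch of paired-data construction by a simple transformation.
    # Assumed layout: each note is (onset_time_in_quarters, drum_id, velocity).
    import numpy as np

    def quantize(performance, steps_per_quarter=4):
        """Snap onsets to a metrical grid and flatten dynamics,
        producing a 'score-like' version of an expressive performance."""
        grid = 1.0 / steps_per_quarter
        quantized = performance.copy()
        quantized[:, 0] = np.round(quantized[:, 0] / grid) * grid  # drop microtiming
        quantized[:, 2] = 0.8                                      # drop dynamics
        return quantized

    def make_paired_dataset(performances):
        """(input, target) pairs: the model sees the quantized version and is
        trained to reproduce the expressive original (Humanization)."""
        return [(quantize(p), p) for p in performances]

    # Toy usage: a kick played slightly early and a softer snare played late.
    toy_performance = np.array([[0.02, 36, 0.9],
                                [0.48, 38, 0.6]])
    pairs = make_paired_dataset([toy_performance])
    print(pairs[0][0])  # quantized input: onsets on the grid, flat velocity
    print(pairs[0][1])  # expressive target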


Datasets

Introduced in the Paper: Groove
Mentioned in the Paper: MAESTRO

Methods used in the Paper

METHOD                         TYPE
Concatenated Skip Connection   Skip Connections
PatchGAN                       Discriminators
ReLU                           Activation Functions
Batch Normalization            Normalization
Convolution                    Convolutions
Leaky ReLU                     Activation Functions
Dropout                        Regularization
Pix2Pix                        Generative Models
Sigmoid Activation             Activation Functions
Tanh Activation                Activation Functions
LSTM                           Recurrent Neural Networks
Seq2Seq                        Sequence To Sequence Models
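
For concreteness, the sketch below wires several of the listed components (LSTM, Seq2Seq, Dropout, a sigmoid output) into a minimal encoder-decoder in Keras. The layer sizes, feature dimension, and single sigmoid output head are assumptions chosen to keep the example short, not the architecture reported in the paper.

    # Minimal LSTM Seq2Seq sketch (not the authors' implementation).
    # Each timestep is assumed to be a fixed-size drum-feature vector.
    import tensorflow as tf

    SEQ_LEN, FEAT_DIM, HIDDEN = 32, 27, 256  # assumed sizes

    # Encoder: reads the quantized (score-like) sequence.
    enc_in = tf.keras.Input(shape=(SEQ_LEN, FEAT_DIM))
    _, h, c = tf.keras.layers.LSTM(HIDDEN, return_state=True, dropout=0.3)(enc_in)

    # Decoder: generates the expressive sequence, conditioned on the encoder state.
    dec_in = tf.keras.Input(shape=(SEQ_LEN, FEAT_DIM))
    dec_out = tf.keras.layers.LSTM(HIDDEN, return_sequences=True,
                                   dropout=0.3)(dec_in, initial_state=[h, c])

    # A single sigmoid head keeps the sketch short; in practice hits, timing
    # offsets, and velocities would typically get separate output heads.
    out = tf.keras.layers.Dense(FEAT_DIM, activation="sigmoid")(dec_out)

    model = tf.keras.Model([enc_in, dec_in], out)
    model.compile(optimizer="adam", loss="mse")
    model.summary()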