Music Modeling

22 papers with code • 2 benchmarks • 6 datasets

(Image credit: R-Transformer)

Most implemented papers

An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling

locuslab/TCN 4 Mar 2018

Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory.
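
The core building block is simple to sketch. Below is a minimal causal dilated convolution stack in PyTorch; this is an illustrative sketch, not the locuslab/TCN implementation, and all names and sizes are placeholders:

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1-D convolution that only sees past timesteps (left padding only)."""
    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation   # left-pad so output stays causal
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                          # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))    # pad the past, never the future
        return self.conv(x)

class TinyTCN(nn.Module):
    """Stack of causal convs with exponentially growing dilation, so the
    receptive field (the 'effective memory') doubles with each layer."""
    def __init__(self, channels=64, layers=4, kernel_size=3):
        super().__init__()
        self.layers = nn.ModuleList(
            CausalConv1d(channels, kernel_size, dilation=2 ** i)
            for i in range(layers)
        )

    def forward(self, x):
        for layer in self.layers:
            x = x + torch.relu(layer(x))           # residual connection, as in TCNs
        return x

x = torch.randn(8, 64, 128)                        # (batch, channels, time)
print(TinyTCN()(x).shape)                          # torch.Size([8, 64, 128])
```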

LSTM: A Search Space Odyssey

flukeskywalker/highway-networks 13 Mar 2015

Several variants of the Long Short-Term Memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995.

Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling

jych/librnn 11 Dec 2014

In this paper we compare different types of recurrent units in recurrent neural networks (RNNs).
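
A minimal sketch of such a comparison using PyTorch's built-in units; the task, sizes, and data below are placeholders, not the paper's experimental setup:

```python
import torch
import torch.nn as nn

# Compare recurrent units on the same toy inputs. The parameter counts show
# the real trade-off the paper studies: LSTM carries ~4x the weights of a
# vanilla RNN, GRU ~3x.
units = {"RNN": nn.RNN, "GRU": nn.GRU, "LSTM": nn.LSTM}

x = torch.randn(32, 50, 16)            # (batch, time, features)
y = x[:, -1, :]                        # toy target: reproduce the final timestep

for name, cls in units.items():
    rnn = cls(input_size=16, hidden_size=16, batch_first=True)
    out, _ = rnn(x)                    # works for all three unit types
    loss = nn.functional.mse_loss(out[:, -1, :], y)
    n_params = sum(p.numel() for p in rnn.parameters())
    print(f"{name}: {n_params} params, untrained loss {loss.item():.3f}")
```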

Music Transformer

Natooz/MidiTok ICLR 2019

Existing implementations of relative attention are impractical for long sequences such as musical compositions, since their memory complexity for intermediate relative information is quadratic in the sequence length.
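
The paper's remedy is a "skewing" reindexing that avoids materialising that quadratic intermediate. Here is a minimal PyTorch sketch (shapes and names are illustrative) that checks the trick against the naive computation:

```python
import torch
import torch.nn.functional as F

def skew(qe):
    # Music Transformer's "skewing": pad one zero column on the left,
    # reshape, and drop the first row. This rearranges Q @ E_r.T so entry
    # (i, j) holds the score for relative distance j - i. Entries above
    # the diagonal are junk, but causal masking hides them anyway.
    L = qe.size(-1)
    return F.pad(qe, (1, 0)).reshape(L + 1, L)[1:, :]

L, D = 6, 8
q = torch.randn(L, D)   # queries
e = torch.randn(L, D)   # one embedding per relative distance;
                        # row r corresponds to distance r - (L - 1)

# Naive route: materialise every pairwise relative embedding as an
# (L, L, D) tensor -- the quadratic intermediate the abstract refers to.
rel = (torch.arange(L)[None, :] - torch.arange(L)[:, None] + L - 1).clamp(0, L - 1)
s_naive = torch.einsum('id,ijd->ij', q, e[rel])

# Memory-efficient route: one (L, L) matmul plus the skew, so the only
# relative storage is the (L, D) embedding table itself.
s_skew = skew(q @ e.T)

mask = torch.tril(torch.ones(L, L, dtype=torch.bool))  # valid causal positions
print(torch.allclose(s_naive[mask], s_skew[mask]))     # True
```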

Pop Music Transformer: Beat-based Modeling and Generation of Expressive Pop Piano Compositions

YatingMusic/remi 1 Feb 2020

In contrast with the general approach of modeling music as a plain stream of MIDI-like events, this paper shows that Transformers can do even better for music modeling when we improve the way a musical score is converted into the data fed to a Transformer model.
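
A toy sketch of a beat-based token stream in the spirit of REMI follows; the note data and token names are hypothetical, not the YatingMusic/remi API:

```python
# Each note becomes Position / Pitch / Duration events, and explicit Bar
# tokens encode the metrical grid that plain time-shift encodings leave
# implicit -- the representational change the paper argues for.
notes = [                  # (bar, position-in-16ths, MIDI pitch, dur-in-16ths)
    (0, 0, 60, 4), (0, 4, 64, 4), (0, 8, 67, 8),
    (1, 0, 65, 8), (1, 8, 64, 8),
]

tokens, current_bar = [], -1
for bar, pos, pitch, dur in notes:
    while current_bar < bar:           # emit a Bar token at every barline
        tokens.append("Bar")
        current_bar += 1
    tokens += [f"Position_{pos}/16", f"Pitch_{pitch}", f"Duration_{dur}"]

print(tokens)
# ['Bar', 'Position_0/16', 'Pitch_60', 'Duration_4', 'Position_4/16', ...]
```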

Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset

BShakhovsky/PolyphonicPianoTranscription ICLR 2019

Generating musical audio directly with neural networks is notoriously difficult because it requires coherently modeling structure at many different timescales.

Counterpoint by Convolution

czhuang/coconet 18 Mar 2019

Machine learning models of music typically break up the task of composition into a chronological process, composing a piece of music in a single pass from beginning to end.
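
Coconet instead treats composition as inpainting on a fixed grid. A toy sketch of that setup, with illustrative shapes and a random mask:

```python
import torch

# Represent the score as a (voices x timesteps) grid, hide a random subset
# of cells, and train a model to fill the hidden cells given the visible
# ones -- composition as iterative inpainting rather than a single
# left-to-right pass.
voices, steps, pitches = 4, 32, 46
score = torch.randint(0, pitches, (voices, steps))   # fake pianoroll
mask = torch.rand(voices, steps) < 0.5               # True = hidden cell

visible = score.masked_fill(mask, -1)                # -1 marks "unknown"
# A model of p(hidden | visible) can then be queried on any subset, in any
# order; repeatedly resampling masked cells yields blocked Gibbs sampling.
print(visible[:, :8])
```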

Gating Revisited: Deep Multi-layer RNNs That Can Be Trained

0zgur0/STAR_Network 25 Nov 2019

We propose a new STAckable Recurrent cell (STAR) for recurrent neural networks (RNNs), which has fewer parameters than the widely used LSTM and GRU cells while being more robust against vanishing or exploding gradients.
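
A minimal sketch of such a single-gate cell, following the paper's description; this is an approximation under stated assumptions, and 0zgur0/STAR_Network holds the reference implementation:

```python
import torch
import torch.nn as nn

class STARCell(nn.Module):
    """Single-gate recurrent cell in the spirit of STAR (a sketch, not the
    reference code). One gate k blends the previous state with a candidate
    z computed from the input alone, so the cell carries far fewer weights
    than a three-gate GRU or four-gate LSTM."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.z = nn.Linear(input_size, hidden_size)                # candidate
        self.k = nn.Linear(input_size + hidden_size, hidden_size)  # the one gate

    def forward(self, x, h):
        z = torch.tanh(self.z(x))
        k = torch.sigmoid(self.k(torch.cat([x, h], dim=-1)))
        return torch.tanh((1 - k) * h + k * z)

cell = STARCell(16, 32)
h = torch.zeros(8, 32)
for x in torch.randn(10, 8, 16):   # (time, batch, features)
    h = cell(x, h)
print(h.shape)                     # torch.Size([8, 32])
```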

Rethinking Neural Operations for Diverse Tasks

mkhodak/relax NeurIPS 2021

An important goal of AutoML is to automate away the design of neural networks on new tasks in under-explored domains.

Deep Learning for Music

sarthak15169/Deep-Music 15 Jun 2016

Our goal is to build a generative model from a deep neural network architecture that creates music with both harmony and melody, passable as music composed by humans.