Music Modeling
22 papers with code • 2 benchmarks • 6 datasets
(Image credit: R-Transformer)
Latest papers
Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling
Compared to TCN and WaveNet, our network consistently saves memory and computation time, with training and inference speed-ups of over 4x in the audio generation experiment in particular, while achieving comparable performance on all tasks.
R-Transformer: Recurrent Neural Network Enhanced Transformer
Recurrent Neural Networks have long been the dominating choice for sequence modeling.
Bivariate Beta-LSTM
Long Short-Term Memory (LSTM) infers long-term dependencies through a cell state maintained by input and forget gate structures, each of which models its gate output as a value in [0, 1] through a sigmoid function.
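The excerpt above refers to the standard gated cell-state update. A minimal sketch of one LSTM step, assuming illustrative weight names (`W_f`, `W_i`, `W_c` are not from the paper):

```python
import numpy as np

def sigmoid(x):
    # Squashes any real value into (0, 1) -- the gate range the excerpt describes.
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W_f, W_i, W_c, b_f, b_i, b_c):
    """One cell-state update of a standard LSTM (hypothetical parameter names)."""
    z = np.concatenate([h, x])        # previous hidden state joined with input
    f = sigmoid(W_f @ z + b_f)        # forget gate: how much old cell state to keep
    i = sigmoid(W_i @ z + b_i)        # input gate: how much new candidate to admit
    c_tilde = np.tanh(W_c @ z + b_c)  # candidate cell state in (-1, 1)
    c_new = f * c + i * c_tilde       # gated cell-state update
    return c_new
```

The Bivariate Beta-LSTM work modifies exactly these sigmoid gates, replacing them with Beta-distributed variants; the sketch only shows the baseline they start from.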
Counterpoint by Convolution
Machine learning models of music typically break up the task of composition into a chronological process, composing a piece of music in a single pass from beginning to end.
Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset
Generating musical audio directly with neural networks is notoriously difficult because it requires coherently modeling structure at many different timescales.
Music Transformer
Relative attention is impractical for long sequences such as musical compositions, since its memory complexity for intermediate relative information is quadratic in the sequence length.
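An illustrative sketch of where the quadratic cost comes from: the naive formulation gathers a per-pair relative embedding tensor of shape (L, L, d) before reducing it to logits. The function name and shapes below are assumptions for illustration, not the paper's code (the Music Transformer avoids this tensor via a "skewing" reordering):

```python
import numpy as np

def naive_relative_logits(q, E):
    """Naive relative-position attention logits.

    q: (L, d) query vectors; E: (2L - 1, d) relative position embeddings.
    Materializes an (L, L, d) intermediate, i.e. memory quadratic in L.
    """
    L, d = q.shape
    # R[i, j] = E[j - i + L - 1]: the embedding for relative offset j - i.
    idx = np.arange(L)[None, :] - np.arange(L)[:, None] + L - 1
    R = E[idx]                              # shape (L, L, d) -- the quadratic blow-up
    return np.einsum('ld,lmd->lm', q, R)    # (L, L) relative logits
```

Doubling L quadruples the size of `R`, which is why this formulation breaks down for long musical sequences.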
An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling
Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory.
Diagonal RNNs in Symbolic Music Modeling
In this paper, we propose a new Recurrent Neural Network (RNN) architecture.
Deep Learning for Music
Our goal is to build a generative model from a deep neural network architecture that creates music with both harmony and melody, passable as music composed by humans.
Sequential Neural Models with Stochastic Layers
How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks?