Music Modeling
22 papers with code • 2 benchmarks • 6 datasets
(Image credit: R-Transformer)
Latest papers
Impact of time and note duration tokenizations on deep learning symbolic music modeling
Symbolic music is widely used in various deep learning tasks, including generation, transcription, synthesis, and Music Information Retrieval (MIR).
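The two tokenization families the paper compares can be sketched as follows. This is a minimal illustration only: the token names and the `Note` tuple are invented for the example, not the paper's actual vocabulary.

```python
from collections import namedtuple

# A symbolic note: onset and duration in ticks, plus MIDI pitch (illustrative).
Note = namedtuple("Note", ["start", "duration", "pitch"])

def tokenize_time_shift(notes):
    """MIDI-like encoding: NoteOn/NoteOff events separated by TimeShift tokens,
    so a note's duration is implicit in the gap between its on and off events."""
    events = []
    for n in notes:
        events.append((n.start, f"NoteOn_{n.pitch}"))
        events.append((n.start + n.duration, f"NoteOff_{n.pitch}"))
    events.sort()
    tokens, clock = [], 0
    for t, ev in events:
        if t > clock:
            tokens.append(f"TimeShift_{t - clock}")
            clock = t
        tokens.append(ev)
    return tokens

def tokenize_note_duration(notes):
    """Duration-based encoding: each note carries an explicit Duration token,
    so no NoteOff events are needed."""
    tokens, clock = [], 0
    for n in sorted(notes, key=lambda n: n.start):
        if n.start > clock:
            tokens.append(f"TimeShift_{n.start - clock}")
            clock = n.start
        tokens.append(f"NoteOn_{n.pitch}")
        tokens.append(f"Duration_{n.duration}")
    return tokens

notes = [Note(0, 4, 60), Note(4, 2, 64)]  # C4 then E4
ts = tokenize_time_shift(notes)
nd = tokenize_note_duration(notes)
```

The same two notes yield quite different sequences under the two schemes, which is exactly the design choice whose impact on downstream models the paper studies.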
A Domain-Knowledge-Inspired Music Embedding Space and a Novel Attention Mechanism for Symbolic Music Modeling
These important relative attributes, however, are mostly ignored by existing symbolic music modeling methods, mainly because there has been no musically meaningful embedding space in which both the absolute and relative embeddings of symbolic music tokens can be represented efficiently.
Low-Rank Constraints for Fast Inference in Structured Models
This work demonstrates a simple approach to reduce the computational and memory complexity of a large class of structured models.
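The generic low-rank trick behind this kind of speed-up (not the paper's specific construction for structured models) can be shown in a few lines: replace a dense n-by-n map with a product of two thin rank-r factors and apply them in sequence, so the dense matrix is never materialized.

```python
import numpy as np

n, r = 512, 16  # full dimension vs. low rank, r << n
rng = np.random.default_rng(0)
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
x = rng.standard_normal(n)

y_dense = (U @ V.T) @ x     # forms the n x n matrix: O(n^2) time and memory
y_low_rank = U @ (V.T @ x)  # same result via two thin products: O(n*r)
```

Both orderings compute the same vector; the low-rank ordering just avoids the quadratic cost, which is the source of the inference savings.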
Gates Are Not What You Need in RNNs
In this paper, we propose a new recurrent cell called the Residual Recurrent Unit (RRU), which outperforms traditional cells while employing no gates at all.
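The general idea of a gate-free recurrence with a residual path can be sketched as below. This is a generic illustration, not the paper's exact RRU parameterization: no sigmoid or tanh gates, and the previous hidden state is carried forward additively.

```python
import numpy as np

def gate_free_residual_step(h, x, W_h, W_x, b):
    """One step of a gate-free recurrent cell with a residual connection.

    Illustrative only (the actual RRU differs in detail): the candidate
    update uses a plain ReLU instead of gating, and the previous state h
    is passed through an additive residual path rather than a forget gate.
    """
    candidate = np.maximum(0.0, W_h @ h + W_x @ x + b)  # ReLU, no gates
    return h + candidate                                # residual carry

rng = np.random.default_rng(0)
W_h = rng.standard_normal((4, 4)) * 0.1  # small init keeps the state bounded
W_x = rng.standard_normal((4, 3)) * 0.1
b = np.zeros(4)

h = np.zeros(4)
for x in np.ones((5, 3)):  # run five identical input steps
    h = gate_free_residual_step(h, x, W_h, W_x, b)
```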
Rethinking Neural Operations for Diverse Tasks
An important goal of AutoML is to automate away the design of neural networks on new tasks in under-explored domains.
PopMAG: Pop Music Accompaniment Generation
To improve harmony, in this paper, we propose a novel MUlti-track MIDI representation (MuMIDI), which enables simultaneous multi-track generation in a single sequence and explicitly models the dependency of the notes from different tracks.
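The core interleaving idea of a single-sequence multi-track representation can be sketched as follows. The token names here are invented for the example; the real MuMIDI vocabulary is richer (bar, position, velocity, and other tokens).

```python
def to_single_sequence(tracks):
    """Flatten multiple tracks into one token sequence, ordered by time.

    tracks: dict mapping a track name to a list of (start, pitch) notes.
    A Track_<name> token precedes each note's pitch so a sequence model can
    condition notes on their track; this is a sketch of the interleaving
    idea only, not MuMIDI's actual token set.
    """
    events = []
    for name, notes in tracks.items():
        for start, pitch in notes:
            events.append((start, name, pitch))
    events.sort()  # simultaneous multi-track events merge into one timeline
    tokens = []
    for start, name, pitch in events:
        tokens += [f"Pos_{start}", f"Track_{name}", f"Pitch_{pitch}"]
    return tokens

tracks = {"melody": [(0, 72), (2, 74)], "bass": [(0, 36)]}
tokens = to_single_sequence(tracks)
```

Because all tracks share one timeline, a standard autoregressive model sees cross-track dependencies directly, which is what the multi-track harmony modeling relies on.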
Pop Music Transformer: Beat-based Modeling and Generation of Expressive Pop Piano Compositions
In contrast with this general approach, this paper shows that Transformers can do even better for music modeling, when we improve the way a musical score is converted into the data fed to a Transformer model.
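The beat-based conversion the paper advocates (its REMI representation) makes metrical structure explicit in the token stream. A minimal sketch of the Bar/Position idea, with an invented reduced vocabulary (the real REMI also carries tempo, chord, and duration tokens):

```python
def remi_like_tokens(notes, positions_per_bar=4):
    """Convert (position, pitch) notes into Bar/Position/Pitch tokens.

    A Bar token opens each bar and a Position token marks where the note
    falls inside it, so the model sees the metrical grid directly instead
    of raw time shifts. Sketch only; not the full REMI vocabulary.
    """
    tokens, current_bar = [], -1
    for pos, pitch in sorted(notes):
        bar, beat = divmod(pos, positions_per_bar)
        while current_bar < bar:  # emit a Bar token for each new bar
            tokens.append("Bar")
            current_bar += 1
        tokens.append(f"Position_{beat}")
        tokens.append(f"Pitch_{pitch}")
    return tokens

tokens = remi_like_tokens([(0, 60), (2, 64), (4, 67)])
```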
Learning Style-Aware Symbolic Music Representations by Adversarial Autoencoders
Throughout the paper, we show how Gaussian mixtures that take music metadata into account can serve as an effective prior for the autoencoder latent space, introducing the first Music Adversarial Autoencoder (MusAE).
Improving Polyphonic Music Models with Feature-Rich Encoding
We show that training a neural network to predict a seemingly more complex sequence, with extra features included in the series being modelled, can improve overall model performance significantly.
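One way to read "feature-rich encoding" is that each step of the modeled sequence is augmented with extra derived features before training. The features below (melodic interval, beat position) are illustrative choices, not the paper's exact encoding.

```python
def enrich_sequence(pitches, positions_per_bar=4):
    """Augment a bare pitch sequence with derived per-step features.

    Each step becomes (pitch, interval_from_previous, beat_in_bar), and the
    model is trained to predict these richer tuples rather than pitches
    alone. Feature choice here is illustrative only.
    """
    enriched, prev = [], None
    for step, pitch in enumerate(pitches):
        interval = 0 if prev is None else pitch - prev
        enriched.append((pitch, interval, step % positions_per_bar))
        prev = pitch
    return enriched

enriched = enrich_sequence([60, 64, 67])  # C4, E4, G4
```

The seemingly harder target sequence gives the network redundant views of the same music, which is the mechanism the paper credits for the performance gain.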
Gating Revisited: Deep Multi-layer RNNs That Can Be Trained
We propose a new STAckable Recurrent cell (STAR) for recurrent neural networks (RNNs), which has fewer parameters than the widely used LSTM and GRU cells while being more robust against vanishing or exploding gradients.