Music Generation
134 papers with code • 0 benchmarks • 25 datasets
Music Generation is the task of generating music or music-like sounds from a model or algorithm. The goal is to produce a sequence of notes or sound events that are similar to existing music in some way, such as having the same style, genre, or mood.
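As a minimal illustration of the task (a generic sketch, not the method of any paper listed below), a first-order Markov chain fitted to an existing note sequence can generate new sequences in a loosely similar style:

```python
import random
from collections import defaultdict

def fit_markov(notes):
    """Count note-to-note transitions in a training melody (first-order Markov chain)."""
    transitions = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new note sequence by walking the transition table."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = transitions.get(out[-1])
        if not nxt:  # dead end: fall back to the start note
            nxt = [start]
        out.append(rng.choice(nxt))
    return out

# MIDI pitch numbers for a short C-major melody (illustrative data only)
melody = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72]
table = fit_markov(melody)
print(generate(table, start=60, length=8, seed=1))
```

The generated sequence only ever uses pitches and transitions observed in the training melody, which is the simplest sense in which output can be "similar to existing music"; the deep generative models below learn far richer structure.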
Benchmarks
These leaderboards are used to track progress in Music Generation.
Libraries
Use these libraries to find Music Generation models and implementations.
Datasets
Most implemented papers
Style Imitation and Chord Invention in Polyphonic Music with Exponential Families
Modeling polyphonic music is a particularly challenging task because of the intricate interplay between melody and harmony.
Objective-Reinforced Generative Adversarial Networks (ORGAN) for Sequence Generation Models
In unsupervised data generation tasks, besides the generation of a sample based on previous observations, one would often like to give hints to the model in order to bias the generation towards desirable metrics.
Deep Learning Techniques for Music Generation -- A Survey
Examples of encoding strategies are: scalar, one-hot, or many-hot.
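The encodings named above can be sketched as follows (a minimal sketch assuming standard MIDI pitch numbers; the helper names are hypothetical, not from the survey):

```python
def one_hot(pitch, size=128):
    """One-hot: a vector with a single active index for one MIDI pitch."""
    v = [0] * size
    v[pitch] = 1
    return v

def many_hot(pitches, size=128):
    """Many-hot: several active indices, e.g. a chord sounding at one time step."""
    v = [0] * size
    for p in pitches:
        v[p] = 1
    return v

# A scalar encoding is just the raw value itself, e.g. 60 for middle C.
print(one_hot(60).count(1))             # one active bit
print(many_hot([60, 64, 67]).count(1))  # three active bits: a C-major triad
```

One-hot encodings suit monophonic melodies (one pitch per step), while many-hot encodings are the natural fit for the polyphonic settings several of the papers below address.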
Learning to Fuse Music Genres with Generative Adversarial Dual Learning
FusionGAN is a novel genre fusion framework for music generation that integrates the strengths of generative adversarial networks and dual learning.
Semi-Recurrent CNN-based VAE-GAN for Sequential Data Generation
A semi-recurrent hybrid VAE-GAN model for generating sequential data is introduced.
SING: Symbol-to-Instrument Neural Generator
On the generalization task of synthesizing notes for pitch-instrument pairs not seen during training, SING produces audio with significantly better perceptual quality than a state-of-the-art WaveNet-based autoencoder, as measured by Mean Opinion Score (MOS), and is about 32 times faster to train and 2,500 times faster at inference.
The Effect of Explicit Structure Encoding of Deep Neural Networks for Symbolic Music Generation
With recent breakthroughs in artificial neural networks, deep generative models have become one of the leading techniques for computational creativity.
Latent Normalizing Flows for Discrete Sequences
Normalizing flows are a powerful class of generative models for continuous random variables, showing both strong model flexibility and the potential for non-autoregressive generation.
LakhNES: Improving multi-instrumental music generation with cross-domain pre-training
We are interested in the task of generating multi-instrumental music scores.
Neural Shuffle-Exchange Networks -- Sequence Processing in O(n log n) Time
A key requirement in sequence to sequence processing is the modeling of long range dependencies.