Music Generation
129 papers with code • 0 benchmarks • 24 datasets
Music Generation is the task of generating music or music-like sounds from a model or algorithm. The goal is to produce a sequence of notes or sound events that resembles existing music in some way, such as sharing its style, genre, or mood.
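As a minimal illustration of the idea (not any specific paper's method), a first-order Markov chain over MIDI pitches already "generates a sequence of notes similar to existing music": it learns which pitch tends to follow which in a training melody, then samples new sequences from those transitions.

```python
import random
from collections import defaultdict

def train_markov(pitches):
    """Count pitch-to-pitch transitions in a training melody."""
    transitions = defaultdict(list)
    for prev, nxt in zip(pitches, pitches[1:]):
        transitions[prev].append(nxt)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new pitch sequence that mimics the training transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:  # dead end: fall back to the opening pitch
            choices = [start]
        out.append(rng.choice(choices))
    return out

# Train on a short C-major fragment (MIDI pitch numbers) and sample a variation.
melody = [60, 62, 64, 62, 60, 64, 65, 67, 65, 64, 62, 60]
model = train_markov(melody)
print(generate(model, start=60, length=8))
```

The deep generative models surveyed below replace these hand-counted transition tables with learned neural representations, but the task framing is the same: model a distribution over note sequences and sample from it.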
Benchmarks
These leaderboards are used to track progress in Music Generation.
Libraries
Use these libraries to find Music Generation models and implementations.
Datasets
Most implemented papers
MusPy: A Toolkit for Symbolic Music Generation
MusPy provides easy-to-use tools for essential components in a music generation system, including dataset management, data I/O, data preprocessing and model evaluation.
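One of the preprocessing steps a toolkit like this handles is converting note objects into a time-ordered event stream that sequence models can consume. The sketch below is illustrative only and does not use MusPy's actual API; the tuple layout and function name are assumptions for the example.

```python
def notes_to_events(notes):
    """Convert (start, duration, pitch) notes into a time-ordered
    note-on / note-off event stream, a common preprocessing step
    before feeding symbolic music to a sequence model."""
    events = []
    for start, duration, pitch in notes:
        events.append((start, "note_on", pitch))
        events.append((start + duration, "note_off", pitch))
    # Sort by time; at the same tick, note-offs come before note-ons.
    events.sort(key=lambda e: (e[0], e[1] != "note_off"))
    return events

# A two-note chord followed by a single note.
notes = [(0, 2, 60), (0, 2, 64), (2, 1, 67)]
for event in notes_to_events(notes):
    print(event)
```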
Learning Interpretable Representation for Controllable Polyphonic Music Generation
While deep generative models have become the leading methods for algorithmic composition, it remains a challenging problem to control the generation process because the latent variables of most deep-learning models lack good interpretability.
PIANOTREE VAE: Structured Representation Learning for Polyphonic Music
The dominant approach for music representation learning centers on the variational autoencoder (VAE), a family of deep unsupervised models.
Can GAN originate new electronic dance music genres? -- Generating novel rhythm patterns using GAN with Genre Ambiguity Loss
This paper focuses on music generation, especially rhythm patterns of electronic dance music, and discusses whether deep learning can generate novel rhythms: interesting patterns not found in the training dataset.
FIGARO: Generating Symbolic Music with Fine-Grained Artistic Control
Generating music with deep neural networks has been an area of active research in recent years.
Multitrack Music Transformer
Existing approaches for generating multitrack music with transformer models have been limited in the number of instruments and the length of the music segments they support, and have suffered from slow inference.
ComMU: Dataset for Combinatorial Music Generation
Commercial adoption of automatic music composition requires the capability of generating diverse and high-quality music suitable for the desired context (e.g., music for romantic movies, action games, restaurants, etc.).
Byte Pair Encoding for Symbolic Music
When used with deep learning, the symbolic music modality is often coupled with language model architectures.
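The core of byte pair encoding, applied to a symbolic-music token stream, is to repeatedly merge the most frequent adjacent token pair into a new super-token, so that recurring motifs (e.g., a pitch followed by its duration) become single vocabulary entries. A minimal sketch, with a made-up token naming scheme for illustration:

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Find the most common adjacent token pair in the sequence."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get) if pairs else None

def merge_pair(tokens, pair, new_token):
    """Replace every occurrence of `pair` with a single new token."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(new_token)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

def bpe(tokens, num_merges):
    """Learn `num_merges` BPE merges over a symbolic-music token stream."""
    merges = []
    for _ in range(num_merges):
        pair = most_frequent_pair(tokens)
        if pair is None:
            break
        new_token = f"({pair[0]}+{pair[1]})"
        merges.append((pair, new_token))
        tokens = merge_pair(tokens, pair, new_token)
    return tokens, merges

# Toy token stream: a repeated pitch+duration motif.
stream = ["P60", "D4", "P62", "D4", "P60", "D4", "P62", "D4"]
compressed, merges = bpe(stream, num_merges=2)
print(compressed)
print(merges)
```

Shorter sequences of richer tokens are exactly what language-model architectures benefit from, which is the motivation for applying BPE to symbolic music.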
Simple and Controllable Music Generation
We tackle the task of conditional music generation.
JEN-1: Text-Guided Universal Music Generation with Omnidirectional Diffusion Models
Despite the task's significance, prevailing generative models exhibit limitations in music quality, computational efficiency, and generalization.