Music Generation
129 papers with code • 0 benchmarks • 24 datasets
Music Generation is the task of generating music or music-like sounds from a model or algorithm. The goal is to produce a sequence of notes or sound events that are similar to existing music in some way, such as having the same style, genre, or mood.
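As a minimal illustration of generating a note sequence that imitates existing music, here is a sketch of a first-order Markov model over pitch transitions. All names and the toy corpus are assumptions for illustration, not any of the models below:

```python
import random
from collections import defaultdict

def train_markov(melodies):
    """Count pitch-to-pitch transitions across a corpus of melodies."""
    transitions = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new melody by walking the transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:
            break  # dead end: no observed continuation
        melody.append(rng.choice(choices))
    return melody

# Toy corpus: MIDI pitch numbers (60 = middle C).
corpus = [[60, 62, 64, 62, 60], [60, 64, 67, 64, 60]]
table = train_markov(corpus)
print(generate(table, start=60, length=8))
```

The deep models surveyed below replace the transition table with learned neural networks, but the generate-by-sampling loop is conceptually the same.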
Most implemented papers
Convolutional Generative Adversarial Networks with Binary Neurons for Polyphonic Music Generation
Experimental results show that using binary neurons instead of hard thresholding (HT) or Bernoulli sampling (BS) leads to better results on a number of objective measures.
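The binarization strategies compared here (hard thresholding, Bernoulli sampling, and binary neurons) can be sketched on a toy piano-roll output. This is a simplified illustration with made-up probabilities, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A generator's real-valued output for a 4-timestep, 3-pitch piano roll,
# interpreted as note-on probabilities.
probs = np.array([[0.9, 0.2, 0.6],
                  [0.1, 0.8, 0.4],
                  [0.7, 0.3, 0.9],
                  [0.2, 0.6, 0.1]])

# Hard thresholding (HT): deterministic cut at 0.5, applied after training.
ht = (probs > 0.5).astype(int)

# Bernoulli sampling (BS): stochastic draw per entry, applied after training.
bs = rng.binomial(1, probs)

# Binary neurons binarize *inside* the network during training (gradients
# flow through an estimator such as straight-through); at test time the
# forward pass reduces to the same hard threshold.
bn = (probs > 0.5).astype(int)

print(ht)
```

The key difference is where binarization happens: HT and BS are post-processing steps, while binary neurons make the discrete output part of the training objective.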
MMM : Exploring Conditional Multi-Track Music Generation with the Transformer
We propose the Multi-Track Music Machine (MMM), a generative system based on the Transformer architecture that is capable of generating multi-track music.
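Multi-track generation with a Transformer typically serializes each track into one flat token sequence with special delimiters, so a standard autoregressive model can attend across tracks. The token names below are hypothetical, not MMM's actual vocabulary:

```python
def serialize_tracks(tracks):
    """Flatten multiple (instrument, notes) tracks into a single token
    sequence that an autoregressive Transformer can model jointly."""
    tokens = ["<PIECE_START>"]
    for instrument, notes in tracks:
        tokens.append(f"<TRACK_START:{instrument}>")
        for pitch, duration in notes:
            tokens += [f"NOTE_ON:{pitch}", f"DUR:{duration}"]
        tokens.append("<TRACK_END>")
    tokens.append("<PIECE_END>")
    return tokens

song = [("piano", [(60, 4), (64, 4)]),
        ("bass",  [(36, 8)])]
print(serialize_tracks(song))
```

Placing each track in its own delimited span is what makes conditional generation possible: one track can be kept fixed while the tokens of another are resampled.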
BigVGAN: A Universal Neural Vocoder with Large-Scale Training
Despite recent progress in generative adversarial network (GAN)-based vocoders, where the model generates raw waveform conditioned on acoustic features, it is challenging to synthesize high-fidelity audio for numerous speakers across various recording environments.
MusicLM: Generating Music From Text
We introduce MusicLM, a model generating high-fidelity music from text descriptions such as "a calming violin melody backed by a distorted guitar riff".
Deep Learning for Music
Our goal is to be able to build a generative model from a deep neural network architecture to try to create music that has both harmony and melody and is passable as music composed by humans.
The NES Music Database: A multi-instrumental dataset with expressive performance attributes
Existing research on music generation focuses on composition, but often ignores the expressive performance characteristics required for plausible renditions of resultant pieces.
Lead Sheet Generation and Arrangement by Conditional Generative Adversarial Network
A new recurrent convolutional generative model for the task is proposed, along with three new symbolic-domain harmonic features to facilitate learning from unpaired lead sheets and MIDIs.
Improving Polyphonic Music Models with Feature-Rich Encoding
We show that training a neural network to predict a seemingly more complex sequence, with extra features included in the series being modelled, can improve overall model performance significantly.
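The idea of a feature-rich encoding can be sketched: alongside each pitch, the sequence being modelled also carries derived features such as position in the bar and interval from the previous note. This is a simplified illustration; the paper's exact feature set may differ:

```python
def enrich(pitches, beats_per_bar=4):
    """Augment a bare pitch sequence with extra features the model must
    also predict: position within the bar and melodic interval."""
    events = []
    prev = None
    for i, pitch in enumerate(pitches):
        events.append({
            "pitch": pitch,
            "beat": i % beats_per_bar,
            "interval": 0 if prev is None else pitch - prev,
        })
        prev = pitch
    return events

print(enrich([60, 62, 64, 60]))
```

Although the enriched sequence looks harder to predict, the redundant features give the model explicit metrical and melodic context it would otherwise have to infer.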
Attentional networks for music generation
Realistic music generation has long remained a challenging problem, as the generated output may lack structure or coherence.
Towards democratizing music production with AI: Design of Variational Autoencoder-based Rhythm Generator as a DAW plugin
There has been significant progress in music generation techniques based on deep learning.
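The core interaction a VAE-based rhythm plugin exposes can be sketched: sample a latent vector from the prior and decode it into a drum pattern. This is a toy stand-in with one random linear layer, not the plugin's trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

def decode_rhythm(z, weights, bias):
    """Toy 'decoder': map a latent vector to 16-step onset probabilities
    via a linear layer and a sigmoid, then sample a binary drum pattern."""
    logits = z @ weights + bias
    probs = 1.0 / (1.0 + np.exp(-logits))
    return (rng.random(probs.shape) < probs).astype(int)

latent_dim, steps = 2, 16
weights = rng.normal(size=(latent_dim, steps))  # stands in for trained weights
bias = np.zeros(steps)

z = rng.normal(size=latent_dim)  # sample from the VAE prior N(0, I)
pattern = decode_rhythm(z, weights, bias)
print(pattern)
```

In a DAW plugin, the latent vector is what the user manipulates: nearby points in latent space decode to similar rhythms, which makes exploration intuitive for non-experts.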