Music Generation
136 papers with code • 0 benchmarks • 26 datasets
Music Generation is the task of generating music or music-like sounds from a model or algorithm. The goal is to produce a sequence of notes or sound events that are similar to existing music in some way, such as having the same style, genre, or mood.
Benchmarks
These leaderboards are used to track progress in Music Generation.
Libraries
Use these libraries to find Music Generation models and implementations.
Datasets
Most implemented papers
LakhNES: Improving multi-instrumental music generation with cross-domain pre-training
We are interested in the task of generating multi-instrumental music scores.
Neural Shuffle-Exchange Networks -- Sequence Processing in O(n log n) Time
A key requirement in sequence to sequence processing is the modeling of long range dependencies.
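Shuffle-Exchange networks connect distant sequence positions by alternating learned switch layers with fixed "perfect shuffle" permutations, so any two of n positions can interact after O(log n) layers. Below is a minimal sketch of just the permutation step; the function name is illustrative, and the real architecture interleaves this with learned Switch units that this sketch omits.

```python
def perfect_shuffle(seq):
    """Riffle-shuffle permutation used in shuffle-exchange networks:
    interleave the first and second halves of an even-length sequence.
    Element i of the first half lands at position 2*i, element i of the
    second half at position 2*i + 1."""
    n = len(seq)
    assert n % 2 == 0, "shuffle is defined for even-length sequences"
    half = n // 2
    out = []
    for i in range(half):
        out.append(seq[i])          # from the first half
        out.append(seq[half + i])   # from the second half
    return out
```

For a sequence of length 2^k, applying the shuffle k times returns the original order, which is why a stack of log2(n) shuffle layers suffices to route information between any pair of positions.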
LSTM Based Music Generation System
A model is designed to execute this algorithm, with the data represented in the Musical Instrument Digital Interface (MIDI) file format for easier access and processing.
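Systems like this typically turn MIDI data into integer token sequences before feeding them to an LSTM. The sketch below shows one common, simple encoding, using the MIDI pitch range (0–127) directly as the vocabulary plus an end-of-sequence marker; the function names and the choice of 128 as the end token are illustrative assumptions, not taken from the paper.

```python
# Illustrative end-of-sequence token, placed just above the MIDI pitch range.
END_TOKEN = 128

def encode_melody(pitches):
    """Map a list of MIDI pitch numbers (0-127) to model tokens,
    appending an end-of-sequence marker."""
    assert all(0 <= p <= 127 for p in pitches), "pitches must be valid MIDI notes"
    return list(pitches) + [END_TOKEN]

def decode_melody(tokens):
    """Inverse of encode_melody: drop the end marker, keep the pitches."""
    return [t for t in tokens if t != END_TOKEN]
```

Real systems usually extend the vocabulary with duration, velocity, and time-shift tokens; this sketch keeps only pitch to show the round trip.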
Improving Automatic Jazz Melody Generation by Transfer Learning Techniques
In this paper, we tackle the problem of transfer learning for automatic Jazz generation.
MIDI-Sandwich2: RNN-based Hierarchical Multi-modal Fusion Generation VAE networks for multi-track symbolic music generation
To address this problem, the paper proposes an RNN-based Hierarchical Multi-modal Fusion Generation Variational Autoencoder (VAE) network, MIDI-Sandwich2, for multi-track symbolic music generation.
Midi Miner -- A Python library for tonal tension and track classification
We present a Python library, called Midi Miner, that can calculate tonal tension and classify different tracks.
Continuous Melody Generation via Disentangled Short-Term Representations and Structural Conditions
Automatic music generation is an interdisciplinary research topic that combines computational creativity and semantic analysis of music to create automatic machine improvisations.
Emotional Video to Audio Transformation Using Deep Recurrent Neural Networks and a Neuro-Fuzzy System
In this study, we propose a novel hybrid deep neural network that uses an Adaptive Neuro-Fuzzy Inference System to predict a video's emotion from its visual features and a deep Long Short-Term Memory Recurrent Neural Network to generate corresponding audio signals with a similar emotional tone.
Vector Quantized Contrastive Predictive Coding for Template-based Music Generation
In this work, we propose a flexible method for generating variations of discrete sequences in which tokens can be grouped into basic units, like sentences in a text or bars in music.
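Grouping tokens into basic units, such as bars in music or sentences in text, is the structural step this approach builds on. A minimal sketch of that grouping, assuming fixed-length units and an illustrative function name (the paper's actual unit boundaries come from the musical bar structure, not a fixed length):

```python
def group_into_units(tokens, unit_len):
    """Split a flat token sequence into consecutive fixed-length basic
    units (e.g. bars of music), so later stages can operate on whole
    units rather than individual tokens."""
    assert unit_len > 0 and len(tokens) % unit_len == 0, \
        "sequence length must be a multiple of the unit length"
    return [tokens[i:i + unit_len] for i in range(0, len(tokens), unit_len)]
```

Once sequences are unit-structured, generating a variation amounts to resampling or swapping whole units while keeping the template's unit-level layout.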