Music Generation

136 papers with code • 0 benchmarks • 26 datasets

Music Generation is the task of generating music or music-like sound with a model or algorithm. The goal is to produce a sequence of notes or sound events that resembles existing music in some respect, such as style, genre, or mood.
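Since the listed approaches are all data-driven sequence models, a toy illustration helps fix the idea: learn which note tends to follow which in an existing melody, then sample a new sequence with the same local statistics. The snippet below is a minimal sketch of that idea, assuming a hand-written pitch corpus and a first-order Markov chain; it is not taken from any of the papers listed here.

```python
import random

# Toy training corpus: a melody as MIDI pitch numbers (60 = middle C).
melody = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]

# Build first-order transition counts: which pitch tends to follow which.
transitions = {}
for prev, nxt in zip(melody, melody[1:]):
    transitions.setdefault(prev, []).append(nxt)

def generate(start_pitch=60, length=16, seed=0):
    """Sample a new pitch sequence that imitates the corpus statistics."""
    rng = random.Random(seed)
    out = [start_pitch]
    for _ in range(length - 1):
        # Fall back to the start pitch if the last pitch was never seen.
        candidates = transitions.get(out[-1], [start_pitch])
        out.append(rng.choice(candidates))
    return out

print(generate())  # e.g. [60, 62, 64, 65, 67, 64, ...]
```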

Most implemented papers

LakhNES: Improving multi-instrumental music generation with cross-domain pre-training

chrisdonahue/LakhNES 10 Jul 2019

We are interested in the task of generating multi-instrumental music scores.

Neural Shuffle-Exchange Networks -- Sequence Processing in O(n log n) Time

LUMII-Syslab/shuffle-exchange 18 Jul 2019

A key requirement in sequence to sequence processing is the modeling of long range dependencies.
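At its core, a Shuffle-Exchange layer alternates cheap pairwise "switch" operations with a fixed perfect-shuffle (riffle) permutation of positions, so roughly log2(n) such layers suffice to route information between any pair of sequence positions. The sketch below only illustrates that permutation pattern in plain Python; the function name and the neural switch units it would sit between are assumptions, not the paper's implementation.

```python
def perfect_shuffle(seq):
    """Riffle the first and second halves so positions i and i + n/2 become adjacent.

    Stacking about log2(n) such layers, with pairwise 'switch' units in between,
    is the routing pattern Shuffle-Exchange networks borrow from classic
    interconnection networks (Benes/Omega).
    """
    n = len(seq)
    assert n % 2 == 0, "length must be even (typically a power of two)"
    half = n // 2
    out = []
    for i in range(half):
        out.append(seq[i])
        out.append(seq[i + half])
    return out

x = list(range(8))         # [0, 1, 2, 3, 4, 5, 6, 7]
print(perfect_shuffle(x))  # [0, 4, 1, 5, 2, 6, 3, 7]
```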

LSTM Based Music Generation System

sanidhyamangal/music_research 2 Aug 2019

A model is designed to execute this algorithm, with the data represented in the musical instrument digital interface (MIDI) file format for easier access and better understanding.
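As a concrete illustration of the MIDI-based data representation, the sketch below reads a MIDI file into the kind of pitch sequence an LSTM could be trained on. It assumes the third-party pretty_midi package and a placeholder file name; the paper's own preprocessing pipeline may differ.

```python
# Sketch of turning a MIDI file into a note-event sequence for sequence modeling.
# Requires `pip install pretty_midi`; the file path is a hypothetical placeholder.
import pretty_midi

midi = pretty_midi.PrettyMIDI("example.mid")

events = []
for instrument in midi.instruments:
    if instrument.is_drum:
        continue  # skip percussion tracks
    for note in instrument.notes:
        # Each note carries pitch, velocity, and start/end times in seconds.
        events.append((note.start, note.pitch, note.end - note.start))

events.sort()  # chronological order across all instruments
pitch_sequence = [pitch for _, pitch, _ in events]
print(pitch_sequence[:20])  # the kind of sequence fed to an LSTM
```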

Improving Automatic Jazz Melody Generation by Transfer Learning Techniques

annahung31/jazz_melody_generation 26 Aug 2019

In this paper, we tackle the problem of transfer learning for automatic jazz generation.

MIDI-Sandwich2: RNN-based Hierarchical Multi-modal Fusion Generation VAE networks for multi-track symbolic music generation

LiangHsia/MIDI-S2 8 Sep 2019

In view of the above problem, this paper proposes an RNN-based Hierarchical Multi-modal Fusion Generation Variational Autoencoder (VAE) network, MIDI-Sandwich2, for multi-track symbolic music generation.

Midi Miner -- A Python library for tonal tension and track classification

ruiguo-bio/midi-miner 3 Oct 2019

We present a Python library, called Midi Miner, that can calculate tonal tension and classify different tracks.

Continuous Melody Generation via Disentangled Short-Term Representations and Structural Conditions

RetroCirce/Auto-mask-Music-Generative-Model-via-EC2-VAE-Disentanglement 5 Feb 2020

Automatic music generation is an interdisciplinary research topic that combines computational creativity and semantic analysis of music to create automatic machine improvisations.

Emotional Video to Audio Transformation Using Deep Recurrent Neural Networks and a Neuro-Fuzzy System

gcunhase/Emotional-Video-to-Audio-with-ANFIS-DeepRNN 5 Apr 2020

In this study, we propose a novel hybrid deep neural network that uses an Adaptive Neuro-Fuzzy Inference System to predict a video's emotion from its visual features and a deep Long Short-Term Memory Recurrent Neural Network to generate its corresponding audio signals with similar emotional inkling.

Vector Quantized Contrastive Predictive Coding for Template-based Music Generation

SonyCSLParis/vqcpc-bach 21 Apr 2020

In this work, we propose a flexible method for generating variations of discrete sequences in which tokens can be grouped into basic units, like sentences in a text or bars in music.
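To make the notion of "basic units" concrete, the sketch below groups a flat stream of event tokens into bar-level chunks that a model could then vary independently. The token vocabulary here ("BAR", "NOTE_*") is an illustrative assumption, not the encoding used in vqcpc-bach.

```python
# Minimal sketch: split a flat event-token stream into bar-level groups,
# the kind of "basic unit" over which variations are generated.
tokens = ["BAR", "NOTE_60", "NOTE_64", "NOTE_67",
          "BAR", "NOTE_65", "NOTE_69",
          "BAR", "NOTE_67", "NOTE_71", "NOTE_74"]

bars, current = [], []
for tok in tokens:
    if tok == "BAR" and current:  # a new bar starts: close the previous one
        bars.append(current)
        current = []
    current.append(tok)
if current:
    bars.append(current)

print(len(bars))  # 3 bar-level units that a model could vary independently
```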