Music Generation

131 papers with code • 0 benchmarks • 24 datasets

Music Generation is the task of generating music or music-like sounds from a model or algorithm. The goal is to produce a sequence of notes or sound events that are similar to existing music in some way, such as having the same style, genre, or mood.

Latest papers with no code

Structure-informed Positional Encoding for Music Generation

no code yet • 20 Feb 2024

Music generated by deep learning methods often suffers from a lack of coherence and long-term organization.

An Order-Complexity Aesthetic Assessment Model for Aesthetic-aware Music Recommendation

no code yet • 13 Feb 2024

To improve the quality of AI music generation and to further guide computer music production, synthesis, recommendation, and related tasks, we use Birkhoff's aesthetic measure to design an aesthetic model that objectively measures the aesthetic beauty of music and forms a recommendation list according to the aesthetic feeling of the music.

MusicMagus: Zero-Shot Text-to-Music Editing via Diffusion Models

no code yet • 9 Feb 2024

This paper introduces a novel approach to the editing of music generated by such models, enabling the modification of specific attributes, such as genre, mood and instrument, while maintaining other aspects unchanged.

MusicRL: Aligning Music Generation to Human Preferences

no code yet • 6 Feb 2024

MusicRL is a pretrained autoregressive MusicLM (Agostinelli et al., 2023) model of discrete audio tokens finetuned with reinforcement learning to maximise sequence-level rewards.
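The finetuning recipe described here — sampling sequences from an autoregressive token model and updating it to maximise a sequence-level reward — can be illustrated with a minimal REINFORCE sketch. This is a toy stand-in, not MusicRL's actual implementation: the "policy" is a single logit vector rather than a conditioned MusicLM, and the reward function is an invented placeholder for a human-preference score.

```python
import math
import random

random.seed(0)

VOCAB = 4      # toy "audio token" vocabulary
SEQ_LEN = 6    # tokens per generated sequence
LR = 0.5

# Toy autoregressive "policy": one logit vector shared across steps.
# (A real model conditions on the prefix; this shows only the RL mechanics.)
logits = [0.0] * VOCAB

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def reward(seq):
    # Placeholder for a sequence-level human-preference reward:
    # here we simply prefer sequences dominated by token 2.
    return seq.count(2) / len(seq)

# REINFORCE with a moving-average baseline: raise the logits of tokens
# that appeared in sequences whose reward beat the baseline.
baseline = 0.0
for step in range(300):
    probs = softmax(logits)
    seq = random.choices(range(VOCAB), weights=probs, k=SEQ_LEN)
    r = reward(seq)
    baseline = 0.9 * baseline + 0.1 * r
    advantage = r - baseline
    for tok in seq:
        for k in range(VOCAB):
            # grad of log-prob of the sampled token w.r.t. logit k
            grad = (1.0 if k == tok else 0.0) - probs[k]
            logits[k] += LR * advantage * grad / SEQ_LEN

probs = softmax(logits)
print(probs)  # the reward-preferred token ends up with the largest probability
```

The moving-average baseline reduces gradient variance; MusicRL applies the same sequence-level idea with rewards learned from human feedback rather than a hand-written function.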

DITTO: Diffusion Inference-Time T-Optimization for Music Generation

no code yet • 22 Jan 2024

We propose Diffusion Inference-Time T-Optimization (DITTO), a general-purpose framework for controlling pre-trained text-to-music diffusion models at inference time by optimizing the initial noise latents.
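The core idea — keep the sampler frozen and gradient-descend only the initial noise so the output satisfies a control target — can be sketched with a toy linear "sampler". Everything here is illustrative and assumed, not DITTO's actual pipeline: the fixed linear map stands in for the diffusion chain, and the mean-output "feature" stands in for a real control objective.

```python
import random

random.seed(1)

DIM = 8
STEPS = 300
LR = 0.3

# Frozen "sampler": a fixed random linear map standing in for the
# pre-trained diffusion chain. It is never updated.
W = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(DIM)]

def sample(z):
    return [sum(W[i][j] * z[j] for j in range(DIM)) for i in range(DIM)]

def feature(x):
    # Placeholder control target: mean output level.
    return sum(x) / len(x)

target = 0.7
z = [random.gauss(0, 1) for _ in range(DIM)]  # initial noise latent

# Inference-time optimization: descend the squared feature error
# with respect to z only; the sampler's weights stay fixed.
for _ in range(STEPS):
    err = feature(sample(z)) - target
    for j in range(DIM):
        # d feature / d z_j = (1/DIM) * sum_i W[i][j]
        grad_j = err * sum(W[i][j] for i in range(DIM)) / DIM
        z[j] -= LR * grad_j

print(feature(sample(z)))  # converges toward the control target
```

Because only the latent is optimized, no model weights change; in DITTO the gradient is instead backpropagated through the (much deeper) diffusion sampling procedure.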

Multi-view MidiVAE: Fusing Track- and Bar-view Representations for Long Multi-track Symbolic Music Generation

no code yet • 15 Jan 2024

Variational Autoencoders (VAEs) constitute a crucial component of neural symbolic music generation, among which some works have yielded outstanding results and attracted considerable attention.

MCMChaos: Improvising Rap Music with MCMC Methods and Chaos Theory

no code yet • 15 Jan 2024

In each version, values simulated from the respective mathematical model alter the rate of speech, the volume, and (in the multiple-voice case) the voice of the text-to-speech engine on a line-by-line basis.

StemGen: A music generation model that listens

no code yet • 14 Dec 2023

End-to-end generation of musical audio using deep learning techniques has seen an explosion of activity recently.

Computational Copyright: Towards A Royalty Model for Music Generative AI

no code yet • 11 Dec 2023

Our methodology involves a detailed analysis of existing royalty models in platforms like Spotify and YouTube, and adapting these to the unique context of AI-generated music.

Automatic Time Signature Determination for New Scores Using Lyrics for Latent Rhythmic Structure

no code yet • 27 Nov 2023

In this paper, we propose a novel approach that only uses lyrics as input to automatically generate a fitting time signature for lyrical songs and uncover the latent rhythmic structure utilizing explainable machine learning models.