Text-to-Music Generation

13 papers with code • 2 benchmarks • 3 datasets

Text-to-music generation is the task of synthesizing musical audio from natural-language descriptions, such as prompts specifying genre, mood, or instrumentation.

Most implemented papers

MusicLM: Generating Music From Text

facebookresearch/audiocraft 26 Jan 2023

We introduce MusicLM, a model generating high-fidelity music from text descriptions such as "a calming violin melody backed by a distorted guitar riff".

JEN-1: Text-Guided Universal Music Generation with Omnidirectional Diffusion Models

0417keito/JEN-1-pytorch 9 Aug 2023

Despite the task's significance, prevailing generative models exhibit limitations in music quality, computational efficiency, and generalization.

Music Understanding LLaMA: Advancing Text-to-Music Generation with Question Answering and Captioning

crypto-code/mu-llama 22 Aug 2023

To fill this gap, we present a methodology for generating question-answer pairs from existing audio captioning datasets and introduce the MusicQA Dataset designed for answering open-ended music-related questions.

Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task

sander-wood/text-to-music 21 Nov 2022

Benefiting from large-scale datasets and pre-trained models, the field of generative models has recently gained significant momentum.

Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion

archinetai/audio-diffusion-pytorch 27 Jan 2023

Recent years have seen the rapid development of large generative models for text; however, much less research has explored the connection between text and another "language" of communication -- music.

MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies

retrocirce/musicldm 3 Aug 2023

Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation.
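The title's "beat-synchronous mixup" suggests mixing training clips only after aligning their beat grids, so the augmented audio stays rhythmically coherent. A toy sketch of that idea (this is an illustration of the general concept, not MusicLDM's actual implementation; the alignment here is a simple time shift):

```python
def beat_synchronous_mixup(x1, x2, beats1, beats2, lam=0.5):
    """Mix two audio clips after aligning their beat grids (toy sketch).

    x1, x2: lists of float samples; beats1, beats2: sample indices of
    detected beats. We time-shift x2 so its first beat lines up with
    x1's first beat, then take a convex combination (classic mixup).
    """
    shift = beats1[0] - beats2[0]
    # Shift x2 by `shift` samples (zero-padding), keeping x1's length.
    shifted = [0.0] * len(x1)
    for i, s in enumerate(x2):
        j = i + shift
        if 0 <= j < len(shifted):
            shifted[j] = s
    # Mixup: lam * x1 + (1 - lam) * aligned x2.
    return [lam * a + (1.0 - lam) * b for a, b in zip(x1, shifted)]
```

In practice the papers in this family operate on spectrogram or latent representations and match tempo before mixing; the shift-and-blend above only conveys the shape of the augmentation.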

AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining

haoheliu/AudioLDM2 10 Aug 2023

Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model.

Investigating Personalization Methods in Text to Music Generation

zelaki/DreamSound 20 Sep 2023

In this work, we investigate the personalization of text-to-music diffusion models in a few-shot setting.
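Few-shot personalization methods of this kind (e.g., textual-inversion-style approaches) typically freeze the generative model and optimize only a new pseudo-token embedding against a handful of reference examples. A minimal sketch, assuming a hypothetical frozen generator `g` (here just the identity, standing in for a text-to-music diffusion model) and a hand-derived MSE gradient:

```python
def personalize(references, dim=4, steps=200, lr=0.1):
    """Learn an embedding for a new pseudo-token, e.g. "<my-style>".

    Toy sketch: only the embedding v is trainable; the generator is
    frozen. `references` are few-shot target feature vectors.
    """
    def g(v):
        # Frozen "generator": identity, a stand-in for the real model.
        return v

    v = [0.0] * dim  # the new token embedding, the only trainable part
    for _ in range(steps):
        # Mean-squared error against the references and its gradient
        # with respect to v (derived by hand for the identity g).
        grad = [0.0] * dim
        for ref in references:
            out = g(v)
            for i in range(dim):
                grad[i] += 2.0 * (out[i] - ref[i]) / len(references)
        v = [vi - lr * gi for vi, gi in zip(v, grad)]
    return v
```

With the identity generator, gradient descent on the mean-squared error converges to the mean of the reference features; the real setting replaces `g` with a frozen diffusion model and backpropagates through its denoising loss.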

Music ControlNet: A model similar to SD ControlNet that can accurately control music generation

johndpope/MusicControlNet 2023

While the image-domain Uni-ControlNet method already allows generation with any subset of controls, we devise a new strategy to allow creators to input controls that are only partially specified in time.
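One simple way to support controls that are "only partially specified in time" is to accept a per-frame mask and fill the unspecified spans, e.g. by interpolating between the frames the creator did set. A toy sketch of that completion step (an illustration of the stated idea, not the paper's actual strategy):

```python
def complete_control(control, mask):
    """Fill in a partially specified control curve (toy sketch).

    control: list of per-frame control values (e.g. dynamics);
    mask: list of bools, True where the creator specified a value.
    Unspecified spans are linearly interpolated between the nearest
    specified frames; the edges hold the nearest specified value.
    """
    n = len(control)
    known = [i for i in range(n) if mask[i]]
    if not known:
        return list(control)
    out = list(control)
    for i in range(n):
        if mask[i]:
            continue
        prev = max((k for k in known if k < i), default=None)
        nxt = min((k for k in known if k > i), default=None)
        if prev is None:
            out[i] = control[nxt]       # before the first known frame
        elif nxt is None:
            out[i] = control[prev]      # after the last known frame
        else:
            t = (i - prev) / (nxt - prev)
            out[i] = (1 - t) * control[prev] + t * control[nxt]
    return out
```

The completed curve can then condition the generator everywhere, while the model remains free (or is trained) to improvise in the regions the creator left unspecified.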