Music Understanding LLaMA: Advancing Text-to-Music Generation with Question Answering and Captioning

22 Aug 2023 · Shansong Liu, Atin Sakkeer Hussain, Chenshuo Sun, Ying Shan

Text-to-music generation (T2M-Gen) faces a major obstacle due to the scarcity of large-scale publicly available music datasets with natural language captions. To address this, we propose the Music Understanding LLaMA (MU-LLaMA), capable of answering music-related questions and generating captions for music files. Our model extracts music features from audio representations produced by a pretrained MERT model. However, obtaining a suitable dataset for training the MU-LLaMA model remains challenging, as existing publicly accessible audio question answering datasets lack the necessary depth for open-ended music question answering. To fill this gap, we present a methodology for generating question-answer pairs from existing audio captioning datasets and introduce the MusicQA dataset, designed for answering open-ended music-related questions. The experiments demonstrate that the MU-LLaMA model, trained on the proposed MusicQA dataset, achieves outstanding performance in both music question answering and music caption generation across various metrics. It outperforms current state-of-the-art (SOTA) models in both fields, offering a promising advancement for T2M-Gen research.
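As a rough illustration of the feature-extraction step, the sketch below pulls layer-wise music representations from a publicly released MERT checkpoint on Hugging Face. The checkpoint name (m-a-p/MERT-v1-330M) and the pooling strategy are assumptions for illustration; the paper's exact checkpoint and the aggregation feeding the language model may differ.

```python
# Hedged sketch: layer-wise music features from a pretrained MERT model,
# the kind of representation MU-LLaMA builds on. Checkpoint and pooling
# are illustrative assumptions, not the paper's exact configuration.
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, AutoModel

processor = Wav2Vec2FeatureExtractor.from_pretrained(
    "m-a-p/MERT-v1-330M", trust_remote_code=True)
model = AutoModel.from_pretrained(
    "m-a-p/MERT-v1-330M", trust_remote_code=True)

# Load a clip, mix down to mono, and resample to MERT's expected rate.
waveform, sr = torchaudio.load("song.wav")
mono = torchaudio.functional.resample(
    waveform.mean(dim=0), orig_freq=sr, new_freq=processor.sampling_rate)

inputs = processor(mono.numpy(), sampling_rate=processor.sampling_rate,
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Stack every transformer layer and average over time: one vector per
# layer, which a small learned adapter could aggregate for the LLM.
hidden = torch.stack(outputs.hidden_states)      # (layers, 1, frames, dim)
music_features = hidden.mean(dim=2).squeeze(1)   # (layers, dim)
print(music_features.shape)
```

The MusicQA construction can be sketched in the same hedged spirit: prompt an instruction-tuned LLM to turn each existing caption into open-ended question-answer pairs. The model choice and prompt wording below are assumptions, not the paper's exact recipe.

```python
# Hedged sketch: generating a QA pair from an existing music caption.
# The prompt wording and model choice are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="mosaicml/mpt-7b-instruct",
                     trust_remote_code=True)

caption = "A slow acoustic guitar ballad with soft male vocals."
prompt = ("Based on this music caption, write one open-ended question "
          f"about the music and its answer.\nCaption: {caption}\nQ:")
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```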


Datasets


Introduced in the Paper:

MusicQA

Used in the Paper:

MagnaTagATune
MusicCaps

Results from the Paper


Task                       Dataset   Model      Metric       Value   Global Rank
Music Question Answering   MusicQA   MU-LLaMA   BLEU         0.306   #1
                                                METEOR       0.385   #1
                                                ROUGE        0.466   #1
                                                BERT Score   0.901   #1
