Zero-Shot Audio Captioning via Audibility Guidance

7 Sep 2023  ·  Tal Shaharabany, Ariel Shaulov, Lior Wolf

The task of audio captioning is similar in essence to tasks such as image and video captioning. However, it has received much less attention. We propose three desiderata for captioning audio -- (i) fluency of the generated text, (ii) faithfulness of the generated text to the input audio, and the somewhat related (iii) audibility, which is the quality of being able to be perceived based only on audio. Our method is a zero-shot method, i.e., we do not learn to perform captioning. Instead, captioning occurs as an inference process that involves three networks that correspond to the three desired qualities: (i) a large language model, in our case, for reasons of convenience, GPT-2, (ii) a model that provides a matching score between an audio file and a text, for which we use a multimodal matching network called ImageBind, and (iii) a text classifier, trained using a dataset we collected automatically by instructing GPT-4 with prompts designed to direct the generation of both audible and inaudible sentences. We present our results on the AudioCaps dataset, demonstrating that audibility guidance significantly enhances performance compared to the baseline, which lacks this objective.
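The abstract describes an inference-time procedure that combines three scorers rather than a trained captioner. The sketch below is a minimal, illustrative rendering of that general idea: greedy decoding with GPT-2 in which each candidate token is re-ranked by an audio-text matching score and an audibility score. It is not the authors' released implementation; the functions audio_text_score and audibility_score are hypothetical placeholders for an ImageBind-style matcher and the audibility classifier trained on GPT-4-generated sentences, and the prompt, top-k width, and weights are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): guided zero-shot decoding that
# re-ranks GPT-2 candidate tokens with an audio-text matching score and an
# audibility score. The two scorer functions are placeholders, not real APIs.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def audio_text_score(audio_embedding: torch.Tensor, text: str) -> float:
    """Placeholder for an ImageBind-style audio-text similarity score."""
    return 0.0  # e.g., cosine similarity of audio and text embeddings

def audibility_score(text: str) -> float:
    """Placeholder for the audibility classifier's probability."""
    return 0.0  # e.g., probability that the sentence describes something audible

@torch.no_grad()
def caption(audio_embedding, prompt="A sound of", max_len=20,
            top_k=20, w_match=1.0, w_aud=0.5):
    ids = tokenizer.encode(prompt, return_tensors="pt")
    for _ in range(max_len):
        # Language-model log-probabilities for the next token (fluency term).
        logits = lm(ids).logits[0, -1]
        log_probs = torch.log_softmax(logits, dim=-1)
        cand = torch.topk(log_probs, top_k)

        # Re-rank the top-k candidates with the two guidance terms.
        best_id, best_score = None, float("-inf")
        for lp, tok in zip(cand.values, cand.indices):
            text = tokenizer.decode(torch.cat([ids[0], tok.view(1)]))
            score = (lp.item()
                     + w_match * audio_text_score(audio_embedding, text)
                     + w_aud * audibility_score(text))
            if score > best_score:
                best_id, best_score = tok.view(1, 1), score

        ids = torch.cat([ids, best_id], dim=1)
        if best_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```

A real implementation would replace the placeholders with actual audio/text embeddings and the trained audibility classifier, and could use beam search or gradient-based guidance instead of simple top-k re-ranking.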


Datasets

AudioCaps
Task: Zero-shot Audio Captioning
Dataset: AudioCaps
Model: Shaharabany et al.

Metric    Value   Global Rank
BLEU-4    9.8     #1
METEOR    8.6     #2
ROUGE-L   8.2     #3
CIDEr     9.2     #2

Methods

GPT-2, ImageBind, GPT-4 (used to generate training data for the audibility classifier)