AudioCaps: Generating Captions for Audios in The Wild

We explore the problem of Audio Captioning: generating natural language descriptions for any kind of audio in the wild, which has been surprisingly unexplored in previous research. We contribute a large-scale dataset of 46K audio clips paired with human-written captions, collected via crowdsourcing on the AudioSet dataset. Our thorough empirical studies not only show that the collected captions are indeed faithful to the audio inputs but also identify which forms of audio representation and which captioning models are effective for audio captioning. From extensive experiments, we also propose two novel components that help improve audio captioning performance: the top-down multi-scale encoder and aligned semantic attention.
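
The abstract names two architectural components (a top-down multi-scale encoder and aligned semantic attention) without giving details here. The sketch below is a minimal, hypothetical illustration of the general idea of a multi-scale audio encoder feeding an attention-based caption decoder; it is not the paper's TopDown-AlignedAtt model, and all layer sizes, module names, and pooling choices are assumptions.

```python
# Hypothetical sketch of an attention-based audio captioning model (PyTorch).
# NOT the paper's exact architecture; dimensions and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAudioEncoder(nn.Module):
    """Encodes audio features at two temporal scales (assumed design)."""
    def __init__(self, feat_dim=128, hidden_dim=256):
        super().__init__()
        self.fine = nn.GRU(feat_dim, hidden_dim, batch_first=True)      # frame-level
        self.coarse = nn.GRU(hidden_dim, hidden_dim, batch_first=True)  # pooled scale

    def forward(self, audio_feats):                  # (B, T, feat_dim)
        fine_out, _ = self.fine(audio_feats)         # (B, T, H)
        pooled = F.avg_pool1d(fine_out.transpose(1, 2), kernel_size=4).transpose(1, 2)
        coarse_out, _ = self.coarse(pooled)          # (B, T/4, H)
        return fine_out, coarse_out

class AttentionCaptionDecoder(nn.Module):
    """Single-layer GRU decoder with additive attention over encoder states."""
    def __init__(self, vocab_size, hidden_dim=256, embed_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attn = nn.Linear(hidden_dim * 2, 1)
        self.gru = nn.GRUCell(embed_dim + hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, enc_states):           # tokens: (B, L), enc_states: (B, T, H)
        B, L = tokens.shape
        h = enc_states.mean(dim=1)                   # init hidden from mean audio context
        logits = []
        for t in range(L):
            # additive attention: score each encoder state against the current hidden state
            scores = self.attn(torch.cat(
                [enc_states, h.unsqueeze(1).expand_as(enc_states)], dim=-1)).squeeze(-1)
            context = (F.softmax(scores, dim=1).unsqueeze(-1) * enc_states).sum(dim=1)
            h = self.gru(torch.cat([self.embed(tokens[:, t]), context], dim=-1), h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)            # (B, L, vocab_size)

# Example forward pass with random inputs (vocab of 1000 words, 10-step caption).
encoder, decoder = MultiScaleAudioEncoder(), AttentionCaptionDecoder(vocab_size=1000)
fine, coarse = encoder(torch.randn(2, 40, 128))
logits = decoder(torch.randint(0, 1000, (2, 10)), torch.cat([fine, coarse], dim=1))
print(logits.shape)  # torch.Size([2, 10, 1000])
```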

No code implementations yet.

Datasets


Introduced in the Paper:

AudioCaps

Used in the Paper:

Flickr30k, AudioSet, MSR-VTT

Results from the Paper


Task: Audio captioning    Dataset: AudioCaps    Model: TopDown-AlignedAtt (1NN)

Metric    Value    Global Rank
CIDEr     0.593    #11
SPIDEr    0.369    #9
SPICE     0.144    #9
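
As a sanity check on the reported numbers: SPIDEr is defined as the arithmetic mean of CIDEr and SPICE, so it can be recomputed directly from the other two metrics. The snippet below is just that arithmetic, not an official evaluation script.

```python
# SPIDEr = average of CIDEr and SPICE (Liu et al., 2017).
cider, spice = 0.593, 0.144
spider = (cider + spice) / 2
print(f"{spider:.4f}")  # 0.3685, consistent with the reported 0.369 up to rounding
```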

Methods


No methods listed for this paper.