Learning Tri-modal Embeddings for Zero-Shot Soundscape Mapping

19 Sep 2023 · Subash Khanal, Srikumar Sastry, Aayush Dhakal, Nathan Jacobs

We focus on the task of soundscape mapping, which involves predicting the most probable sounds that could be perceived at a particular geographic location. We utilise recent state-of-the-art models to encode geotagged audio, a textual description of the audio, and an overhead image of its capture location using contrastive pre-training. The end result is a shared embedding space for the three modalities, which enables the construction of soundscape maps for any geographic region from textual or audio queries. Using the SoundingEarth dataset, we find that our approach significantly outperforms the existing SOTA, with an improvement of image-to-audio Recall@100 from 0.256 to 0.450. Our code is available at https://github.com/mvrl/geoclap.
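
The abstract describes contrastive pre-training that pulls co-located audio, text, and overhead imagery into one shared embedding space. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the encoder stand-ins, embedding dimension, feature sizes, and the choice of summing pairwise InfoNCE terms are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE between two batches of L2-normalized embeddings."""
    logits = a @ b.t() / temperature                # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

class TriModalModel(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        # Placeholders for the pretrained encoders (audio, text, and
        # overhead-image); input feature sizes here are arbitrary.
        self.audio_enc = nn.Linear(128, dim)
        self.text_enc = nn.Linear(768, dim)
        self.image_enc = nn.Linear(2048, dim)

    def forward(self, audio, text, image):
        za = F.normalize(self.audio_enc(audio), dim=-1)
        zt = F.normalize(self.text_enc(text), dim=-1)
        zi = F.normalize(self.image_enc(image), dim=-1)
        # Summing the pairwise contrastive losses ties all three
        # modalities into a single shared embedding space.
        return info_nce(za, zt) + info_nce(za, zi) + info_nce(zt, zi)

# Toy batch: 8 co-located (audio, caption, overhead image) triplets.
model = TriModalModel()
loss = model(torch.randn(8, 128), torch.randn(8, 768), torch.randn(8, 2048))
loss.backward()
```

Once trained, a soundscape map can be rendered by embedding a text or audio query and scoring it against the embeddings of overhead images tiled over a region of interest.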


Datasets

SoundingEarth

Results from the Paper


Ranked #1 on Cross-Modal Retrieval on SoundingEarth (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Cross-Modal Retrieval | SoundingEarth | GeoCLAP | Median Rank | 159 | #1 | Yes |
| Cross-Modal Retrieval | SoundingEarth | GeoCLAP | Image-to-sound R@100 | 0.434 | #1 | Yes |
| Cross-Modal Retrieval | SoundingEarth | GeoCLAP | Sound-to-image R@100 | 0.434 | #1 | Yes |
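
For reference, Median Rank and Recall@K in cross-modal retrieval are typically computed from a query-gallery similarity matrix as sketched below; this is a generic illustration, and the benchmark's exact evaluation protocol may differ.

```python
import torch

def retrieval_metrics(sim, k=100):
    """sim[i, j] = similarity of query i to gallery item j; item i is query i's match."""
    # Rank of the true match: count of gallery items scoring strictly higher, plus 1.
    true_scores = sim.diagonal().unsqueeze(1)      # (N, 1)
    ranks = (sim > true_scores).sum(dim=1) + 1     # (N,)
    median_rank = ranks.float().median().item()
    recall_at_k = (ranks <= k).float().mean().item()
    return median_rank, recall_at_k

# Example: image-to-sound retrieval over a gallery of 1000 items.
sim = torch.randn(1000, 1000)
mr, r100 = retrieval_metrics(sim, k=100)
print(f"Median Rank: {mr:.0f}, R@100: {r100:.3f}")
```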
