AVSBench (Audio-Visual Segmentation)

Introduced by Zhou et al. in Audio-Visual Segmentation with Semantics

AVSBench is a pixel-level audio-visual segmentation benchmark that provides ground-truth masks for the sounding objects in each video. The dataset comprises three subsets: the Single-source and Multi-sources subsets (AVSBench-object) and the Semantic-labels subset (AVSBench-semantic). Accordingly, three settings are studied:

1) semi-supervised audio-visual segmentation with a single sound source

2) fully-supervised audio-visual segmentation with multiple sound sources

3) fully-supervised audio-visual semantic segmentation

Source: Audio-Visual Segmentation with Semantics
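Because every clip pairs video frames with an audio track and pixel-level masks, a loader only needs to walk a directory of video IDs and return the three together. The sketch below is a minimal, hypothetical PyTorch Dataset for the single-source setting; the directory layout, file names, frame count, and image size are assumptions made for illustration and do not reflect the dataset's official release format.

```python
from pathlib import Path

import torch
import torchaudio
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class AVSBenchSingleSource(Dataset):
    """Minimal sketch of a loader for the single-source subset.

    Assumed (hypothetical) layout per video ID:
        root/<video_id>/frames/0.jpg ... 4.jpg   # sampled video frames
        root/<video_id>/audio.wav                # clip audio
        root/<video_id>/masks/*.png              # binary masks for labeled frames
    """

    def __init__(self, root, num_frames=5):
        self.root = Path(root)
        self.num_frames = num_frames
        self.video_ids = sorted(p.name for p in self.root.iterdir() if p.is_dir())
        self.to_tensor = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.video_ids)

    def __getitem__(self, idx):
        vid = self.root / self.video_ids[idx]

        # Stack the sampled frames into a (T, C, H, W) tensor.
        frames = torch.stack([
            self.to_tensor(Image.open(vid / "frames" / f"{t}.jpg").convert("RGB"))
            for t in range(self.num_frames)
        ])

        # Raw waveform; spectrogram extraction is left to the model side.
        waveform, sample_rate = torchaudio.load(str(vid / "audio.wav"))

        # Binary masks (1 = sounding object, 0 = background) for labeled frames.
        masks = torch.stack([
            (self.to_tensor(Image.open(p).convert("L")) > 0.5).float()
            for p in sorted((vid / "masks").glob("*.png"))
        ])

        return {"frames": frames, "audio": waveform, "sample_rate": sample_rate,
                "masks": masks, "video_id": self.video_ids[idx]}
```

In the semi-supervised single-source setting only part of the frames carry masks, so the returned mask tensor may have fewer entries than the frame tensor; the multi-sources and semantic settings would instead load masks for every frame (and class-indexed masks in the semantic case).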
