ZSL Video Classification

1 paper with code • 3 benchmarks • 1 dataset

Zero-shot learning (ZSL) video classification aims to recognize video classes that were unseen during training, typically by transferring knowledge from seen classes through semantic side information such as textual label embeddings.

Most implemented papers

Audio-visual Generalised Zero-shot Learning with Cross-modal Attention and Language

explainableml/avca-gzsl CVPR 2022

Focusing on the relatively underexplored task of audio-visual zero-shot learning, we propose to learn multi-modal representations from audio-visual data using cross-modal attention and exploit textual label embeddings for transferring knowledge from seen classes to unseen classes.
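The core idea described above can be sketched in a few lines. The following is an illustrative toy version, not the authors' AVCA implementation: all function names, feature shapes, and the mean-pooling fusion are assumptions. It shows cross-modal attention between audio and video features, followed by zero-shot classification via nearest textual label embedding.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(video, audio):
    """Fuse video (Tv, d) and audio (Ta, d) frame features.

    Video frames attend over audio frames and vice versa; both
    attended streams are mean-pooled and concatenated into one
    joint embedding of size 2*d (illustrative fusion choice).
    """
    d = video.shape[1]
    scores = video @ audio.T / np.sqrt(d)            # (Tv, Ta)
    video_att = softmax(scores, axis=1) @ audio      # audio-informed video
    audio_att = softmax(scores.T, axis=1) @ video    # video-informed audio
    return np.concatenate([video_att.mean(axis=0), audio_att.mean(axis=0)])

def zero_shot_classify(embedding, class_text_emb):
    """Predict the class whose textual label embedding is closest
    (by cosine similarity); unseen classes need only a text embedding."""
    e = embedding / np.linalg.norm(embedding)
    c = class_text_emb / np.linalg.norm(class_text_emb, axis=1, keepdims=True)
    return int(np.argmax(c @ e))

# Toy usage with random features standing in for real extractors.
rng = np.random.default_rng(0)
video_feats = rng.normal(size=(8, 16))   # 8 video frames, dim 16
audio_feats = rng.normal(size=(4, 16))   # 4 audio segments, dim 16
joint = cross_modal_attention(video_feats, audio_feats)
label_embs = rng.normal(size=(5, 32))    # 5 classes, dim 2*16
pred = zero_shot_classify(joint, label_embs)
```

Because classification reduces to similarity against label embeddings, adding an unseen class at test time only requires embedding its textual label, which is what enables the seen-to-unseen knowledge transfer the paper targets.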