Semantic Role Aware Correlation Transformer for Text to Video Retrieval

26 Jun 2022  ·  Burak Satar, Hongyuan Zhu, Xavier Bresson, Joo Hwee Lim

With the emergence of social media, voluminous video clips are uploaded every day, and retrieving the most relevant visual content with a language query becomes critical. Most approaches learn a joint embedding space for plain textual and visual content without adequately exploiting intra-modality structures and inter-modality correlations. This paper proposes a novel transformer that explicitly disentangles text and video into the semantic roles of objects, spatial contexts and temporal contexts, with an attention scheme that learns intra- and inter-role correlations among the three roles to discover discriminative features for matching at different levels. Preliminary results on the popular YouCook2 benchmark indicate that our approach surpasses a current state-of-the-art method by a large margin in all metrics, and also outperforms two SOTA methods on two metrics.
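The abstract's core idea can be sketched in a few lines: self-attention refines each role's features within a modality (intra-role), and matching is scored by correlating corresponding role embeddings across modalities (inter-role). This is a minimal NumPy sketch under stated assumptions; the role names follow the paper, but all shapes, the pooling, and the scoring function are illustrative choices, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention (single head, no projections).
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

rng = np.random.default_rng(0)
d = 8  # illustrative embedding size
roles = ["object", "spatial", "temporal"]  # the three semantic roles
text = {r: rng.standard_normal((4, d)) for r in roles}   # 4 text tokens per role
video = {r: rng.standard_normal((6, d)) for r in roles}  # 6 video clips per role

# Intra-role correlation: refine each role's features with self-attention.
text = {r: attention(t, t, t) for r, t in text.items()}
video = {r: attention(v, v, v) for r, v in video.items()}

def pool(x):
    # Mean-pool a role's features and L2-normalize (an assumed pooling choice).
    m = x.mean(axis=0)
    return m / np.linalg.norm(m)

# Inter-role matching: average cosine similarity of corresponding roles.
score = sum(float(pool(text[r]) @ pool(video[r])) for r in roles) / len(roles)
```

Because each per-role term is a cosine similarity of unit vectors, `score` always lies in [-1, 1]; a real model would learn the projections and attention weights end to end rather than pool fixed features.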


Datasets

YouCook2

Results from the Paper


Task: Video Retrieval   Dataset: YouCook2   Model: Satar et al.

Metric (text-to-video)   Value   Global Rank
Median Rank              77      #10
R@1                      5.3     #13
R@5                      14.5    #12
R@10                     20.8    #16

Methods


No methods listed for this paper.