Prompting Visual-Language Models for Efficient Video Understanding

8 Dec 2021  ·  Chen Ju, Tengda Han, Kunhao Zheng, Ya Zhang, Weidi Xie

Image-based visual-language (I-VL) pre-training has shown great success in learning joint visual-textual representations from large-scale web data, revealing a remarkable ability for zero-shot generalisation. This paper presents a simple but strong baseline that efficiently adapts the pre-trained I-VL model and exploits its powerful ability for resource-hungry video understanding tasks, with minimal training. Specifically, we propose to optimise a few randomly initialised vectors, termed continuous prompt vectors, that convert video-related tasks into the same format as the pre-training objectives. In addition, to bridge the gap between static images and videos, temporal information is encoded with lightweight Transformers stacked on top of frame-wise visual features. Experimentally, we conduct extensive ablation studies to analyse the critical components. On 10 public benchmarks of action recognition, action localisation, and text-video retrieval, across closed-set, few-shot, and zero-shot scenarios, we achieve performance competitive with or superior to existing state-of-the-art methods, despite optimising significantly fewer parameters.
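To make the recipe concrete, below is a minimal PyTorch sketch (not the authors' released code) of the two learnable components described in the abstract: continuous prompt vectors wrapped around class-token embeddings, and a lightweight temporal Transformer that pools frame-wise features from a frozen CLIP-like I-VL backbone. All names and sizes (embed_dim=512, eight prefix/suffix vectors, two Transformer layers, the classify helper) are illustrative assumptions; in the actual pipeline the prompted token sequence would be passed through the frozen text encoder, which the dummy usage below merely stands in for.

```python
# Minimal sketch under the assumptions stated above; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextPrompt(nn.Module):
    """Prepend/append learnable continuous prompt vectors to class-name token embeddings."""

    def __init__(self, embed_dim: int = 512, n_prefix: int = 8, n_suffix: int = 8):
        super().__init__()
        self.prefix = nn.Parameter(torch.randn(n_prefix, embed_dim) * 0.02)
        self.suffix = nn.Parameter(torch.randn(n_suffix, embed_dim) * 0.02)

    def forward(self, class_embeddings: torch.Tensor) -> torch.Tensor:
        # class_embeddings: (num_classes, n_tokens, embed_dim), taken from the
        # frozen text encoder's token-embedding layer.
        n_cls = class_embeddings.shape[0]
        prefix = self.prefix.unsqueeze(0).expand(n_cls, -1, -1)
        suffix = self.suffix.unsqueeze(0).expand(n_cls, -1, -1)
        return torch.cat([prefix, class_embeddings, suffix], dim=1)


class TemporalTransformer(nn.Module):
    """Lightweight Transformer over frame-wise visual features, mean-pooled to one video embedding."""

    def __init__(self, embed_dim: int = 512, n_layers: int = 2, n_heads: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, frame_features: torch.Tensor) -> torch.Tensor:
        # frame_features: (batch, n_frames, embed_dim) from the frozen image encoder.
        return self.encoder(frame_features).mean(dim=1)  # (batch, embed_dim)


def classify(video_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity logits between video embeddings and prompted class embeddings."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    return v @ t.t()  # (batch, num_classes)


if __name__ == "__main__":
    # Dummy tensors only; real features come from a frozen I-VL model such as CLIP.
    frame_feats = torch.randn(4, 16, 512)    # 4 videos, 16 frames each
    class_tok_emb = torch.randn(10, 5, 512)  # 10 class names, 5 tokens each
    prompt, temporal = TextPrompt(), TemporalTransformer()
    prompted = prompt(class_tok_emb)         # (10, 8 + 5 + 8, 512), to be fed to the frozen text encoder
    video_emb = temporal(frame_feats)        # (4, 512)
    # Mean-pooling the prompted tokens here is only a stand-in for the frozen text encoder.
    logits = classify(video_emb, prompted.mean(dim=1))
    print(logits.shape)                      # torch.Size([4, 10])
```

Only the prompt vectors and the temporal Transformer would be trained in this setup; the I-VL image and text encoders stay frozen, which is what keeps the number of optimised parameters small.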

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Zero-Shot Action Detection | ActivityNet-1.3 | EffPrompt (75% seen split) | mAP IoU@0.5 | 37.6 | #5 |
| Zero-Shot Action Detection | ActivityNet-1.3 | EffPrompt (50% seen split) | mAP IoU@0.5 | 32 | #8 |
