CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval

18 Apr 2021 · Huaishao Luo, Lei Ji, Ming Zhong, Yang Chen, Wen Lei, Nan Duan, Tianrui Li

Video-text retrieval plays an essential role in multi-modal research and is widely used in many real-world web applications. CLIP (Contrastive Language-Image Pre-training), an image-language pre-training model, has demonstrated the power of visual concept learning from web-collected image-text datasets. In this paper, we propose a CLIP4Clip model to transfer the knowledge of the CLIP model to video-language retrieval in an end-to-end manner. Several questions are investigated via empirical studies: 1) Are image features sufficient for video-text retrieval? 2) How does post-pretraining on a large-scale video-text dataset affect the performance of CLIP? 3) What is a practical mechanism for modeling temporal dependency between video frames? 4) How sensitive is the model to hyper-parameters on the video-text retrieval task? Extensive experimental results show that the CLIP4Clip model transferred from CLIP achieves SOTA results on various video-text retrieval datasets, including MSR-VTT, MSVD, LSMDC, ActivityNet, and DiDeMo. We release our code at https://github.com/ArrowLuo/CLIP4Clip.
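The simplest similarity calculator studied in the paper is parameter-free: per-frame CLIP image embeddings are mean-pooled into a single video vector, which is compared to the CLIP text embedding by cosine similarity. A minimal sketch of that idea, using random arrays in place of real CLIP features (the function name and the 512-d feature size are illustrative assumptions, not taken from the paper's code):

```python
import numpy as np

def video_text_similarity(frame_embs: np.ndarray, text_emb: np.ndarray) -> float:
    """Parameter-free ("mean pooling") similarity: average per-frame image
    embeddings into one video vector, then take cosine similarity with the
    text embedding. Shapes: frame_embs is [num_frames, dim], text_emb is [dim]."""
    video_emb = frame_embs.mean(axis=0)
    video_emb = video_emb / np.linalg.norm(video_emb)
    text_emb = text_emb / np.linalg.norm(text_emb)
    return float(video_emb @ text_emb)

# Toy example: random vectors standing in for CLIP encoder outputs.
rng = np.random.default_rng(0)
frames = rng.normal(size=(12, 512))   # 12 sampled frames, 512-d features
caption = rng.normal(size=512)
score = video_text_similarity(frames, caption)
```

Because both vectors are L2-normalized, the score lies in [-1, 1]; in retrieval, captions are ranked against all videos by this score. The paper's other variants (e.g. the seqTransf model in the MSR-VTT table below) replace the mean pooling with a learned temporal module.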


Results from the Paper


Each entry below lists the metric value and the model's global rank on that leaderboard.

Video Retrieval on ActivityNet (CLIP4Clip)
  text-to-video R@1           40.5   (rank #24)
  text-to-video R@5           73.4   (rank #18)
  text-to-video R@50          98.2   (rank #1)
  text-to-video Median Rank   2      (rank #5)
  text-to-video Mean Rank     7.5    (rank #11)

Video Retrieval on DiDeMo (CLIP4Clip)
  text-to-video R@1           43.4   (rank #33)
  text-to-video R@5           70.2   (rank #31)
  text-to-video R@10          80.6   (rank #29)
  text-to-video Median Rank   2.0    (rank #9)
  text-to-video Mean Rank     17.5   (rank #12)

Zero-Shot Video Retrieval on LSMDC (CLIP4Clip)
  text-to-video R@1           15.1   (rank #10)
  text-to-video R@5           28.5   (rank #11)
  text-to-video R@10          36.4   (rank #11)
  text-to-video Median Rank   28     (rank #2)
  text-to-video Mean Rank     117    (rank #1)

Video Retrieval on LSMDC (CLIP4Clip)
  text-to-video R@1           21.6   (rank #24)
  text-to-video R@5           41.8   (rank #19)
  text-to-video R@10          49.8   (rank #19)
  text-to-video Mean Rank     58.0   (rank #9)

Video Retrieval on MSR-VTT (CLIP4Clip-seqTransf)
  text-to-video R@1           44.5   (rank #13)
  text-to-video R@5           71.4   (rank #11)
  text-to-video R@10          81.6   (rank #10)

Text to Video Retrieval on MSR-VTT (CLIP4Clip)
  text-to-video R@1           44.5   (rank #1)

Zero-Shot Video Retrieval on MSR-VTT (CLIP4Clip)
  text-to-video R@1           32.0   (rank #16)
  text-to-video R@5           57.0   (rank #14)
  text-to-video R@10          66.9   (rank #13)
  text-to-video Median Rank   4      (rank #3)
  text-to-video Mean Rank     34.0   (rank #2)

Video Retrieval on MSR-VTT-1kA (CLIP4Clip)
  text-to-video Mean Rank     15.3   (rank #19)
  text-to-video R@10          81.6   (rank #34)
  text-to-video Median Rank   2      (rank #10)
  video-to-text R@1           42.7   (rank #22)
  video-to-text R@5           70.9   (rank #20)
  video-to-text R@10          80.6   (rank #21)
  video-to-text Median Rank   2      (rank #7)

Zero-Shot Video Retrieval on MSVD (CLIP4Clip)
  text-to-video R@1           38.5   (rank #10)
  text-to-video R@5           66.9   (rank #10)
  text-to-video R@10          76.8   (rank #10)
  text-to-video Median Rank   2      (rank #3)
  text-to-video Mean Rank     17.8   (rank #1)

Video Retrieval on MSVD (CLIP4Clip)
  text-to-video R@1           46.2   (rank #19)
  text-to-video R@5           76.1   (rank #16)
  text-to-video R@10          84.6   (rank #15)
  text-to-video Median Rank   2      (rank #8)
  text-to-video Mean Rank     10.0   (rank #12)
  video-to-text R@1           62.0   (rank #12)
  video-to-text R@5           87.3   (rank #10)
  video-to-text R@10          92.6   (rank #10)
  video-to-text Median Rank   1      (rank #1)
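All of the R@K, Median Rank, and Mean Rank numbers above can be derived from a single text-video similarity matrix. A hedged sketch of the standard computation (the function name and the assumption that text i matches video i are ours, not from the paper's code):

```python
import numpy as np

def retrieval_metrics(sim: np.ndarray) -> dict:
    """Compute text-to-video retrieval metrics from a [num_texts, num_videos]
    similarity matrix, assuming the ground-truth video for text i is video i
    (the usual 1:1 evaluation protocol)."""
    # Sort candidate videos for each text, best match first.
    order = np.argsort(-sim, axis=1)
    # Rank (1-based) at which each text's ground-truth video appears.
    hits = np.argwhere(order == np.arange(sim.shape[0])[:, None])
    ranks = hits[:, 1] + 1
    return {
        "R@1": float(np.mean(ranks <= 1) * 100),
        "R@5": float(np.mean(ranks <= 5) * 100),
        "R@10": float(np.mean(ranks <= 10) * 100),
        "Median Rank": float(np.median(ranks)),
        "Mean Rank": float(np.mean(ranks)),
    }
```

Video-to-text metrics use the same routine on the transposed matrix. Higher R@K is better, while lower Median/Mean Rank is better, which is why a Mean Rank of 117 can still be rank #1 on a zero-shot leaderboard.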
