Clover: Towards A Unified Video-Language Alignment and Fusion Model

Building a universal Video-Language model for solving various video understanding tasks (\emph{e.g.}, text-video retrieval, video question answering) is an open challenge in machine learning. Towards this goal, most recent works build the model by stacking uni-modal and cross-modal feature encoders and train it with pair-wise contrastive pre-text tasks. Though offering attractive generality, the resulting models have to trade off efficiency against performance, and they mostly adopt different architectures for different downstream tasks. We find this is because pair-wise training cannot properly \emph{align} and \emph{fuse} features from different modalities. We therefore introduce \textbf{Clover}\textemdash a Correlated Video-Language pre-training method\textemdash towards a universal Video-Language model that solves multiple video understanding tasks without compromising either performance or efficiency. Clover improves cross-modal feature alignment and fusion via a novel tri-modal alignment pre-training task. Additionally, we enhance the tri-modal alignment by learning from semantically masked samples and by a new pair-wise ranking loss. Clover establishes new state-of-the-art results on multiple downstream tasks, including three retrieval tasks under both zero-shot and fine-tuning settings, and eight video question answering tasks. Code and pre-trained models will be released at \url{https://github.com/LeeYN-43/Clover}.
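To make the idea of tri-modal alignment concrete, the following is a minimal NumPy sketch, not the paper's implementation: it assumes per-modality embeddings for video, text, and a fused (cross-modal) representation, and applies a symmetric InfoNCE-style contrastive loss to every pair of modalities so that matched triplets are pulled together. The function names (`info_nce`, `tri_modal_alignment`), the temperature value, and the equal weighting of the three pair losses are illustrative assumptions.

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE between two sets of embeddings a, b of shape (N, D),
    where row i of a matches row i of b. Returns a scalar loss."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # (N, N) cosine similarities, scaled

    def xent(l):
        # Cross-entropy with the matched pairs on the diagonal.
        l = l - l.max(axis=1, keepdims=True)        # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # Average both retrieval directions (a->b and b->a).
    return 0.5 * (xent(logits) + xent(logits.T))

def tri_modal_alignment(video, text, fused, temperature=0.07):
    """Hypothetical tri-modal alignment objective: besides the usual
    video<->text contrast, also align each uni-modal embedding with the
    fused cross-modal embedding."""
    return (info_nce(video, text, temperature)
            + info_nce(video, fused, temperature)
            + info_nce(text, fused, temperature))
```

Under this sketch, a batch where the three modalities agree (near-identical embeddings) scores a much lower loss than a batch of unrelated embeddings, which is the signal the pre-training task exploits.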

CVPR 2023

Results from the Paper


Task                            Dataset        Model   Metric                       Value   Global Rank
Zero-Shot Video Retrieval       DiDeMo         Clover  text-to-video R@1            29.5    #16
Zero-Shot Video Retrieval       DiDeMo         Clover  text-to-video R@5            55.2    #16
Zero-Shot Video Retrieval       DiDeMo         Clover  text-to-video R@10           66.3    #16
Zero-Shot Video Retrieval       DiDeMo         Clover  text-to-video Median Rank    4       #3
Video Retrieval                 DiDeMo         Clover  text-to-video R@1            50.1    #23
Video Retrieval                 DiDeMo         Clover  text-to-video R@5            76.7    #22
Video Retrieval                 DiDeMo         Clover  text-to-video R@10           85.6    #16
Video Retrieval                 DiDeMo         Clover  text-to-video Median Rank    1       #1
Zero-Shot Video Retrieval       LSMDC          Clover  text-to-video R@1            14.7    #11
Zero-Shot Video Retrieval       LSMDC          Clover  text-to-video R@5            29.2    #10
Zero-Shot Video Retrieval       LSMDC          Clover  text-to-video R@10           38.2    #10
Zero-Shot Video Retrieval       LSMDC          Clover  text-to-video Median Rank    24      #1
Video Retrieval                 LSMDC          Clover  text-to-video R@1            24.8    #18
Video Retrieval                 LSMDC          Clover  text-to-video R@5            44      #14
Video Retrieval                 LSMDC          Clover  text-to-video R@10           54.5    #12
Video Retrieval                 LSMDC          Clover  text-to-video Median Rank    8       #6
Video Question Answering        LSMDC-FiB      Clover  Accuracy                     54.1    #1
Video Question Answering        LSMDC-MC       Clover  Accuracy                     83.7    #2
Zero-Shot Video Retrieval       MSR-VTT        Clover  text-to-video R@1            26.4    #20
Zero-Shot Video Retrieval       MSR-VTT        Clover  text-to-video R@5            49.5    #19
Zero-Shot Video Retrieval       MSR-VTT        Clover  text-to-video R@10           60      #18
Zero-Shot Video Retrieval       MSR-VTT        Clover  text-to-video Median Rank    6       #4
Video Retrieval                 MSR-VTT-1kA    Clover  text-to-video R@1            40.5    #36
Video Retrieval                 MSR-VTT-1kA    Clover  text-to-video R@5            69.8    #34
Video Retrieval                 MSR-VTT-1kA    Clover  text-to-video R@10           79.4    #37
Video Retrieval                 MSR-VTT-1kA    Clover  text-to-video Median Rank    2       #10
Video Question Answering        MSRVTT-MC      Clover  Accuracy                     95.2    #4
Visual Question Answering (VQA) MSRVTT-QA      Clover  Accuracy                     0.441   #18
Visual Question Answering (VQA) MSVD-QA        Clover  Accuracy                     0.524   #19
TGIF-Frame                      TGIF-QA        Clover  Accuracy                     71.6    #9
TGIF-Action                     TGIF-QA        Clover  Accuracy                     95      #4
TGIF-Transition                 TGIF-QA        Clover  Accuracy                     98.2    #4