VIMPAC: Video Pre-Training via Masked Token Prediction and Contrastive Learning

21 Jun 2021  ·  Hao Tan, Jie Lei, Thomas Wolf, Mohit Bansal

Video understanding relies on perceiving the global content and modeling its internal connections (e.g., causality, movement, and spatio-temporal correspondence). To learn these interactions, we apply a mask-then-predict pre-training task on discretized video tokens generated via VQ-VAE. Unlike language, where text tokens are largely independent of their neighbors, neighboring video tokens typically have strong correlations (e.g., consecutive video frames usually look very similar), and hence uniformly masking individual tokens makes the task too trivial to learn useful representations. To deal with this issue, we propose a block-wise masking strategy where we mask neighboring video tokens in both the spatial and temporal domains. We also add an augmentation-free contrastive learning method to further capture the global content by predicting whether two video clips are sampled from the same video. We pre-train our model on uncurated videos and show that our pre-trained model can reach state-of-the-art results on several video understanding datasets (e.g., SSV2, Diving48). Lastly, we provide detailed analyses of model scalability and pre-training method design. Code is released at https://github.com/airsplay/vimpac.
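As a rough illustration of the block-wise masking idea described above (not the authors' exact implementation; the grid shape, block-size ranges, and function name below are illustrative assumptions), one can repeatedly mask contiguous spatio-temporal blocks of token positions until a target masking ratio is reached, so the model cannot simply copy a masked token from an unmasked neighbor:

import numpy as np

def block_wise_mask(T, H, W, mask_ratio=0.5, max_block=(4, 8, 8), rng=None):
    """Sample a boolean mask over a (T, H, W) grid of discretized video tokens
    by masking random contiguous spatio-temporal blocks until roughly
    `mask_ratio` of the positions are covered.

    Hypothetical sketch: the shapes, block-size ranges, and masking ratio are
    illustrative, not the values used in the VIMPAC paper.
    """
    rng = rng or np.random.default_rng()
    mask = np.zeros((T, H, W), dtype=bool)
    target = int(mask_ratio * T * H * W)
    while mask.sum() < target:
        # Sample a block size along each axis (temporal, height, width).
        t = rng.integers(1, max_block[0] + 1)
        h = rng.integers(1, max_block[1] + 1)
        w = rng.integers(1, max_block[2] + 1)
        # Sample the block's front/top-left corner so the block fits in the grid.
        t0 = rng.integers(0, T - t + 1)
        h0 = rng.integers(0, H - h + 1)
        w0 = rng.integers(0, W - w + 1)
        mask[t0:t0 + t, h0:h0 + h, w0:w0 + w] = True
    return mask

# Example: mask ~50% of a 5x16x16 grid of VQ-VAE tokens for mask-then-predict.
m = block_wise_mask(T=5, H=16, W=16, mask_ratio=0.5)
print(m.mean())  # fraction of masked token positions

Masking whole blocks rather than scattered individual tokens is what keeps the prediction task non-trivial when neighboring video tokens are highly correlated.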


Results from the Paper


Task                   Dataset                 Model    Metric                        Value   Global Rank
Action Recognition     Diving-48               VIMPAC   Accuracy                      85.5    #10
Action Recognition     HMDB-51                 VIMPAC   Average accuracy of 3 splits  65.9    #59
Action Classification  Kinetics-400            VIMPAC   Acc@1                         77.4    #132
Action Recognition     Something-Something V2  VIMPAC   Top-1 Accuracy                68.1    #53
Action Recognition     UCF101                  VIMPAC   3-fold Accuracy               92.7    #59
