A Large-Scale Analysis on Self-Supervised Video Representation Learning

9 Jun 2023  ·  Akash Kumar, Ashlesha Kumar, Vibhav Vineet, Yogesh Singh Rawat ·

Self-supervised learning is an effective approach for label-free model pre-training, especially in the video domain, where labeling is expensive. Existing self-supervised works in the video domain use varying experimental setups to demonstrate their effectiveness, and comparison across approaches becomes challenging in the absence of a standard benchmark. In this work, we first provide a benchmark that enables comparison of existing approaches on the same ground. Next, we study five different aspects of self-supervised learning important for videos: 1) dataset size, 2) complexity, 3) data distribution, 4) data noise, and 5) feature analysis. To facilitate this study, we focus on seven different methods along with seven different network architectures and perform an extensive set of experiments on five different datasets with an evaluation on two different downstream tasks. We present several interesting insights from this study, spanning properties of the pretraining and target datasets, pretext tasks, and model architectures, among others. We further put some of these insights to the test and propose an approach that requires a limited amount of training data and outperforms existing state-of-the-art approaches that use 10x the pretraining data. We believe this work will pave the way toward a better understanding of self-supervised pretext tasks in video representation learning.
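The benchmark results below report "3-fold Accuracy" on UCF101, i.e., the mean top-1 accuracy over the dataset's three official train/test splits. A minimal sketch of how that aggregate is computed (the per-split accuracies here are hypothetical placeholders, not values from the paper):

```python
# 3-fold accuracy on UCF101: average the top-1 accuracy obtained on each of
# the three official train/test splits. The per-split numbers below are
# illustrative placeholders chosen only to show the computation.
split_accuracies = [97.1, 97.4, 97.4]  # hypothetical split-1/2/3 accuracies (%)

three_fold_accuracy = sum(split_accuracies) / len(split_accuracies)
print(round(three_fold_accuracy, 1))  # prints 97.3
```

Reporting the mean over fixed splits, rather than a single split, reduces sensitivity to any one split's train/test partition.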

Benchmark results

Task: Self-Supervised Action Recognition
Dataset: UCF101
Model: SSL-KD (R21D-18)
Metric: 3-fold Accuracy — 97.3 (Global Rank #3)
Pre-Training Dataset: Kinetics400 (Global Rank #1)
Frozen: false (Global Rank #1)
