Self-Supervised Learning for Semi-Supervised Temporal Action Proposal

Self-supervised learning has shown remarkable performance in leveraging unlabeled data for various video tasks. In this paper, we focus on applying the power of self-supervised methods to improve semi-supervised action proposal generation. Specifically, we design an effective Self-supervised Semi-supervised Temporal Action Proposal (SSTAP) framework. SSTAP contains two crucial branches, i.e., a temporal-aware semi-supervised branch and a relation-aware self-supervised branch. The semi-supervised branch improves the proposal model by introducing two temporal perturbations, i.e., temporal feature shift and temporal feature flip, into the mean teacher framework. The self-supervised branch defines two pretext tasks, masked feature reconstruction and clip-order prediction, to learn the relations among temporal clues. In this way, SSTAP can better exploit unlabeled videos and improve the discriminative ability of the learned action features. We extensively evaluate the proposed SSTAP on the THUMOS14 and ActivityNet v1.3 datasets. The experimental results demonstrate that SSTAP significantly outperforms state-of-the-art semi-supervised methods and even matches fully-supervised methods. Code is available at https://github.com/wangxiang1230/SSTAP.
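
To make the two branches concrete, below is a minimal PyTorch sketch — not the authors' implementation — of the two temporal perturbations and of input preparation for the two pretext tasks. The tensor layout (batch, channels, temporal length), the shift/mask ratios, and all function names are illustrative assumptions.

```python
import torch


def temporal_feature_shift(x: torch.Tensor, shift_ratio: float = 0.25) -> torch.Tensor:
    """Shift a fraction of the channels one step along the temporal axis.

    Illustrative version: the first chunk of channels is shifted forward
    in time, the second chunk backward, the rest left unchanged
    (zero-padded at the sequence ends).
    """
    b, c, t = x.shape
    n = int(c * shift_ratio)
    out = torch.zeros_like(x)
    out[:, :n, 1:] = x[:, :n, :-1]            # forward temporal shift
    out[:, n:2 * n, :-1] = x[:, n:2 * n, 1:]  # backward temporal shift
    out[:, 2 * n:] = x[:, 2 * n:]             # remaining channels untouched
    return out


def temporal_feature_flip(x: torch.Tensor) -> torch.Tensor:
    """Reverse the feature sequence along the temporal axis."""
    return torch.flip(x, dims=[-1])


def mask_features(x: torch.Tensor, mask_ratio: float = 0.15):
    """Zero out random temporal positions; the original x serves as the
    reconstruction target for the masked-feature-reconstruction task."""
    b, c, t = x.shape
    keep = (torch.rand(b, 1, t, device=x.device) > mask_ratio).float()
    return x * keep, keep


def shuffle_clips(x: torch.Tensor, num_clips: int = 4):
    """Split the sequence into clips and permute them; the permutation
    is the label for the clip-order-prediction task."""
    clips = x.chunk(num_clips, dim=-1)
    perm = torch.randperm(num_clips)
    return torch.cat([clips[int(i)] for i in perm], dim=-1), perm
```

In a mean teacher setup of this kind, the student branch would receive the perturbed features while the teacher receives the originals, with a consistency loss between their proposal outputs; the masked and shuffled features would feed the reconstruction and order-prediction heads, respectively.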

CVPR 2021 · PDF · Abstract
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Semi-Supervised Action Detection | ActivityNet-1.3 | SSTAP (60% labeled, IoU thresh=0.5) | mAP | 50.1 | #2 |
| Temporal Action Localization | ActivityNet-1.3 | SSTAP@100%+ | mAP | 34.48 | #24 |
| Temporal Action Localization | ActivityNet-1.3 | SSTAP@100%+ | mAP IOU@0.5 | 50.72 | #21 |
| Temporal Action Localization | ActivityNet-1.3 | SSTAP@100%+ | mAP IOU@0.75 | 35.28 | #16 |
| Temporal Action Localization | ActivityNet-1.3 | SSTAP@100%+ | mAP IOU@0.95 | 7.87 | #19 |
| Semi-Supervised Action Detection | THUMOS'14 | SSTAP (10% labeled, IoU thresh=0.3) | mAP | 56.4 | #2 |

Methods


No methods listed for this paper.