no code implementations • 28 Apr 2022 • Yang Liu, Ying Tan, Haoyuan Lan
To learn supervisory information from unlabeled videos, we propose a novel self-supervised contrastive learning module (SelfCL).
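The snippet does not spell out SelfCL's loss. As a hedged illustration only, self-supervised contrastive modules of this kind are commonly built on an InfoNCE-style objective that pulls paired views together and pushes other samples apart; the exact SelfCL formulation may differ. A minimal NumPy sketch:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Generic InfoNCE contrastive loss (illustrative; not the exact SelfCL).

    anchors, positives: (N, D) L2-normalized embeddings. Row i of `positives`
    is the positive pair of row i of `anchors`; all other rows serve as
    in-batch negatives.
    """
    # Scaled cosine-similarity logits between every anchor/positive pair.
    logits = anchors @ positives.T / temperature          # (N, N)
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    # Row-wise log-softmax; diagonal entries are the positive-pair log-probs.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy usage: identical views act as perfectly matched positives.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
matched_loss = info_nce_loss(z, z)
mismatched_loss = info_nce_loss(z, np.roll(z, 1, axis=0))
```

Matched views yield a lower loss than shuffled (mismatched) pairings, which is the property a contrastive pretext task exploits to mine supervisory signal from unlabeled video.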
2 code implementations • 7 Dec 2021 • Yang Liu, Keze Wang, Lingbo Liu, Haoyuan Lan, Liang Lin
To overcome these limitations, we take advantage of the multi-scale temporal dependencies within videos and propose a novel video self-supervised learning framework named Temporal Contrastive Graph Learning (TCGL), which jointly models the inter-snippet and intra-snippet temporal dependencies for temporal representation learning with a hybrid graph contrastive learning strategy.
no code implementations • 4 Jan 2021 • Yang Liu, Keze Wang, Haoyuan Lan, Liang Lin
To model multi-scale temporal dependencies, our TCGL integrates the prior knowledge about the frame and snippet orders into graph structures, i.e., the intra-/inter-snippet temporal contrastive graphs.
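The snippet says order priors are encoded as graph structure but does not give the construction. One natural reading, sketched here as an assumption rather than the paper's exact design, is a chain adjacency: frames within a snippet are linked to their temporal neighbors (intra-snippet graph), and snippet-level nodes are linked in snippet order (inter-snippet graph).

```python
import numpy as np

def order_chain_graph(num_nodes):
    """Undirected chain adjacency encoding temporal order (illustrative;
    the paper's actual intra-/inter-snippet graphs may be built differently).

    Node t is connected to node t+1, so the graph structure itself carries
    the prior knowledge of frame (or snippet) ordering.
    """
    A = np.zeros((num_nodes, num_nodes))
    idx = np.arange(num_nodes - 1)
    A[idx, idx + 1] = 1   # edge from step t to step t+1
    return A + A.T        # symmetrize: undirected temporal neighborhood

# Hypothetical sizes: 4 frames per snippet, 3 snippets per video.
A_intra = order_chain_graph(4)  # intra-snippet graph over frames
A_inter = order_chain_graph(3)  # inter-snippet graph over snippets
```

Under this reading, a graph neural network operating on `A_intra` and `A_inter` can only aggregate features along temporally adjacent nodes, which is how the order prior constrains representation learning at both scales.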