Background Suppression Network for Weakly-supervised Temporal Action Localization

22 Nov 2019  ·  Pilhyeon Lee, Youngjung Uh, Hyeran Byun ·

Weakly-supervised temporal action localization is a challenging problem because frame-wise labels are not available during training; the only supervision is video-level labels indicating whether each video contains frames of the actions of interest. Previous methods aggregate frame-level class scores into a video-level prediction and learn from video-level action labels. This formulation does not fully model the problem: background frames are forced to be misclassified as action classes in order to predict video-level labels accurately. In this paper, we design the Background Suppression Network (BaS-Net), which introduces an auxiliary background class and uses a two-branch weight-sharing architecture with an asymmetric training strategy. This enables BaS-Net to suppress activations from background frames and thereby improve localization performance. Extensive experiments demonstrate the effectiveness of BaS-Net and its superiority over state-of-the-art methods on the most popular benchmarks, THUMOS'14 and ActivityNet. Our code and the trained model are available at https://github.com/Pilhyeon/BaSNet-pytorch.
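The mechanism described above can be sketched in a few lines: two branches share one snippet-level classifier over C action classes plus a background class, the suppression branch filters its input features with a foreground attention weight, and top-k temporal pooling turns snippet scores into a video-level prediction. This is a minimal illustrative sketch with made-up dimensions and random weights, not the authors' implementation (see their repository for that); the names `topk_pool`, `cas_base`, and `cas_supp` are our own.

```python
import numpy as np

def topk_pool(cas, k):
    # Average the k highest activations per class over time: (T, C+1) -> (C+1,).
    return np.sort(cas, axis=0)[-k:].mean(axis=0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
T, D, C = 20, 8, 3                 # snippets, feature dim, action classes
feats = rng.normal(size=(T, D))    # stand-in for pre-extracted snippet features

# Shared snippet-level classifier over C actions + 1 auxiliary background class.
W = rng.normal(size=(D, C + 1))
# Foreground attention weights in (0, 1), one per snippet (learned in the real model).
attention = 1.0 / (1.0 + np.exp(-rng.normal(size=(T, 1))))

cas_base = feats @ W               # base branch: raw features
cas_supp = (attention * feats) @ W # suppression branch: background filtered out

k = max(1, T // 8)
p_base = softmax(topk_pool(cas_base, k))  # trained with background label present
p_supp = softmax(topk_pool(cas_supp, k))  # trained with background label absent
```

The asymmetry lies in the labels: the base branch is trained to predict the background class as present (every untrimmed video contains background), while the suppression branch is trained to predict it as absent, which pushes the attention weights to zero out background snippets.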


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Weakly Supervised Action Localization | ActivityNet-1.2 | BaS-Net | mAP@0.5 | 38.5 | #9 |
| Weakly Supervised Action Localization | ActivityNet-1.3 | BaS-Net | mAP@0.5 | 34.5 | #10 |
| Weakly Supervised Action Localization | ActivityNet-1.3 | BaS-Net | mAP@0.5:0.95 | 22.2 | #10 |
| Weakly Supervised Action Localization | THUMOS'14 | BaS-Net | mAP@0.5 | 27.0 | #11 |
| Weakly Supervised Action Localization | THUMOS 2014 | BaS-Net | mAP@0.5 | 27.0 | #16 |
| Weakly Supervised Action Localization | THUMOS 2014 | BaS-Net | mAP@0.1:0.7 | 35.3 | #17 |
| Weakly Supervised Action Localization | THUMOS 2014 | BaS-Net | mAP@0.1:0.5 | 43.6 | #16 |
