TACNet: Transition-Aware Context Network for Spatio-Temporal Action Detection

CVPR 2019  ·  Lin Song, Shiwei Zhang, Gang Yu, Hongbin Sun

Current state-of-the-art approaches for spatio-temporal action detection achieve impressive results but remain unsatisfactory at detecting the temporal extent of actions. The main reason is that some ambiguous states resemble real actions closely enough to be treated as target actions even by a well-trained network. In this paper, we define these ambiguous samples as "transitional states" and propose a Transition-Aware Context Network (TACNet) to distinguish them. The proposed TACNet includes two main components: a temporal context detector and a transition-aware classifier. The temporal context detector extracts long-term context information with constant time complexity by constructing a recurrent network. The transition-aware classifier further distinguishes transitional states by classifying action and transitional states simultaneously. The proposed TACNet can therefore substantially improve the performance of spatio-temporal action detection. We extensively evaluate TACNet on the UCF101-24 and J-HMDB datasets. The experimental results demonstrate that TACNet obtains competitive performance on J-HMDB and significantly outperforms state-of-the-art methods on the untrimmed UCF101-24 in terms of both frame-mAP and video-mAP.
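The two components can be pictured as a small recurrent cell that carries long-term context at constant cost per frame, feeding two heads that score action classes and a binary transitional-state flag in parallel. The sketch below is a minimal NumPy illustration of that idea, not the authors' implementation; all dimensions, weight shapes, and function names (`step`, `classify_sequence`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper
feat_dim, hidden_dim, num_actions = 16, 32, 24

# A GRU-style cell: fixed-size hidden state, so long-term context
# accumulates with constant time and memory per frame.
Wz = rng.normal(0, 0.1, (hidden_dim, feat_dim + hidden_dim))
Wr = rng.normal(0, 0.1, (hidden_dim, feat_dim + hidden_dim))
Wh = rng.normal(0, 0.1, (hidden_dim, feat_dim + hidden_dim))

# Two heads: multi-class action logits and a binary "transitional state" flag,
# evaluated simultaneously on the same context vector.
W_act = rng.normal(0, 0.1, (num_actions, hidden_dim))
W_trans = rng.normal(0, 0.1, (1, hidden_dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(h, x):
    """One recurrent update of the context vector h given frame feature x."""
    xh = np.concatenate([x, h])
    z = sigmoid(Wz @ xh)                                  # update gate
    r = sigmoid(Wr @ xh)                                  # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h]))    # candidate state
    return (1 - z) * h + z * h_tilde

def classify_sequence(frames):
    """Per-frame (action_logits, transitional-state probability) pairs."""
    h = np.zeros(hidden_dim)
    outputs = []
    for x in frames:
        h = step(h, x)                       # constant cost per frame
        action_logits = W_act @ h            # which action, if any
        p_transition = sigmoid(W_trans @ h)[0]  # is this a transitional state?
        outputs.append((action_logits, p_transition))
    return outputs

frames = rng.normal(size=(5, feat_dim))
outs = classify_sequence(frames)
```

At inference time, frames flagged as transitional can be suppressed when linking per-frame detections into tubes, which is how a transition-aware score would tighten temporal extents.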


Results from the Paper


Task              Dataset     Model   Metric          Value   Global Rank
Action Detection  J-HMDB      TACNet  Video-mAP@0.2   74.1    #10
Action Detection  J-HMDB      TACNet  Video-mAP@0.5   73.4    #11
Action Detection  J-HMDB      TACNet  Frame-mAP@0.5   65.5    #7
Action Detection  UCF101-24   TACNet  Video-mAP@0.2   77.5    #8
Action Detection  UCF101-24   TACNet  Video-mAP@0.5   52.9    #7
Action Detection  UCF101-24   TACNet  Frame-mAP@0.5   72.1    #9
