Actor-Context-Actor Relation Network for Spatio-Temporal Action Localization

Localizing persons and recognizing their actions from videos is a challenging task towards high-level video understanding. Recent advances have been achieved by modeling direct pairwise relations between entities. In this paper, we take one step further: we not only model direct relations between pairs of entities, but also take into account indirect higher-order relations established upon multiple elements. We propose to explicitly model the Actor-Context-Actor Relation, which is the relation between two actors based on their interactions with the context. To this end, we design an Actor-Context-Actor Relation Network (ACAR-Net), which builds upon a novel High-order Relation Reasoning Operator and an Actor-Context Feature Bank to enable indirect relation reasoning for spatio-temporal action localization. Experiments on the AVA and UCF101-24 datasets show the advantages of modeling actor-context-actor relations, and visualization of attention maps further verifies that our model is capable of finding relevant higher-order relations to support action detection. Notably, our method ranks first in the AVA-Kinetics action localization task of the ActivityNet Challenge 2020, outperforming other entries by a significant margin (+6.71 mAP). Training code and models will be available at https://github.com/Siyu-C/ACAR-Net.
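
The core idea can be illustrated with a small sketch: first form first-order actor-context relations by pairing each detected actor's feature with every spatial position of the video feature map, then reason over these relation maps so that two actors become related through the context positions they both interact with. The PyTorch-style code below is a minimal sketch under simplified assumptions (a single pooled context feature map, a hypothetical `ACARSketch` module, and no Actor-Context Feature Bank); it is not the authors' released implementation, which is available at the repository linked above.

```python
# Minimal sketch (not the authors' implementation) of actor-context-actor
# relation reasoning, assuming PyTorch and simplified feature shapes.
import torch
import torch.nn as nn

class ACARSketch(nn.Module):
    """First-order actor-context relations, followed by a second-order step
    that relates actors to each other through shared context positions."""
    def __init__(self, dim):
        super().__init__()
        # first-order: fuse each actor with every spatial context position
        self.first_order = nn.Sequential(
            nn.Conv2d(2 * dim, dim, kernel_size=1), nn.ReLU(inplace=True))
        # second-order: attention over actors at each context position
        self.q = nn.Conv2d(dim, dim, 1)
        self.k = nn.Conv2d(dim, dim, 1)
        self.v = nn.Conv2d(dim, dim, 1)

    def forward(self, actor_feats, context_feats):
        # actor_feats: (N, C) pooled RoI features for N detected actors
        # context_feats: (C, H, W) spatio-temporally pooled video features
        N, C = actor_feats.shape
        _, H, W = context_feats.shape
        ctx = context_feats.unsqueeze(0).expand(N, C, H, W)
        act = actor_feats.view(N, C, 1, 1).expand(N, C, H, W)
        # first-order actor-context relation maps, one per actor
        rel = self.first_order(torch.cat([act, ctx], dim=1))      # (N, C, H, W)
        # second-order: at each location, actors attend to each other, so
        # actor i receives information about actor j via shared context
        q = self.q(rel).flatten(2)                                 # (N, C, HW)
        k = self.k(rel).flatten(2)
        v = self.v(rel).flatten(2)
        attn = torch.einsum('ict,jct->tij', q, k) / C ** 0.5       # (HW, N, N)
        attn = attn.softmax(dim=-1)
        out = torch.einsum('tij,jct->ict', attn, v)                # (N, C, HW)
        # pool over context positions: one relational feature per actor
        return out.mean(dim=-1)                                    # (N, C)
```

In a full pipeline, the resulting per-actor relational features would be concatenated with the original actor features before action classification; the paper additionally accumulates such features over time in an Actor-Context Feature Bank, which this sketch omits.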


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Spatio-Temporal Action Localization | AVA-Kinetics | ACAR (multi-scale, ensemble) | val mAP | 40.49 | #5 |
| Spatio-Temporal Action Localization | AVA-Kinetics | ACAR (multi-scale, ensemble) | test mAP | 39.62 | #1 |
| Action Recognition | AVA v2.1 | ACAR-Net, SlowFast R-101 (Kinetics-400 pretraining) | mAP (Val) | 30.0 | #2 |
| Action Recognition | AVA v2.2 | ACAR-Net, SlowFast R-101 (Kinetics-700 pretraining) | mAP | 31.72 | #24 |

Results from Other Papers


| Task | Dataset | Model | Metric Name | Metric Value | Rank |
|---|---|---|---|---|---|
| Spatio-Temporal Action Localization | AVA-Kinetics | ACAR (multi-scale, R-101, 8 × 8) | val mAP | 36.36 | #7 |

Methods


No methods listed for this paper.