Short-term anchor linking and long-term self-guided attention for video object detection

18 Apr 2021 · Daniel Cores, Víctor M. Brea, Manuel Mucientes

We present a new network architecture that exploits the spatio-temporal information available in videos to boost object detection precision. First, box features are associated and aggregated by linking proposals that come from the same anchor box in nearby frames. Then, we design a new attention module that aggregates these short-term enhanced box features to exploit long-term spatio-temporal information. This module is the first in the video object detection domain to take advantage of geometric features over the long term. Finally, a spatio-temporal double head is fed with both spatial information from the reference frame and the aggregated information that accounts for the short- and long-term temporal context. We have tested our proposal on five video object detection datasets with very different characteristics, to prove its robustness across a wide range of scenarios. Non-parametric statistical tests show that our approach outperforms the state-of-the-art. Our code is available at https://github.com/daniel-cores/SLTnet.

Results from the Paper


Task                     Dataset          Model             Metric   Value   Global Rank
Video Object Detection   ImageNet VID     SLTnet FPN-X101   mAP      82.4    #21
Video Object Detection   USC-GRAD-STDdb   SLTnet FPN-X101   AP 0.5   44.9    #1
Video Object Detection   USC-GRAD-STDdb   SLTnet FPN-X101   AP       16.6    #1
