Learning Target Candidate Association to Keep Track of What Not to Track

The presence of objects that are confusingly similar to the tracked target poses a fundamental challenge in appearance-based visual tracking. Such distractor objects are easily misclassified as the target itself, leading to eventual tracking failure. While most methods strive to suppress distractors through more powerful appearance models, we take an alternative approach. We propose to keep track of distractor objects in order to continue tracking the target. To this end, we introduce a learned association network that allows us to propagate the identities of all target candidates from frame to frame. To tackle the lack of ground-truth correspondences between distractor objects in visual tracking, we propose a training strategy that combines partial annotations with self-supervision. We conduct comprehensive experimental validation and analysis of our approach on several challenging datasets. Our tracker sets a new state-of-the-art on six benchmarks, achieving an AUC score of 67.1% on LaSOT and a +5.8% absolute gain on the OxUvA long-term dataset.
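To illustrate the general idea of frame-to-frame candidate association, here is a minimal sketch using greedy matching on a similarity score. This is a hypothetical toy example, not the paper's learned association network: the function names (`greedy_associate`, `center_sim`), the similarity measure (distance between candidate centers), and the threshold are all illustrative assumptions.

```python
# Hypothetical sketch of frame-to-frame candidate association (NOT the
# paper's learned network): greedily match candidates across frames by a
# similarity score, so distractor identities persist from frame to frame.

def greedy_associate(prev, curr, sim, threshold=0.5):
    """Match candidates in `prev` (frame t) to candidates in `curr` (frame t+1).

    prev, curr: lists of candidate descriptors (here: (x, y) centers).
    sim: function scoring a pair's similarity in [0, 1].
    Returns a dict mapping prev index -> curr index; unmatched candidates
    (e.g. occluded or newly appearing objects) are simply left out.
    """
    pairs = sorted(
        ((sim(p, c), i, j) for i, p in enumerate(prev) for j, c in enumerate(curr)),
        reverse=True,
    )
    matches, used_prev, used_curr = {}, set(), set()
    for score, i, j in pairs:
        if score < threshold:
            break  # remaining pairs are even weaker
        if i in used_prev or j in used_curr:
            continue  # enforce one-to-one assignment
        matches[i] = j
        used_prev.add(i)
        used_curr.add(j)
    return matches

def center_sim(p, c, scale=50.0):
    # Similarity decays linearly with Euclidean distance between centers.
    d = ((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5
    return max(0.0, 1.0 - d / scale)

prev = [(10, 10), (100, 40)]   # target + one distractor in frame t
curr = [(12, 11), (98, 44)]    # the same two objects in frame t+1
print(greedy_associate(prev, curr, center_sim))  # {0: 0, 1: 1}
```

In the paper, the hand-crafted similarity above is replaced by a learned network trained with partial annotations and self-supervision; the sketch only conveys why maintaining identities of all candidates, including distractors, helps avoid misclassifying a distractor as the target.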

Published at ICCV 2021 (PDF and abstract available).

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Object Tracking | COESOT | KeepTrack | Success Rate | 59.6 | #9 |
| Object Tracking | COESOT | KeepTrack | Precision Rate | 66.1 | #12 |
| Visual Object Tracking | LaSOT | KeepTrack | AUC | 67.1 | #21 |
| Visual Object Tracking | LaSOT | KeepTrack | Normalized Precision | 77.2 | #16 |
| Visual Object Tracking | LaSOT | KeepTrack | Precision | 70.2 | #17 |
| Visual Object Tracking | LaSOT-ext | KeepTrack | AUC | 48.2 | #9 |
| Visual Object Tracking | OTB-2015 | KeepTrack | AUC | 0.709 | #4 |
| Visual Object Tracking | UAV123 | KeepTrack | AUC | 0.697 | #7 |

Methods


No methods listed for this paper.