FANet: Quality-Aware Feature Aggregation Network for Robust RGB-T Tracking

24 Nov 2018  ·  Yabin Zhu, Chenglong Li, Bin Luo, Jin Tang

This paper investigates how to perform robust visual tracking in adverse and challenging conditions using complementary visible and thermal infrared data (RGB-T tracking). We propose a novel deep network architecture, the quality-aware Feature Aggregation Network (FANet), for robust RGB-T tracking. Unlike existing RGB-T trackers, FANet aggregates hierarchical deep features within each modality to handle the significant appearance changes caused by deformation, low illumination, background clutter, and occlusion. In particular, we employ max pooling to transform these hierarchical, multi-resolution features into a uniform space with the same resolution, and use 1×1 convolutions to compress the feature dimensions for more effective hierarchical feature aggregation. To model the interactions between the RGB and thermal modalities, we design an adaptive aggregation subnetwork that integrates features from the two modalities according to their reliabilities, thereby alleviating the noise introduced by low-quality sources. The whole FANet is trained in an end-to-end manner. Extensive experiments on large-scale benchmark datasets demonstrate highly accurate performance compared with other state-of-the-art RGB-T tracking methods.
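
For concreteness, the sketch below illustrates in PyTorch the two aggregation steps the abstract describes: adaptive max pooling plus 1×1 convolutions to bring multi-resolution features into a uniform space, and a reliability-weighted fusion of the RGB and thermal streams. All module names, channel sizes, and the specific scalar-weighting scheme here are illustrative assumptions for exposition, not the authors' released implementation.

# A minimal sketch of the two aggregation ideas in FANet-style RGB-T fusion.
# Module names, channel sizes, and the weighting scheme are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalAggregation(nn.Module):
    # Pools multi-resolution backbone features to one spatial size,
    # compresses each level with a 1x1 convolution, then concatenates.
    def __init__(self, in_channels=(96, 256, 512), out_channels=64, out_size=3):
        super().__init__()
        self.out_size = out_size
        self.compress = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )

    def forward(self, feats):
        # feats: list of [B, C_i, H_i, W_i] tensors from different layers.
        pooled = [F.adaptive_max_pool2d(f, self.out_size) for f in feats]
        compressed = [conv(p) for conv, p in zip(self.compress, pooled)]
        return torch.cat(compressed, dim=1)  # [B, len(feats)*out_channels, s, s]

class QualityAwareFusion(nn.Module):
    # Predicts one reliability score per modality and fuses the two
    # aggregated feature maps as a softmax-weighted combination.
    def __init__(self, channels):
        super().__init__()
        self.quality = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, 1),
        )

    def forward(self, rgb_feat, t_feat):
        scores = torch.cat([self.quality(rgb_feat), self.quality(t_feat)], dim=1)
        w = torch.softmax(scores, dim=1)  # [B, 2], weights sum to one
        w_rgb = w[:, 0].view(-1, 1, 1, 1)
        w_t = w[:, 1].view(-1, 1, 1, 1)
        return w_rgb * rgb_feat + w_t * t_feat

With three backbone levels and out_channels=64, the aggregated map passed to QualityAwareFusion has 192 channels; the softmax keeps both modality weights positive and summing to one, so a low-quality stream is down-weighted rather than discarded outright.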
