Visual Saliency Transformer

ICCV 2021  ·  Nian Liu, Ni Zhang, Kaiyuan Wan, Ling Shao, Junwei Han

Existing state-of-the-art saliency detection methods rely heavily on CNN-based architectures. Alternatively, we rethink this task from a convolution-free sequence-to-sequence perspective and predict saliency by modeling long-range dependencies, which cannot be achieved by convolution. Specifically, we develop a novel unified model based on a pure transformer, namely, Visual Saliency Transformer (VST), for both RGB and RGB-D salient object detection (SOD). It takes image patches as inputs and leverages the transformer to propagate global contexts among image patches. Unlike the conventional architecture used in Vision Transformer (ViT), we leverage multi-level token fusion and propose a new token upsampling method under the transformer framework to obtain high-resolution detection results. We also develop a token-based multi-task decoder to simultaneously perform saliency and boundary detection by introducing task-related tokens and a novel patch-task-attention mechanism. Experimental results show that our model outperforms existing methods on both RGB and RGB-D SOD benchmark datasets. Most importantly, our whole framework not only provides a new perspective for the SOD field but also shows a new paradigm for transformer-based dense prediction models. Code is available at https://github.com/nnizhang/VST.
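The sketch below is a minimal, unofficial illustration of the token-based multi-task decoder idea described in the abstract: learnable saliency and boundary tokens are appended to the patch token sequence, mixed with the patches through self-attention, and then read out with a patch-task attention that produces one score per patch for each task. Module and variable names (TaskTokenDecoder, num_heads, depth, etc.) are assumptions for illustration, not the paper's API, and the multi-level token fusion and token upsampling steps are omitted; see the official repository for the actual implementation.

```python
# Minimal sketch (not the official VST code) of task-related tokens plus a
# patch-task attention read-out, written in PyTorch.
import torch
import torch.nn as nn


class TaskTokenDecoder(nn.Module):
    """Toy decoder: patch tokens + learnable saliency/boundary tokens."""

    def __init__(self, dim=384, num_heads=6, depth=2):
        super().__init__()
        # One learnable token per task (saliency, boundary).
        self.saliency_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.boundary_token = nn.Parameter(torch.zeros(1, 1, dim))
        # A small transformer stack lets task tokens and patch tokens
        # exchange information through self-attention.
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, dim_feedforward=4 * dim,
            batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        # Projections used by the patch-task attention read-out.
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)

    def forward(self, patch_tokens):
        # patch_tokens: (B, N, dim), where N is the number of image patches.
        b = patch_tokens.size(0)
        sal = self.saliency_token.expand(b, -1, -1)
        bnd = self.boundary_token.expand(b, -1, -1)
        tokens = torch.cat([sal, bnd, patch_tokens], dim=1)
        tokens = self.blocks(tokens)
        sal_tok, bnd_tok, patches = tokens[:, :1], tokens[:, 1:2], tokens[:, 2:]

        # Patch-task attention read-out: each patch token queries a task token,
        # yielding one score per patch that can be reshaped into a dense map.
        q = self.q_proj(patches)                       # (B, N, dim)
        k_sal = self.k_proj(sal_tok)                   # (B, 1, dim)
        k_bnd = self.k_proj(bnd_tok)                   # (B, 1, dim)
        scale = q.size(-1) ** -0.5
        sal_map = torch.sigmoid((q @ k_sal.transpose(1, 2)) * scale)  # (B, N, 1)
        bnd_map = torch.sigmoid((q @ k_bnd.transpose(1, 2)) * scale)  # (B, N, 1)
        return sal_map.squeeze(-1), bnd_map.squeeze(-1)


if __name__ == "__main__":
    x = torch.randn(2, 196, 384)   # e.g. 14x14 patch tokens from a ViT backbone
    sal, bnd = TaskTokenDecoder()(x)
    print(sal.shape, bnd.shape)    # torch.Size([2, 196]) torch.Size([2, 196])
```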

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| RGB-D Salient Object Detection | NJUD | VST | S-Measure | 0.922 | # 1 |
| RGB-D Salient Object Detection | NLPR | VST | S-Measure | 0.932 | # 13 |
| Thermal Image Segmentation | RGB-T-Glass-Segmentation | VST | MAE | 0.044 | # 7 |
| RGB-D Salient Object Detection | SIP | VST | S-Measure | 90.4 | # 2 |
| RGB-D Salient Object Detection | SIP | VST | max E-Measure | 94.4 | # 3 |
| RGB-D Salient Object Detection | SIP | VST | max F-Measure | 91.5 | # 2 |
| RGB-D Salient Object Detection | SIP | VST | Average MAE | 0.040 | # 2 |