Saliency Detection via Global Context Enhanced Feature Fusion and Edge Weighted Loss

13 Oct 2021  ·  Chaewon Park, Minhyeok Lee, MyeongAh Cho, Sangyoun Lee

UNet-based methods have shown outstanding performance in salient object detection (SOD), but are problematic in two respects. 1) Indiscriminately integrating the encoder feature, which contains spatial information for multiple objects, with the decoder feature, which contains global information of the salient object, is likely to convey unnecessary details of non-salient objects to the decoder, hindering saliency detection. 2) To deal with ambiguous object boundaries and generate accurate saliency maps, the model needs additional branches, such as an edge reconstruction branch, which increases computational cost. To address these problems, we propose a context fusion decoder network (CFDN) and a near edge weighted loss (NEWLoss) function. The CFDN creates an accurate saliency map by integrating global context information, thus suppressing the influence of unnecessary spatial information. NEWLoss accelerates learning of obscure boundaries without additional modules by generating weight maps on object boundaries. Our method is evaluated on four benchmarks and achieves state-of-the-art performance. We prove the effectiveness of the proposed method through comparative experiments.
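The core idea of NEWLoss, as described in the abstract, is to weight the pixel-wise loss more heavily near ground-truth object boundaries so that obscure edges are learned faster without an extra edge branch. The sketch below illustrates one plausible form of such a loss: a boundary-weighted binary cross-entropy. The weight values, the Chebyshev-radius band, and the helper names (`boundary_weight_map`, `new_loss`) are all assumptions for illustration, not the paper's exact formulation.

```python
import math

def boundary_weight_map(gt, radius=1, w_edge=4.0):
    """Hypothetical near-edge weight map (assumed form, not the paper's exact one).

    gt: 2D list of 0/1 ground-truth saliency labels.
    Pixels within `radius` (Chebyshev distance) of the object boundary
    receive weight 1 + w_edge; all other pixels receive weight 1.
    """
    h, w = len(gt), len(gt[0])
    # A pixel lies on the boundary if any 4-neighbour has a different label.
    boundary = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and gt[ny][nx] != gt[y][x]:
                    boundary[y][x] = True
    # Dilate the boundary by `radius` to cover the "near edge" band.
    weights = [[1.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            near = any(
                boundary[yy][xx]
                for yy in range(max(0, y - radius), min(h, y + radius + 1))
                for xx in range(max(0, x - radius), min(w, x + radius + 1))
            )
            if near:
                weights[y][x] = 1.0 + w_edge
    return weights

def new_loss(pred, gt, weights, eps=1e-7):
    """Boundary-weighted binary cross-entropy over a 2D saliency map."""
    total, norm = 0.0, 0.0
    for row_p, row_g, row_w in zip(pred, gt, weights):
        for p, g, wgt in zip(row_p, row_g, row_w):
            p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
            total += wgt * -(g * math.log(p) + (1 - g) * math.log(1 - p))
            norm += wgt
    return total / norm
```

Because the weight map depends only on the ground truth, it can be precomputed once per training image, so this scheme adds no inference-time cost, which matches the paper's stated motivation for avoiding extra branches.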


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| RGB Salient Object Detection | DUTS-TE | CFDN | MAE | 0.048 | #17 |
| RGB Salient Object Detection | DUTS-TE | CFDN | max F-measure | 0.859 | #11 |
| RGB Salient Object Detection | DUTS-TE | CFDN | S-Measure | 0.871 | #12 |
| RGB Salient Object Detection | ECSSD | CFDN | MAE | 0.033 | #7 |
| RGB Salient Object Detection | ECSSD | CFDN | F-measure | 0.951 | #3 |
| RGB Salient Object Detection | ECSSD | CFDN | S-Measure | 0.932 | #5 |
| RGB Salient Object Detection | PASCAL-S | CFDN | MAE | 0.039 | #1 |
| RGB Salient Object Detection | PASCAL-S | CFDN | F-measure | 0.891 | #3 |
| RGB Salient Object Detection | PASCAL-S | CFDN | S-Measure | 0.894 | #1 |
