Salient object detection is a task grounded in the visual attention mechanism: algorithms aim to detect the objects or regions in a scene or image that attract more attention than their surroundings.
(Image credit: Attentive Feedback Network for Boundary-Aware Salient Object Detection)
Salient object detection in complex scenes and environments is a challenging research topic.
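Predicted saliency maps are commonly scored against pixel-wise ground-truth masks. A minimal sketch of the mean absolute error (MAE), one of the standard SOD metrics, assuming both maps are NumPy arrays with values in [0, 1]:

```python
import numpy as np

def mae(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute error between a predicted saliency map and a
    ground-truth mask, both with values in [0, 1]. Lower is better."""
    return float(np.abs(pred.astype(np.float64) - gt.astype(np.float64)).mean())

# Toy 4x4 mask: a perfect prediction scores 0.0, an inverted one scores 1.0.
gt = np.array([[0, 0, 1, 1]] * 4, dtype=np.float64)
print(mae(gt, gt))        # 0.0
print(mae(1.0 - gt, gt))  # 1.0
```

Other common SOD metrics (F-measure, S-measure, E-measure) compare region and structure similarity rather than raw per-pixel error.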
Co-salient object detection (CoSOD) is a newly emerging and rapidly growing branch of salient object detection (SOD), which aims to detect the co-occurring salient objects in multiple images.
This paper proposes a novel joint learning and densely-cooperative fusion (JL-DCF) architecture for RGB-D salient object detection.
In this paper, we propose a weakly-supervised salient object detection model to learn saliency from such annotations.
Though remarkable progress has been achieved, we observe that the closer a pixel is to the edge, the harder it is to predict, because edge pixels have a highly imbalanced distribution.
To address this, many recent RGB-D networks adopt the depth map as an independent input and fuse its features with the RGB information.
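The simplest form of such depth-as-input fusion is early fusion, where the depth map is stacked as an extra channel alongside the RGB image before any feature extraction. A minimal sketch with NumPy (the array shapes are illustrative, not from any specific paper):

```python
import numpy as np

# Hypothetical H x W x 3 RGB image and H x W single-channel depth map.
h, w = 32, 32
rgb = np.random.rand(h, w, 3)
depth = np.random.rand(h, w)

# Early fusion: treat depth as a fourth input channel.
fused = np.concatenate([rgb, depth[..., None]], axis=-1)
print(fused.shape)  # (32, 32, 4)
```

The papers summarized here typically go further, extracting features from each modality separately and fusing them at intermediate layers (middle or late fusion) rather than at the input.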
Our network relies on an encoder-decoder for feature extraction and fusion, and we design a multi-interaction block (MIB) to model the interactions among different modalities, different layers, and local-global information.
To better explore salient information in both foreground and background regions, this paper proposes a Bilateral Attention Network (BiANet) for the RGB-D SOD task.
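The core idea of attending to foreground and background separately can be sketched with a pair of complementary attention maps: a sigmoid-activated saliency logit weights the foreground stream, and its complement weights the background stream. This is a simplified NumPy illustration of that bilateral weighting, not BiANet's actual implementation:

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical H x W x C feature map and a single-channel saliency logit map.
feat = np.random.randn(8, 8, 16)
logits = np.random.randn(8, 8, 1)

fg_att = sigmoid(logits)   # foreground attention in (0, 1)
bg_att = 1.0 - fg_att      # complementary background attention
fg_feat = feat * fg_att    # foreground-attended features
bg_feat = feat * bg_att    # background-attended features

# The two streams sum back to the original features, so no signal is discarded;
# each stream can then be refined by its own branch.
recombined = fg_feat + bg_feat
print(np.allclose(recombined, feat))  # True
```

Because the attention maps are complementary, the background branch explicitly receives the regions the foreground branch suppresses, which is the motivation for exploring both.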