RGB salient object detection is a task based on the visual attention mechanism, in which algorithms aim to detect objects or regions that attract more attention than the surrounding areas in a scene or RGB image.
(Image credit: Attentive Feedback Network for Boundary-Aware Salient Object Detection)
We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely used datasets, e.g., CIFAR-100 and ImageNet.
Ranked #4 on RGB Salient Object Detection on ECSSD
In this paper, we design a simple yet powerful deep network architecture, U$^2$-Net, for salient object detection (SOD).
In this paper, we propose a predict-refine architecture, BASNet, and a new hybrid loss for boundary-aware salient object detection.
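A hybrid loss of this kind combines a pixel-level term (binary cross-entropy) with a region-level term (soft IoU). The sketch below is illustrative only and omits the structural (SSIM) component of the full hybrid loss; the function name and the example arrays are made up for the demonstration.

```python
import numpy as np

def hybrid_loss(pred, target, eps=1e-7):
    """Illustrative hybrid loss: pixel-level BCE plus a region-level soft IoU term.

    pred, target: float arrays of the same shape with values in [0, 1].
    This is a sketch; the full hybrid loss also includes an SSIM term.
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    # Pixel-wise binary cross-entropy, averaged over the saliency map.
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    # Soft IoU loss: 1 - intersection / union, computed on soft predictions.
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    iou = 1.0 - inter / (union + eps)
    return bce + iou

# Hypothetical 2x2 prediction and ground-truth mask.
pred = np.array([[0.9, 0.1], [0.8, 0.2]])
target = np.array([[1.0, 0.0], [1.0, 0.0]])
loss = hybrid_loss(pred, target)
```

Because the IoU term is computed over whole regions rather than independent pixels, it penalizes fuzzy object boundaries more strongly than BCE alone.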
Ranked #2 on RGB Salient Object Detection on ECSSD
We further design a feature aggregation module (FAM) to make the coarse-level semantic information well fused with the fine-level features from the top-down pathway.
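The core idea of such an aggregation module can be sketched as pooling the fused feature map at several scales, upsampling each pooled map back, and summing with the identity branch, so that coarse context and fine detail mix at every position. The function names below are hypothetical, and the convolution layers that would follow each branch are omitted.

```python
import numpy as np

def avg_pool_upsample(x, k):
    """Average-pool a 2-D map with kernel/stride k, then nearest-upsample back."""
    h, w = x.shape
    assert h % k == 0 and w % k == 0, "sketch assumes k divides the map size"
    pooled = x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return np.repeat(np.repeat(pooled, k, axis=0), k, axis=1)

def feature_aggregation(fused, scales=(2, 4)):
    """Sketch of a feature aggregation module: sum the identity branch with
    avg-pooled-then-upsampled versions of the fused map (convolutions omitted)."""
    out = fused.astype(float).copy()
    for k in scales:
        out += avg_pool_upsample(fused, k)
    return out
```

The multi-scale pooling branches smooth the map at different receptive fields, so the aggregated output carries both coarse semantic context and the original fine-level response.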
Ranked #1 on RGB Salient Object Detection on PASCAL-S
As an essential problem in computer vision, salient object detection (SOD) has attracted an increasing amount of research effort over the years.
In the second step, we integrate the local edge information and global location information to obtain the salient edge features.
Ranked #2 on Camouflaged Object Segmentation on COD
In this paper, we propose a novel Cascaded Partial Decoder (CPD) framework for fast and accurate salient object detection.
Ranked #1 on RGB Salient Object Detection on ISTD
We formulate the proposed PiCANet in both global and local forms to attend to global and local contexts, respectively.
Ranked #5 on RGB Salient Object Detection on DUTS-TE
This is the first work that explicitly emphasizes the challenge of saliency shift, i.e., the video salient object(s) may dynamically change.
Ranked #1 on Video Salient Object Detection on FBMS-59 (using extra training data)
Furthermore, unlike binary cross-entropy, the proposed PPA loss does not treat all pixels equally; it synthesizes the local structure information around each pixel to guide the network to focus more on local details.
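One way to realize such pixel-wise weighting is to weight each pixel's BCE term by how much its ground-truth value differs from its local neighborhood mean, so boundary pixels (where local structure changes) dominate the loss. The sketch below is a minimal illustration under that assumption; the function names, window size, and weighting factor `lam` are made up, not the paper's exact formulation.

```python
import numpy as np

def local_mean(x, k=3):
    """Mean over a k x k neighborhood (zero-padded), computed naively."""
    h, w = x.shape
    p = k // 2
    padded = np.pad(x, p)
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def weighted_bce(pred, target, lam=5.0, eps=1e-7):
    """Sketch of a structure-aware weighted BCE: pixels whose ground-truth value
    differs from the local neighborhood mean (i.e. boundary pixels) get larger
    weights, so the loss focuses on local detail rather than treating all
    pixels equally."""
    weight = 1.0 + lam * np.abs(local_mean(target) - target)
    pred = np.clip(pred, eps, 1.0 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return np.sum(weight * bce) / np.sum(weight)
```

With a uniform prediction the weighted loss reduces to plain BCE, but once the prediction errs near object boundaries the larger weights there increase the penalty.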