Salient object detection is a task based on the visual attention mechanism, in which algorithms aim to detect the objects or regions in a scene or image that attract more attention than their surroundings.
(Image credit: Attentive Feedback Network for Boundary-Aware Salient Object Detection)
We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely used datasets, e.g., CIFAR-100 and ImageNet.
#3 best model for Salient Object Detection on ECSSD
In this paper, we design a simple yet powerful deep network architecture, U$^2$-Net, for salient object detection (SOD).
We further design a feature aggregation module (FAM) that fuses the coarse-level semantic information with the fine-level features from the top-down pathway.
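The fusion idea above can be sketched in a few lines: upsample the low-resolution semantic features to the resolution of the detail features, then merge them element-wise. This is a minimal NumPy illustration of the general coarse-to-fine fusion pattern, not the paper's actual FAM (which operates inside a trained network); the function names are hypothetical.

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def feature_aggregation(coarse, fine):
    """Illustrative FAM-style fusion: bring the coarse semantic map to the
    fine map's resolution, then merge by element-wise addition (a real
    module would typically follow this with learned convolutions)."""
    return upsample2x(coarse) + fine

coarse = np.ones((8, 4, 4))   # low-resolution semantic features
fine = np.zeros((8, 8, 8))    # high-resolution detail features
fused = feature_aggregation(coarse, fine)
print(fused.shape)  # (8, 8, 8)
```

Element-wise addition is only one common merge choice; concatenation followed by a 1x1 convolution is another.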
SOTA for Salient Object Detection on SOD
As an important problem in computer vision, salient object detection (SOD) from images has been attracting an increasing amount of research effort over the years.
In this paper, we propose a novel Cascaded Partial Decoder (CPD) framework for fast and accurate salient object detection.
SOTA for Salient Object Detection on SBU
This paper proposes a fast video salient object detection model, based on a novel recurrent network architecture, named Pyramid Dilated Bidirectional ConvLSTM (PDB-ConvLSTM).
Recent deep-learning-based salient object detection methods built upon Fully Convolutional Networks (FCNs) achieve gratifying performance.
Furthermore, unlike binary cross entropy, the proposed PPA loss does not treat all pixels equally; it incorporates the local structure information around each pixel to guide the network to focus more on local details.
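The core idea of non-uniform pixel weighting can be illustrated with a weighted binary cross-entropy. This sketch is not the exact PPA loss; the weight map here simply emphasizes foreground pixels for illustration, whereas the actual loss derives weights from local structure around each pixel.

```python
import numpy as np

def weighted_bce(pred, target, weight):
    """Binary cross-entropy where each pixel's contribution is scaled by a
    weight map, so some pixels (e.g. near object boundaries) count more.
    Sketch only; the PPA loss's weighting scheme differs in detail."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return float((weight * bce).sum() / weight.sum())

target = np.zeros((4, 4)); target[1:3, 1:3] = 1.0  # toy ground-truth mask
weight = 1.0 + 4.0 * target                         # illustrative weight map
loss_good = weighted_bce(target, target, weight)    # near-perfect prediction
loss_bad = weighted_bce(1.0 - target, target, weight)
print(loss_good, loss_bad)
```

With uniform weights this reduces to plain BCE; the weight map is what lets the loss prioritize structurally important pixels.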
In this paper, we develop a multi-task, motion-guided video salient object detection network that learns two sub-tasks with two sub-networks: one for salient object detection in still images, and the other for motion saliency detection in optical flow images.
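The two-branch structure described above can be sketched as follows. Each branch is a stand-in for a trained sub-network, and the fixed convex combination at the end is purely illustrative; the actual model learns how to combine appearance and motion cues.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def appearance_branch(frame):
    # stand-in for the still-image saliency sub-network (hypothetical)
    return sigmoid(frame)

def motion_branch(flow):
    # stand-in for the motion-saliency sub-network on optical flow (hypothetical)
    return sigmoid(np.abs(flow))

def video_saliency(frame, flow, alpha=0.5):
    """Fuse appearance and motion saliency maps; a fixed convex
    combination is used here for illustration only."""
    return alpha * appearance_branch(frame) + (1 - alpha) * motion_branch(flow)

rng = np.random.default_rng(0)
frame = rng.standard_normal((64, 64))  # toy frame features
flow = rng.standard_normal((64, 64))   # toy optical-flow magnitudes
sal = video_saliency(frame, flow)
print(sal.shape)
```

Since both branches output values in (0, 1), the fused map stays a valid saliency map in (0, 1) for any alpha in [0, 1].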