RGB-D Salient Object Detection

56 papers with code • 8 benchmarks • 5 datasets

RGB-D salient object detection (SOD) aims to distinguish the most visually distinctive objects or regions in a scene from given RGB and depth data. It has a wide range of applications, including video/image segmentation, object recognition, visual tracking, foreground map evaluation, image retrieval, content-aware image editing, information discovery, image synthesis, and weakly supervised semantic segmentation. Depth information plays an important complementary role in finding salient objects. Online benchmark: http://dpfan.net/d3netbenchmark.
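The complementary role of depth can be illustrated with a minimal fusion sketch. This is an illustrative scheme only, not any specific published model: a sigmoid gate derived from the depth features spatially modulates the RGB features before saliency prediction, so regions that stand out in depth get emphasized.

```python
import numpy as np

def depth_gated_fusion(rgb_feat, depth_feat):
    """Fuse RGB and depth feature maps of shape (H, W, C).

    A sigmoid over the depth features acts as a spatial attention
    gate, letting depth emphasize regions that pop out from the
    background. (Hypothetical illustration, not a published model.)
    """
    gate = 1.0 / (1.0 + np.exp(-depth_feat))  # sigmoid gate in (0, 1)
    fused = rgb_feat * gate + rgb_feat        # residual depth gating
    return fused

# Toy example: 4x4 feature maps with 2 channels
rgb = np.random.rand(4, 4, 2)
depth = np.random.rand(4, 4, 2)
out = depth_gated_fusion(rgb, depth)
print(out.shape)  # (4, 4, 2)
```

Real models fuse learned multi-scale features rather than raw maps, but the gating idea is the same: depth contributes a complementary cue rather than replacing RGB appearance.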

(Image credit: Rethinking RGB-D Salient Object Detection: Models, Data Sets, and Large-Scale Benchmarks, TNNLS 2020)


Latest papers with no code

A Saliency Enhanced Feature Fusion based multiscale RGB-D Salient Object Detection Network

no code yet • 22 Jan 2024

SEFF uses saliency maps from neighboring scales to enhance the features to be fused, producing more representative fused features.

Decomposed Guided Dynamic Filters for Efficient RGB-Guided Depth Completion

no code yet • 5 Sep 2023

The decomposed filters not only retain the favorable properties of guided dynamic filters, being content-dependent and spatially variant, but also reduce model parameters and hardware costs, as the learned adaptors are decoupled from the number of feature channels.

HODINet: High-Order Discrepant Interaction Network for RGB-D Salient Object Detection

no code yet • 3 Jul 2023

Specifically, we design a high-order spatial fusion (HOSF) module and a high-order channel fusion (HOCF) module to fuse features of the first two and the last two stages, respectively.

RXFOOD: Plug-in RGB-X Fusion for Object of Interest Detection

no code yet • 22 Jun 2023

The emergence of different sensors (Near-Infrared, Depth, etc.)

Hierarchical Cross-modal Transformer for RGB-D Salient Object Detection

no code yet • 16 Feb 2023

Most existing RGB-D salient object detection (SOD) methods follow a CNN-based paradigm, which cannot model long-range dependencies across space and modalities due to the inherent locality of CNNs.

HiDAnet: RGB-D Salient Object Detection via Hierarchical Depth Awareness

no code yet • 18 Jan 2023

In this paper, from a new perspective, we propose a novel Hierarchical Depth Awareness network (HiDAnet) for RGB-D saliency detection.

SiaTrans: Siamese Transformer Network for RGB-D Salient Object Detection with Depth Image Classification

no code yet • 9 Jul 2022

The Transformer-based cross-modality fusion module (CMF) can effectively fuse RGB and depth information.
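As a hedged sketch of what Transformer-based cross-modality fusion typically computes (a generic scaled dot-product attention, not SiaTrans's actual CMF; names and shapes are illustrative): RGB tokens form the queries and attend over depth tokens as keys and values, so each RGB location gathers depth context from the whole image.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(rgb_tokens, depth_tokens):
    """RGB queries attend to depth keys/values.

    rgb_tokens:   (N, d) flattened RGB feature tokens
    depth_tokens: (M, d) flattened depth feature tokens
    Returns (N, d) RGB tokens enriched with depth context.
    (Generic attention sketch, not the paper's exact CMF.)
    """
    d = rgb_tokens.shape[-1]
    scores = rgb_tokens @ depth_tokens.T / np.sqrt(d)  # (N, M) affinities
    attn = softmax(scores, axis=-1)                    # rows sum to 1
    return attn @ depth_tokens                         # (N, d) depth context

rgb_tok = np.random.rand(16, 8)   # 16 RGB tokens, dim 8
dep_tok = np.random.rand(16, 8)   # 16 depth tokens, dim 8
fused = cross_modal_attention(rgb_tok, dep_tok)
print(fused.shape)  # (16, 8)
```

In practice the queries, keys, and values pass through learned projections and the output is combined with the RGB stream, but the long-range, modality-crossing interaction above is the core mechanism such modules add over local convolution.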

Dynamic Message Propagation Network for RGB-D Salient Object Detection

no code yet • 20 Jun 2022

This paper presents a novel deep neural network framework for RGB-D salient object detection. It controls message passing between RGB images and depth maps at the feature level, and explores long-range semantic contexts and geometric information in both RGB and depth features to infer salient objects.

Dual Swin-Transformer based Mutual Interactive Network for RGB-D Salient Object Detection

no code yet • 7 Jun 2022

To mitigate the issue of inaccurate depth maps, we feed early-stage RGB features into a skip convolution module, providing additional guidance from the RGB modality for the final saliency prediction.

GroupTransNet: Group Transformer Network for RGB-D Salient Object Detection

no code yet • 21 Mar 2022

Intermediate features are first fused with features from different layers and then processed by several transformers in multiple groups, which unifies and relates the feature sizes across scales while sharing weights among the features within each group.