1 code implementation • ECCV 2020 • Miao Zhang, Sun Xiao Fei, Jie Liu, Shuang Xu, Yongri Piao, Huchuan Lu
In this paper, we propose an asymmetric two-stream architecture that takes into account the inherent differences between RGB and depth data for saliency detection.
Ranked #19 on Thermal Image Segmentation on RGB-T-Glass-Segmentation
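The asymmetric idea above can be schematized in plain Python: the RGB stream is deeper than the depth stream, reflecting that RGB carries richer appearance information. All layer counts, weights, and the elementwise fusion rule below are illustrative assumptions, not the paper's exact design.

```python
# Toy schematic of an asymmetric two-stream design for RGB-D saliency.
# Layer counts, weights, and the fusion rule are illustrative only.

def layer(x, w):
    """One toy 'layer': scale-and-clip, standing in for conv + nonlinearity."""
    return [max(0.0, v * w) for v in x]

def rgb_stream(x):
    # Deeper stream: three toy layers for the richer RGB modality.
    for w in (0.9, 1.1, 0.8):
        x = layer(x, w)
    return x

def depth_stream(x):
    # Lightweight stream: a single toy layer for the depth modality.
    return layer(x, 1.2)

def fuse(rgb_feat, depth_feat):
    # Elementwise fusion of the two streams (an assumed, simple choice).
    return [r + d for r, d in zip(rgb_feat, depth_feat)]

rgb = [0.5, -0.2, 0.8]
depth = [0.3, 0.6, 0.1]
saliency = fuse(rgb_stream(rgb), depth_stream(depth))
```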
1 code implementation • ICCV 2021 • Yongri Piao, Jian Wang, Miao Zhang, Huchuan Lu
The multiple accurate cues from multiple DFs are then simultaneously propagated to the saliency network with a multi-guidance loss.
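A multi-guidance loss of the kind described can be sketched as one prediction supervised simultaneously by several pseudo-label cues, with the total loss the weighted sum of per-cue terms. The binary-cross-entropy choice and the uniform weights are assumptions for illustration.

```python
import math

# Toy sketch of a multi-guidance loss: one predicted saliency map is
# supervised by multiple pseudo-label cues at once; the total loss is
# the weighted sum of the per-cue terms. BCE and uniform weights are
# illustrative assumptions.

def bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy between a prediction and one cue."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def multi_guidance_loss(pred, cues, weights=None):
    weights = weights or [1.0] * len(cues)
    return sum(w * bce(pred, cue) for w, cue in zip(weights, cues))

pred = [0.9, 0.2, 0.7]
cues = [[1, 0, 1], [1, 0, 0]]   # two pseudo-label maps from different cues
loss = multi_guidance_loss(pred, cues)
```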
1 code implementation • NeurIPS 2021 • Jingjing Li, Wei Ji, Qi Bi, Cheng Yan, Miao Zhang, Yongri Piao, Huchuan Lu, Li Cheng
As a by-product, a CapS dataset is constructed by augmenting existing benchmark training set with additional image tags and captions.
no code implementations • 4 Sep 2021 • Yongri Piao, Jian Wang, Miao Zhang, Zhengxuan Ma, Huchuan Lu
Despite the success of previous works, explorations of an effective training strategy for the saliency network, and of accurate matches between image-level annotations and salient objects, are still inadequate.
1 code implementation • CVPR 2021 • Wei Ji, Jingjing Li, Shuang Yu, Miao Zhang, Yongri Piao, Shunyu Yao, Qi Bi, Kai Ma, Yefeng Zheng, Huchuan Lu, Li Cheng
Complex backgrounds and similar appearances between objects and their surroundings are generally recognized as challenging scenarios in Salient Object Detection (SOD).
Ranked #13 on Thermal Image Segmentation on RGB-T-Glass-Segmentation
1 code implementation • 13 Apr 2021 • Yongri Piao, Xinxin Ji, Miao Zhang, Yukun Zhang
We first excavate the internal spatial correlation by designing a context reasoning unit which separately extracts comprehensive contextual information from the focal stack and RGB images.
no code implementations • 13 Apr 2021 • Yongri Piao, Yukun Zhang, Miao Zhang, Xinxin Ji
Focus-based methods have shown promising results for the task of depth estimation.
1 code implementation • ICCV 2021 • Miao Zhang, Jie Liu, Yifei Wang, Yongri Piao, Shunyu Yao, Wei Ji, Jingjing Li, Huchuan Lu, Zhongxuan Luo
Our bidirectional dynamic fusion strategy encourages the interaction of spatial and temporal information in a dynamic manner.
Ranked #12 on Video Polyp Segmentation on SUN-SEG-Easy (Unseen)
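The bidirectional dynamic fusion described above can be sketched as spatial and temporal features gating each other, with gates computed at run time from the features themselves (the "dynamic" part). The gate form and the sigmoid choice are assumptions, not the paper's exact design.

```python
import math

# Toy sketch of bidirectional dynamic fusion: each branch modulates
# the other with input-dependent gates. Gate form is an assumption.

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def dynamic_fuse(spatial, temporal):
    # Spatial gates temporal, and temporal gates spatial, symmetrically.
    s2t = [t * sigmoid(s) for s, t in zip(spatial, temporal)]
    t2s = [s * sigmoid(t) for s, t in zip(spatial, temporal)]
    return [a + b for a, b in zip(s2t, t2s)]

spatial = [1.0, -0.5, 2.0]
temporal = [0.2, 0.8, -1.0]
fused = dynamic_fuse(spatial, temporal)
```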
no code implementations • 30 Dec 2020 • Yongri Piao, Zhengkun Rong, Shuang Xu, Miao Zhang, Huchuan Lu
The success of learning-based light field saliency detection depends heavily on three questions: how a comprehensive dataset can be constructed for higher model generalizability, how high-dimensional light field data can be effectively exploited, and how a flexible model can be designed to serve both desktop computers and mobile devices.
2 code implementations • ECCV 2020 • Wei Ji, Jingjing Li, Miao Zhang, Yongri Piao, Huchuan Lu
The explicitly extracted edge information goes together with saliency to give more emphasis to the salient regions and object boundaries.
Ranked #19 on RGB-D Salient Object Detection on NJU2K
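The idea of explicitly extracted edges accompanying saliency can be sketched as boundary pixels receiving extra emphasis via an edge-weighted term added to the raw saliency score. The weighting scheme below is an illustrative assumption.

```python
# Toy sketch of edge-aware emphasis: pixels lying on extracted edges
# get their saliency boosted. The weighting form is an assumption.

def emphasize(saliency, edges, alpha=0.5):
    # alpha controls how strongly boundaries are boosted (assumed form).
    return [min(1.0, s + alpha * e * s) for s, e in zip(saliency, edges)]

sal = [0.9, 0.4, 0.1]
edge = [0.0, 1.0, 0.2]  # second pixel lies on an object boundary
out = emphasize(sal, edge)  # boundary pixel boosted from 0.4 toward 0.6
```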
1 code implementation • ECCV 2020 • Chongyi Li, Runmin Cong, Yongri Piao, Qianqian Xu, Chen Change Loy
Second, we propose an adaptive feature selection (AFS) module to select saliency-related features and suppress the inferior ones.
Ranked #8 on RGB-D Salient Object Detection on NJU2K
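An AFS-style selection step can be sketched as per-channel scores squashed to (0, 1) and used to keep high-scoring channels while suppressing low-scoring ones. The scoring rule (mean activation through a sigmoid) is an assumption for illustration, not the paper's module.

```python
import math

# Toy sketch of adaptive feature selection: reweight each channel by a
# learned-style gate; here the gate is the sigmoid of the channel's
# mean activation (an illustrative assumption).

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def select_features(channels):
    out = []
    for ch in channels:
        score = sigmoid(sum(ch) / len(ch))  # per-channel gate in (0, 1)
        out.append([v * score for v in ch])
    return out

feats = [[2.0, 3.0], [-4.0, -2.0]]  # one strong, one weak channel
kept = select_features(feats)
```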
1 code implementation • CVPR 2020 • Yongri Piao, Zhengkun Rong, Miao Zhang, Weisong Ren, Huchuan Lu
Existing state-of-the-art RGB-D salient object detection methods explore RGB-D data relying on a two-stream architecture, in which an independent subnetwork is required to process depth data.
Ranked #19 on RGB-D Salient Object Detection on NJU2K (Average MAE metric, using extra training data)
1 code implementation • NeurIPS 2019 • Miao Zhang, Jingjing Li, Wei Ji, Yongri Piao, Huchuan Lu
In this paper, we present a deep-learning-based method where a novel memory-oriented decoder is tailored for light field saliency detection.
1 code implementation • ICCV 2019 • Yongri Piao, Wei Ji, Jingjing Li, Miao Zhang, Huchuan Lu
In this work, we propose a novel depth-induced multi-scale recurrent attention network for saliency detection.
Ranked #21 on RGB-D Salient Object Detection on NJU2K (using extra training data)
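Depth-induced attention applied recurrently across scales can be sketched as a depth map producing per-location attention weights under which coarse-to-fine features are accumulated. The attention form and the recurrence below are illustrative assumptions.

```python
import math

# Toy sketch of depth-induced recurrent attention: depth yields one
# attention weight per location, and multi-scale features are refined
# recurrently (coarse -> fine) under that attention. The exact forms
# are assumptions, not the paper's network.

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def depth_attention(depth):
    return [sigmoid(d) for d in depth]

def recurrent_refine(scales, depth):
    att = depth_attention(depth)
    state = [0.0] * len(depth)
    for feat in scales:          # iterate scales coarse -> fine
        state = [a * f + s for a, f, s in zip(att, feat, state)]
    return state

scales = [[0.2, 0.1], [0.5, 0.4], [0.9, 0.3]]  # coarse-to-fine features
depth = [2.0, -2.0]                             # near vs far location
refined = recurrent_refine(scales, depth)
```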