1 code implementation • 16 Apr 2024 • Songtao Jiang, Tuo Zheng, Yan Zhang, Yeying Jin, Zuozhu Liu
Mixture-of-Experts Tuning (MoE-Tuning) has effectively enhanced the performance of general MLLMs with fewer parameters, yet its application in resource-limited medical settings remains largely unexplored.
2 code implementations • 6 Apr 2024 • Songtao Jiang, Yan Zhang, Chenyi Zhou, Yeying Jin, Yang Feng, Jian Wu, Zuozhu Liu
In this paper, we present a novel approach, Joint Visual and Text Prompting (VTPrompt), that employs fine-grained visual information to enhance the capability of MLLMs in VQA, especially for object-oriented perception.
no code implementations • 15 Mar 2024 • Cong Wang, Jinshan Pan, Yeying Jin, Liyan Wang, Wei Wang, Gang Fu, Wenqi Ren, Xiaochun Cao
Our designs provide a closer look at the attention mechanism and reveal that some simple operations can significantly affect the model performance.
no code implementations • 12 Mar 2024 • Beibei Lin, Yeying Jin, Wending Yan, Wei Ye, Yuan Yuan, Robby T. Tan
By increasing the noise values until they approach the pixel intensities of the glow- and light-effect-blended images, our augmentation becomes severe, resulting in stronger priors.
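The severe-augmentation idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name and the uniform sampling of the noise level are assumptions; the key point is that the noise standard deviation is allowed to grow toward the scale of the pixel intensities themselves.

```python
import numpy as np

def severe_noise_augment(image, max_sigma=0.5, rng=None):
    """Hypothetical sketch of severe noise augmentation: the Gaussian
    noise level is drawn up to a scale comparable to the pixel
    intensities (image assumed in [0, 1])."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = rng.uniform(0.0, max_sigma)  # noise std can approach pixel scale
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

# Example: augment a flat mid-gray image patch.
img = np.full((4, 4, 3), 0.3)
aug = severe_noise_augment(img, max_sigma=0.4)
print(aug.shape)
```

At `max_sigma` near the pixel range, the noise can overwhelm the original signal, which is what makes the augmentation "severe" and forces the model to learn stronger priors.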
1 code implementation • 29 Feb 2024 • Bingchen Li, Xin Li, Hanxin Zhu, Yeying Jin, Ruoyu Feng, Zhizheng Zhang, Zhibo Chen
In particular, one discriminator is utilized to enable the SR network to learn the distribution of real-world high-quality images in an adversarial training manner.
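The adversarial scheme described above follows the standard GAN recipe: the discriminator is trained to separate real high-quality images from SR outputs, and the SR network is trained to fool it. The sketch below shows the usual binary cross-entropy formulation with toy score vectors; it is an assumption that the paper uses this exact loss, and `bce` and the sample scores are illustrative only.

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy on discriminator probabilities in (0, 1)."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

# Toy discriminator scores: d_real on real HQ images, d_fake on SR outputs.
d_real = np.array([0.9, 0.8, 0.95])
d_fake = np.array([0.2, 0.1, 0.3])

# Discriminator: real -> 1, fake -> 0.
d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))
# SR network (generator): tries to make the discriminator output 1 on its results.
g_loss = bce(d_fake, np.ones_like(d_fake))
print(d_loss, g_loss)
```

Minimizing `g_loss` pushes the SR network's outputs toward the distribution of real-world high-quality images, which is the adversarial training effect the excerpt refers to.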
no code implementations • 3 Feb 2024 • Zhuoran Zheng, Chen Wu, Wei Wang, Yeying Jin, Xiuyi Jia
In this paper, we unfold a new perspective on polyp segmentation modeling by leveraging the Depth Anything Model (DAM) to provide a depth prior to polyp segmentation models.
no code implementations • 1 Jan 2024 • Beibei Lin, Yeying Jin, Wending Yan, Wei Ye, Yuan Yuan, Shunli Zhang, Robby T. Tan
However, the intricacies of the real world, particularly light effects and low-light regions affected by noise, create significant domain gaps; models trained on synthetic data therefore struggle to remove rain streaks properly and suffer from over-saturation and color shifts.
1 code implementation • 3 Aug 2023 • Yeying Jin, Beibei Lin, Wending Yan, Yuan Yuan, Wei Ye, Robby T. Tan
In this paper, we enhance the visibility from a single nighttime haze image by suppressing glow and enhancing low-light regions.
1 code implementation • 27 Nov 2022 • Yeying Jin, Ruoteng Li, Wenhan Yang, Robby T. Tan
To further enforce the reflectance layer to be independent of shadows and specularities in the second-stage refinement, we introduce an S-Aware network that distinguishes the reflectance image from the input image.
1 code implementation • 15 Nov 2022 • Yeying Jin, Wei Ye, Wenhan Yang, Yuan Yuan, Robby T. Tan
Most existing methods rely on binary shadow masks, without considering the ambiguous boundaries of soft and self shadows.
1 code implementation • 6 Oct 2022 • Yeying Jin, Wending Yan, Wenhan Yang, Robby T. Tan
Few existing image defogging or dehazing methods consider dense and non-uniform particle distributions, which commonly occur in smoke, dust, and fog.
Ranked #1 on Image Dehazing on O-Haze
1 code implementation • 21 Jul 2022 • Yeying Jin, Wenhan Yang, Robby T. Tan
To address this problem, we need to suppress the light effects in bright regions while simultaneously boosting the intensity of dark regions.
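The suppress-bright/boost-dark idea can be illustrated with a toy intensity remapping. This is not the paper's learned network, just a sketch under simple assumptions: the thresholds, the gamma value, and the linear compression of highlights are all hypothetical choices.

```python
import numpy as np

def rebalance(image, bright_thresh=0.8, dark_thresh=0.2, gamma=0.5):
    """Toy sketch: attenuate pixels above a brightness threshold (light
    effects) and gamma-boost pixels below a dark threshold (low light).
    Image assumed in [0, 1]; grayscale or HxWx3."""
    lum = image.mean(axis=-1, keepdims=True) if image.ndim == 3 else image
    out = image.copy()
    # Compress highlights above the threshold to suppress light effects.
    out = np.where(lum > bright_thresh,
                   bright_thresh + (out - bright_thresh) * 0.5, out)
    # Gamma < 1 lifts dark intensities toward the mid-range.
    out = np.where(lum < dark_thresh, out ** gamma, out)
    return np.clip(out, 0.0, 1.0)

# Dark, mid-tone, and bright pixels of a grayscale row.
row = np.array([[0.05, 0.5, 0.95]])
print(rebalance(row))
```

Dark pixels are lifted, bright pixels are compressed, and mid-tones pass through unchanged; the actual method learns this behavior spatially rather than applying fixed thresholds.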
Ranked #23 on Low-Light Image Enhancement on LOL
1 code implementation • ICCV 2021 • Yeying Jin, Aashish Sharma, Robby T. Tan
To address the problem, in this paper, we propose an unsupervised domain-classifier guided shadow removal network, DC-ShadowNet.
Ranked #2 on Shadow Removal on SRD