no code implementations • 26 Apr 2024 • Yushen Xu, Xiaosong Li, Yuchan Jie, Haishu Tan
In clinical practice, tri-modal medical image fusion, compared to the existing dual-modal technique, can provide a more comprehensive view of the lesions, aiding physicians in evaluating the disease's shape, location, and biological activity.
no code implementations • 3 Feb 2024 • Xilai Li, Xiaosong Li, Haishu Tan
Infrared and visible image fusion has emerged as a prominent research area in computer vision.
no code implementations • 3 Feb 2024 • Xilai Li, Wuyang Liu, Xiaosong Li, Haishu Tan
To bridge this research gap, we propose an all-weather MMIF model.
no code implementations • 2 Feb 2024 • Yuchan Jie, Yushen Xu, Xiaosong Li, Haishu Tan
Multi-modality image fusion involves integrating complementary information from different modalities into a single image.
1 code implementation • 16 Jan 2024 • Xilai Li, Xiaosong Li, Haishu Tan, Jinyang Li
Existing multi-focus image fusion (MFIF) methods often fail to preserve the uncertain transition region and detect small focus areas within large defocused regions accurately.
1 code implementation • 3 Nov 2023 • Xilai Li, Xiaosong Li, Tao Ye, Xiaoqi Cheng, Wuyang Liu, Haishu Tan
However, the fusion of multiple visible images with different focal regions and infrared images is an unprecedented challenge in real MMIF applications.
no code implementations • Knowledge-Based Systems 2022 • Changan Yi, Haotian Chen, Yonghui Xu, Yong Liu, Lei Jiang, Haishu Tan
Accordingly, ATPL uses the pseudo-labeled information to improve the adversarial training process, which guarantees feature transferability by generating adversarial data to fill the domain gap.