no code implementations • 3 Feb 2024 • Zhe Li, Zhangyang Gao, Cheng Tan, Stan Z. Li, Laurence T. Yang
Experimental results demonstrate that our method enhances the expressive capacity of existing point cloud models and effectively addresses the issue of information leakage.
no code implementations • 3 Feb 2024 • Zhe Li, Laurence T. Yang, Bocheng Ren, Xin Nie, Zhangyang Gao, Cheng Tan, Stan Z. Li
The scarcity of annotated data has sparked significant interest in unsupervised pre-training methods that leverage medical reports as auxiliary signals for medical visual representation learning.
no code implementations • 25 Oct 2023 • Zhe Li, Zhangyang Gao, Cheng Tan, Stan Z. Li, Laurence T. Yang
This model is versatile, allowing fine-tuning for downstream point cloud representation tasks, as well as unconditional and conditional generation tasks.
1 code implementation • 12 Oct 2022 • Dehua Zheng, Xiaochen Zheng, Laurence T. Yang, Yuan Gao, Chenlu Zhu, Yiheng Ruan
In addition, our MFFN exploits the dependencies and interactions between views and channels.