no code implementations • 24 Apr 2024 • Jiaxin Zhuang, Linshan Wu, Qiong Wang, Varut Vardhanabhuti, Lin Luo, Hao Chen
We further scale up MiM to large pre-training datasets with more than 10k volumes, showing that large-scale pre-training can further enhance the performance of downstream tasks.
1 code implementation • 20 Mar 2024 • Linshan Wu, Zhun Zhong, Jiayi Ma, Yunchao Wei, Hao Chen, Leyuan Fang, Shutao Li
Based on the label distributions, we leverage a Gaussian Mixture Model (GMM) to generate high-quality pseudo labels for more reliable supervision.
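As a rough illustration of the pseudo-labeling idea, the sketch below fits a two-component 1-D Gaussian mixture with EM and keeps only high-confidence assignments as pseudo labels. This is a minimal, numpy-only toy (function names, the confidence threshold, and the 1-D setting are my assumptions), not the paper's actual formulation over pixel features.

```python
import numpy as np

def fit_gmm_1d(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture with EM (illustrative sketch)."""
    mu = np.array([x.min(), x.max()])          # deterministic, well-separated init
    var = np.full(2, x.var() + 1e-6)
    pi = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        d = x[:, None] - mu[None, :]
        p = pi * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means, and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        d = x[:, None] - mu[None, :]
        var = (r * d**2).sum(axis=0) / nk + 1e-6
    return mu, var, pi, r

def pseudo_labels(r, thresh=0.9):
    """Keep only confident component assignments; -1 marks unreliable samples."""
    labels = r.argmax(axis=1)
    labels[r.max(axis=1) < thresh] = -1
    return labels

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.2, 0.05, 200), rng.normal(0.8, 0.05, 200)])
_, _, _, resp = fit_gmm_1d(scores)
labels = pseudo_labels(resp)
```

With two well-separated modes, most samples receive a confident pseudo label and the ambiguous ones are masked out, which is the "more reliable supervision" intuition.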
Weakly-Supervised Semantic Segmentation
1 code implementation • 27 Feb 2024 • Linshan Wu, Jiaxin Zhuang, Hao Chen
Through this pretext task, VoCo implicitly encodes the contextual position priors into model representations without the guidance of annotations, enabling us to effectively improve the performance of downstream tasks that require high-level semantics.
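The contextual-position label behind such a pretext task can be sketched as the fractional overlap of a random crop with a grid of base crops. The toy below is 2-D for brevity (VoCo itself operates on 3-D volumes, and the coordinates and grid here are my assumptions); the resulting soft label is what a model would be trained to regress.

```python
import numpy as np

def overlap_ratio(box, base):
    """Fraction of `box`'s area inside `base`; boxes are [x0, y0, x1, y1]."""
    x0, y0 = max(box[0], base[0]), max(box[1], base[1])
    x1, y1 = min(box[2], base[2]), min(box[3], base[3])
    inter = max(0, x1 - x0) * max(0, y1 - y0)
    area = (box[2] - box[0]) * (box[3] - box[1])
    return inter / area

# 2x2 grid of non-overlapping base crops tiling a 100x100 image
bases = [[0, 0, 50, 50], [50, 0, 100, 50],
         [0, 50, 50, 100], [50, 50, 100, 100]]
crop = [30, 20, 70, 60]   # random crop whose position is to be predicted
target = np.array([overlap_ratio(crop, b) for b in bases])
# `target` encodes where the crop sits, with no annotation required
```

Because the base crops tile the image, the overlap fractions sum to one, so the position label comes for free from the crop geometry.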
1 code implementation • 9 Jan 2024 • Linshan Wu, Ming Lu, Leyuan Fang
Compared with existing category alignment methods, our CR regularizes the correlation between different feature dimensions, and thus performs more robustly when dealing with divergent category features that have imbalanced and inconsistent distributions.
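One common way to regularize correlation across feature dimensions is to penalize the off-diagonal entries of the normalized correlation matrix. The sketch below shows that generic decorrelation penalty (the function name and exact loss form are my assumptions, not necessarily the paper's CR loss).

```python
import numpy as np

def correlation_regularizer(feats, eps=1e-8):
    """Mean squared off-diagonal correlation of an (N, D) feature matrix.

    Zero when the D feature dimensions are uncorrelated; a generic
    decorrelation penalty, used here only to illustrate the idea.
    """
    z = feats - feats.mean(axis=0, keepdims=True)
    z = z / (z.std(axis=0, keepdims=True) + eps)
    corr = (z.T @ z) / len(feats)          # D x D correlation matrix
    off = corr - np.diag(np.diag(corr))    # zero out the diagonal
    return (off**2).mean()

rng = np.random.default_rng(0)
indep = rng.standard_normal((1000, 8))               # nearly uncorrelated dims
dup = np.repeat(rng.standard_normal((1000, 1)), 8, 1)  # fully correlated dims
```

Independent dimensions yield a near-zero penalty, while duplicated dimensions are heavily penalized, which is the behavior a correlation regularizer is meant to enforce.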
1 code implementation • 2 Oct 2023 • Jiaxin Zhuang, Luyang Luo, Zhixuan Chen, Linshan Wu
Initially, a deep model (nnU-Net) trained on datasets with complete organ annotations (about 220 scans) generates pseudo labels for the whole dataset.
1 code implementation • CVPR 2023 • Linshan Wu, Zhun Zhong, Leyuan Fang, Xingxin He, Qiang Liu, Jiayi Ma, Hao Chen
Our AGMM can effectively provide reliable supervision for unlabeled pixels based on the distributions of labeled and unlabeled pixels.