no code implementations • 18 Mar 2024 • Hongxiao Wang, Yang Yang, Zhuo Zhao, Pengfei Gu, Nishchal Sapkota, Danny Z. Chen
For predicting cancer survival outcomes, standard approaches in clinical research are often based on two main modalities: pathology images for observing cell morphology features, and genomic data (e.g., bulk RNA-seq) for quantifying gene expression.
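The entry does not describe how the two modalities are combined; a minimal late-fusion sketch, assuming hypothetical embedding sizes and a linear risk head (a stand-in for a Cox-style hazard predictor, not the paper's method):

```python
import numpy as np

def fuse_and_score(path_feat: np.ndarray, gene_feat: np.ndarray,
                   w: np.ndarray) -> float:
    """Late fusion for survival prediction: concatenate a pathology-image
    embedding with a gene-expression embedding, then apply a linear
    risk head to produce a scalar risk score."""
    fused = np.concatenate([path_feat, gene_feat])
    return float(fused @ w)

path_feat = np.ones(4)   # hypothetical pathology embedding
gene_feat = np.ones(3)   # hypothetical bulk RNA-seq embedding
risk = fuse_and_score(path_feat, gene_feat, np.full(7, 0.1))
print(round(risk, 1))  # 0.7
```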
1 code implementation • 6 Feb 2024 • Nishchal Sapkota, Yejia Zhang, Sirui Li, Peixian Liang, Zhuo Zhao, Jingjing Zhang, Xiaomin Zha, Yiru Zhou, Yunxia Cao, Danny Z. Chen
We propose a new approach for sperm head morphology classification, called SHMC-Net, which uses segmentation masks of sperm heads to guide the morphology classification of sperm images.
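SHMC-Net's exact guidance mechanism is not spelled out here; one common way a segmentation mask can guide classification is to stack it with the image as an extra input channel, sketched below with assumed shapes:

```python
import numpy as np

def mask_guided_input(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stack a grayscale sperm-head image with its binary segmentation
    mask along the channel axis, so a downstream classifier can focus
    on the head region. Shapes: image (H, W), mask (H, W) -> (2, H, W)."""
    if image.shape != mask.shape:
        raise ValueError("image and mask must have matching spatial shapes")
    img = image.astype(np.float32) / max(int(image.max()), 1)  # scale to [0, 1]
    m = (mask > 0).astype(np.float32)                          # binarize
    return np.stack([img, m], axis=0)

x = mask_guided_input(np.full((64, 64), 128, dtype=np.uint8),
                      np.ones((64, 64), dtype=np.uint8))
print(x.shape)  # (2, 64, 64)
```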
no code implementations • 6 Feb 2024 • Nishchal Sapkota, Yejia Zhang, Susan M. Motch Perrine, Yuhan Hsi, Sirui Li, Meng Wu, Greg Holmes, Abdul R. Abdulai, Ethylin W. Jabs, Joan T. Richtsmeier, Danny Z. Chen
Experiments on the mouse cartilage dataset show that our new model outperforms other competitive segmentation models.
1 code implementation • 23 Jul 2023 • Yejia Zhang, Pengfei Gu, Nishchal Sapkota, Danny Z. Chen
Modern medical image segmentation methods primarily use discrete representations in the form of rasterized masks to learn features and generate predictions.
no code implementations • 16 Nov 2022 • Yejia Zhang, Nishchal Sapkota, Pengfei Gu, Yaopeng Peng, Hao Zheng, Danny Z. Chen
Understanding of spatial attributes is central to effective 3D radiology image analysis where crop-based learning is the de facto standard.
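How the paper injects spatial attributes is not given in this snippet; a simple illustration of retaining spatial context in crop-based learning is to return each 3D crop together with its normalized center coordinates (an assumed scheme, not necessarily the authors'):

```python
import numpy as np

def crop_with_position(volume: np.ndarray, center: tuple, size: int):
    """Extract a cubic crop from a 3D volume and return it along with
    the crop center normalized to [0, 1] per axis -- one simple way to
    keep the spatial context that plain crop-based learning discards."""
    z, y, x = center
    h = size // 2
    crop = volume[z - h:z + h, y - h:y + h, x - h:x + h]
    pos = np.array(center, dtype=np.float32) / np.array(volume.shape)
    return crop, pos

vol = np.zeros((128, 128, 128), dtype=np.float32)
crop, pos = crop_with_position(vol, (64, 64, 64), 32)
print(crop.shape, pos)  # (32, 32, 32) [0.5 0.5 0.5]
```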
no code implementations • 15 Nov 2022 • Yejia Zhang, Xinrong Hu, Nishchal Sapkota, Yiyu Shi, Danny Z. Chen
Self-supervised instance discrimination is an effective contrastive pretext task to learn feature representations and address limited medical image annotations.
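The instance-discrimination pretext task is typically trained with an InfoNCE-style contrastive loss, where each embedding must match another view of the same instance against all other instances in the batch; a minimal numpy sketch (a generic formulation, not this paper's specific loss):

```python
import numpy as np

def info_nce(anchors: np.ndarray, positives: np.ndarray,
             temperature: float = 0.1) -> float:
    """InfoNCE loss for instance discrimination: row i of `anchors`
    should be most similar to row i of `positives` (another view of
    the same instance); all other rows serve as negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))   # diagonal = matching pairs

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Identical views give a near-zero loss; unrelated views give a high loss.
print(info_nce(z, z) < info_nce(z, rng.normal(size=(8, 16))))  # True
```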
no code implementations • 15 Nov 2022 • Yejia Zhang, Pengfei Gu, Nishchal Sapkota, Hao Zheng, Peixian Liang, Danny Z. Chen
High annotation costs and limited labels for dense 3D medical imaging tasks have recently motivated an assortment of 3D self-supervised pretraining methods that improve transfer learning performance.