no code implementations • 6 Dec 2023 • Zhimiao Yu, Tiancheng Lin, Yi Xu
In this paper, we propose a new pre-training scheme for FSS that decouples novel classes from the background, called Background Clustering Pre-Training (BCPT).
no code implementations • 8 Sep 2023 • Hongyu Hu, Tiancheng Lin, Jie Wang, Zhenbang Sun, Yi Xu
To achieve this, we introduce a pre-trained LLM to generate context descriptions, and we encourage the prompts to learn from the LLM's knowledge through alignment, as well as through alignment between the prompts and local image features.
1 code implementation • 1 Aug 2023 • Jinglei Zhang, Tiancheng Lin, Yi Xu, Kai Chen, Rui Zhang
We argue that such prior contextual information can be interpreted as relations among textual primitives, since text and background are heterogeneous, and that these relations can provide effective self-supervised labels for representation learning.
1 code implementation • 20 Jul 2023 • Zhimiao Yu, Tiancheng Lin, Yi Xu
Specifically, we iteratively perform intra-slide clustering for the regions (4096x4096 patches) within each WSI to yield the prototypes and encourage the region representations to be closer to the assigned prototypes.
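The intra-slide clustering step described above can be sketched as follows. This is a minimal illustration, assuming each region (4096x4096 patch) has already been embedded as a feature vector; the function names and the plain k-means loop are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def intra_slide_prototypes(region_feats, n_prototypes, n_iters=10):
    """Cluster one WSI's region embeddings (n_regions, dim) into prototypes
    with a plain k-means loop; returns (prototypes, assignments)."""
    # Deterministic init: spread the seed prototypes across the region list.
    idx = np.linspace(0, len(region_feats) - 1, n_prototypes).astype(int)
    protos = region_feats[idx].astype(np.float64)
    assign = np.zeros(len(region_feats), dtype=int)
    for _ in range(n_iters):
        # Assign each region to its nearest prototype.
        dists = np.linalg.norm(
            region_feats[:, None, :] - protos[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        # Update each prototype to the mean of its assigned regions.
        for k in range(n_prototypes):
            members = region_feats[assign == k]
            if len(members):
                protos[k] = members.mean(axis=0)
    return protos, assign

def prototype_loss(region_feats, protos, assign):
    """Encourage regions to be closer to their assigned prototypes:
    mean squared distance between each region and its prototype."""
    return float(np.mean(
        np.sum((region_feats - protos[assign]) ** 2, axis=-1)))
```

In the actual method the clustering and the representation learning would alternate, with the loss back-propagated through the region encoder; the sketch only shows the per-slide prototype assignment and the pull-toward-prototype objective.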
1 code implementation • CVPR 2023 • Tiancheng Lin, Zhimiao Yu, Hongyu Hu, Yi Xu, Chang Wen Chen
This deficiency is a confounder that limits the performance of existing MIL methods.
no code implementations • 20 Apr 2022 • Renhui Zhang, Tiancheng Lin, Rui Zhang, Yi Xu
Benchmark datasets for visual recognition assume that data is uniformly distributed, while real-world datasets follow a long-tailed distribution.
no code implementations • 20 Apr 2022 • Tiancheng Lin, Hongteng Xu, Canqian Yang, Yi Xu
When applying multi-instance learning (MIL) to make predictions for bags of instances, the prediction accuracy for an instance often depends not only on the instance itself but also on its context within the corresponding bag.
no code implementations • 18 Apr 2022 • Yangrun Hu, Yuanfan Guo, Fan Zhang, Mingda Wang, Tiancheng Lin, Rong Wu, Yi Xu
Based on the insight that mass data is abundant and shares with non-mass data the same knowledge structure for identifying lesion malignancy from ultrasound images, we propose a novel transfer learning framework that enhances the generalizability of the DNN model for non-mass BUS with the help of mass BUS.
no code implementations • 18 Apr 2022 • Yuanfan Guo, Canqian Yang, Tiancheng Lin, Chunxiao Li, Rui Zhang, Yi Xu
Since an ultrasound image only describes a partial 2D projection of a 3D lesion, such a paradigm ignores the semantic relationship between different views of a lesion, which is inconsistent with traditional diagnosis, where sonographers analyze a lesion from at least two views.
no code implementations • 3 Dec 2021 • Shengjia Zhang, Tiancheng Lin, Yi Xu
To avoid overfitting on the source domain, at the second stage we propose a curriculum learning strategy that adaptively controls the weighting between the losses from the two domains, so that the focus of training gradually shifts from the source distribution to the target distribution as prediction confidence on the target domain increases.
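The source-to-target loss weighting can be sketched as below. This is a simplified illustration using a linear schedule; the paper's strategy is adaptive and confidence-driven, which is not reproduced here, and all function names are assumptions:

```python
def curriculum_weight(epoch, total_epochs):
    """Shift focus linearly from the source domain to the target domain
    over training; returns (w_source, w_target) with w_source + w_target == 1."""
    t = min(max(epoch / max(total_epochs - 1, 1), 0.0), 1.0)
    return 1.0 - t, t

def combined_loss(loss_src, loss_tgt, epoch, total_epochs):
    """Weighted sum of the two domain losses under the current schedule."""
    w_s, w_t = curriculum_weight(epoch, total_epochs)
    return w_s * loss_src + w_t * loss_tgt
```

Early in training the source loss dominates; by the final epoch only the target loss contributes. An adaptive variant would modulate the target weight by the model's prediction confidence on target samples rather than by epoch alone.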
1 code implementation • 8 Oct 2020 • Jiancheng Yang, Jiajun Chen, Kaiming Kuang, Tiancheng Lin, Junjun He, Bingbing Ni
Furthermore, we evaluate the proposed method on an in-house, retrospective dataset of real-world non-small cell lung cancer patients under anti-PD-1 immunotherapy.
no code implementations • 9 Apr 2020 • Tiancheng Lin, Yuanfan Guo, Canqian Yang, Jiancheng Yang, Yi Xu
Early diagnosis of signet ring cell carcinoma dramatically improves the survival rate of patients.