Search Results for author: Tiancheng Lin

Found 12 papers, 4 papers with code

Background Clustering Pre-training for Few-shot Segmentation

no code implementations • 6 Dec 2023 • Zhimiao Yu, Tiancheng Lin, Yi Xu

In this paper, we propose a new pre-training scheme for few-shot segmentation (FSS) that decouples novel classes from the background, called Background Clustering Pre-Training (BCPT).

Few-Shot Semantic Segmentation
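
The background-clustering idea described above lends itself to a short sketch: cluster the unlabeled background pixels of base-class images into pseudo sub-classes and train on the enlarged label set. This is a minimal illustration under assumed feature shapes, cluster count, and label offset, not the paper's implementation.

```python
# Minimal sketch of background clustering for pre-training (illustrative only).
# Assumes per-pixel embeddings from a backbone; hyperparameters are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

def cluster_background(pixel_feats, mask, num_clusters=5):
    """Assign pseudo sub-class labels to background pixels.

    pixel_feats: (H*W, D) array of per-pixel embeddings.
    mask: (H*W,) array, 0 = background, >0 = annotated base classes.
    Returns a pseudo-label map in which background pixels receive cluster ids
    offset past the base-class ids, while foreground labels are kept as-is.
    """
    pseudo = mask.copy()
    bg = mask == 0
    if bg.sum() >= num_clusters:
        km = KMeans(n_clusters=num_clusters, n_init=10, random_state=0)
        cluster_ids = km.fit_predict(pixel_feats[bg])
        pseudo[bg] = mask.max() + 1 + cluster_ids  # background sub-classes
    return pseudo

# Toy usage: a 16x16 feature map flattened to 256 pixels with 8-dim embeddings.
feats = np.random.randn(256, 8)
mask = np.random.randint(0, 3, size=256)  # 0 = background, 1..2 = base classes
print(np.unique(cluster_background(feats, mask)))
```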

Context-Aware Prompt Tuning for Vision-Language Model with Dual-Alignment

no code implementations • 8 Sep 2023 • Hongyu Hu, Tiancheng Lin, Jie Wang, Zhenbang Sun, Yi Xu

To achieve this, we introduce a pre-trained LLM to generate context descriptions and encourage the prompts to learn from the LLM's knowledge through alignment, together with an alignment between the prompts and local image features.

Language Modelling, Zero-Shot Learning
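
A hedged sketch of the dual-alignment objective described above: one term pulls each class prompt toward the embedding of its LLM-generated description, the other keeps prompts close to local image features. The shapes, cosine-similarity losses, and max-over-prompts pooling are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a dual-alignment loss for prompt tuning (assumed shapes and losses).
import torch
import torch.nn.functional as F

def dual_alignment_loss(prompt_feats, llm_text_feats, local_img_feats):
    """prompt_feats:    (C, D) one feature per class prompt
       llm_text_feats:  (C, D) embeddings of LLM-generated class descriptions
       local_img_feats: (B, N, D) patch/local features of B images"""
    p = F.normalize(prompt_feats, dim=-1)
    t = F.normalize(llm_text_feats, dim=-1)
    v = F.normalize(local_img_feats, dim=-1)

    # Alignment with LLM knowledge: match prompts and descriptions class-wise.
    loss_text = (1.0 - (p * t).sum(dim=-1)).mean()

    # Alignment with local image features: each patch is matched to its
    # closest prompt, keeping prompts grounded in local visual content.
    sim = torch.einsum("bnd,cd->bnc", v, p)           # (B, N, C)
    loss_img = (1.0 - sim.max(dim=-1).values).mean()  # best prompt per patch

    return loss_text + loss_img

# Toy shapes: 10 classes, 512-dim features, 2 images with 49 patches each.
loss = dual_alignment_loss(torch.randn(10, 512), torch.randn(10, 512),
                           torch.randn(2, 49, 512))
print(loss.item())
```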

Relational Contrastive Learning for Scene Text Recognition

1 code implementation • 1 Aug 2023 • Jinglei Zhang, Tiancheng Lin, Yi Xu, Kai Chen, Rui Zhang

We argue that such prior contextual information can be interpreted as relations among textual primitives, owing to the heterogeneity of text and background, and can therefore provide effective self-supervised labels for representation learning.

Contrastive Learning, Representation Learning +1
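
As a rough illustration of using textual primitives for self-supervision, the sketch below applies a generic InfoNCE loss over window-level features of two augmented views of a text image. It is not the paper's relational formulation; the window granularity, names, and temperature are assumptions.

```python
# Generic InfoNCE over window-level ("primitive") features of a text image.
import torch
import torch.nn.functional as F

def window_infonce(feats_a, feats_b, temperature=0.1):
    """feats_a, feats_b: (W, D) features of W horizontal windows under two augmentations.
    Windows at the same position form positive pairs; all others serve as negatives."""
    a = F.normalize(feats_a, dim=-1)
    b = F.normalize(feats_b, dim=-1)
    logits = a @ b.t() / temperature      # (W, W) similarity matrix
    targets = torch.arange(a.size(0))     # the matching window index is the positive
    return F.cross_entropy(logits, targets)

print(window_infonce(torch.randn(8, 128), torch.randn(8, 128)).item())
```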

SLPD: Slide-level Prototypical Distillation for WSIs

1 code implementation • 20 Jul 2023 • Zhimiao Yu, Tiancheng Lin, Yi Xu

Specifically, we iteratively perform intra-slide clustering for the regions (4096x4096 patches) within each WSI to yield the prototypes and encourage the region representations to be closer to the assigned prototypes.

Representation Learning, Self-Supervised Learning
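
The intra-slide clustering step above can be sketched as: run k-means over the region embeddings of one WSI to obtain prototypes, then penalize the cosine distance between each region and its assigned prototype. The cluster count, shapes, and exact loss form are assumptions for illustration, not the paper's implementation.

```python
# Sketch of intra-slide clustering with a prototype-alignment objective.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def slide_prototype_loss(region_feats, num_prototypes=4):
    """region_feats: (R, D) embeddings of the R regions (e.g. 4096x4096 patches) of one WSI.
    Cluster regions into prototypes, then pull each region toward its assigned prototype."""
    x = region_feats.detach().cpu().numpy()
    km = KMeans(n_clusters=num_prototypes, n_init=10, random_state=0).fit(x)
    protos = torch.tensor(km.cluster_centers_, dtype=region_feats.dtype)
    assign = torch.tensor(km.labels_, dtype=torch.long)

    r = F.normalize(region_feats, dim=-1)
    p = F.normalize(protos, dim=-1)
    # Cosine distance between each region and its assigned prototype.
    return (1.0 - (r * p[assign]).sum(dim=-1)).mean()

print(slide_prototype_loss(torch.randn(32, 256)).item())
```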

Solving The Long-Tailed Problem via Intra- and Inter-Category Balance

no code implementations • 20 Apr 2022 • Renhui Zhang, Tiancheng Lin, Rui Zhang, Yi Xu

Benchmark datasets for visual recognition assume that data is uniformly distributed, while real-world datasets follow a long-tailed distribution.

Interventional Multi-Instance Learning with Deconfounded Instance-Level Prediction

no code implementations • 20 Apr 2022 • Tiancheng Lin, Hongteng Xu, Canqian Yang, Yi Xu

When applying multi-instance learning (MIL) to make predictions for bags of instances, the prediction accuracy of an instance often depends not only on the instance itself but also on its context in the corresponding bag.

Causal Inference
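
To make the role of bag context concrete, here is a generic attention-based MIL sketch in which each instance's weight, and hence the bag prediction, depends on the whole bag. This is a standard baseline for illustration only, not the paper's interventional, deconfounded formulation; module names and sizes are assumptions.

```python
# Generic attention-based MIL: instance weights are computed in the context of the bag.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, in_dim=256, hid_dim=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Tanh(),
                                  nn.Linear(hid_dim, 1))
        self.classifier = nn.Linear(in_dim, 1)

    def forward(self, bag):                             # bag: (N, in_dim) instance features
        weights = torch.softmax(self.attn(bag), dim=0)  # (N, 1) context-dependent weights
        bag_feat = (weights * bag).sum(dim=0)           # weighted bag representation
        return self.classifier(bag_feat), weights       # bag logit + instance weights

model = AttentionMIL()
logit, w = model(torch.randn(20, 256))
print(logit.shape, w.shape)
```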

Enhancing Non-mass Breast Ultrasound Cancer Classification With Knowledge Transfer

no code implementations • 18 Apr 2022 • Yangrun Hu, Yuanfan Guo, Fan Zhang, Mingda Wang, Tiancheng Lin, Rong Wu, Yi Xu

Based on the insight that mass data is abundant and shares with non-mass data the same underlying knowledge of identifying the malignancy of a lesion from an ultrasound image, we propose a novel transfer learning framework that enhances the generalizability of DNN models for non-mass BUS with the help of mass BUS.

Classification, Transfer Learning
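
A generic transfer-learning sketch consistent with the description above: initialize a classifier from weights learned on the abundant mass BUS data, then fine-tune it for non-mass BUS. The checkpoint path, backbone choice, and head size are hypothetical placeholders, not the paper's setup.

```python
# Transfer-learning sketch: mass-BUS-pretrained backbone, fine-tuned for non-mass BUS.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def build_nonmass_classifier(mass_checkpoint="mass_bus_pretrained.pth", num_classes=2):
    model = resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    # Load weights learned on the (abundant) mass BUS task, if a checkpoint exists.
    try:
        state = torch.load(mass_checkpoint, map_location="cpu")
        model.load_state_dict(state, strict=False)  # head may differ; load what matches
    except FileNotFoundError:
        pass  # fall back to random init when no mass-pretrained weights are available
    return model

model = build_nonmass_classifier()
print(model(torch.randn(1, 3, 224, 224)).shape)  # (1, num_classes)
```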

Self Supervised Lesion Recognition For Breast Ultrasound Diagnosis

no code implementations • 18 Apr 2022 • Yuanfan Guo, Canqian Yang, Tiancheng Lin, Chunxiao Li, Rui Zhang, Yi Xu

Since an ultrasound image only describes a partial 2D projection of a 3D lesion, such a paradigm ignores the semantic relationship between different views of a lesion, which is inconsistent with traditional diagnosis, where sonographers analyze a lesion from at least two views.

Contrastive Learning

Boosting Unsupervised Domain Adaptation with Soft Pseudo-label and Curriculum Learning

no code implementations • 3 Dec 2021 • Shengjia Zhang, Tiancheng Lin, Yi Xu

To avoid overfitting on the source domain, at the second stage we propose a curriculum learning strategy that adaptively controls the weighting between the losses from the two domains, so that the focus of training gradually shifts from the source distribution to the target distribution while prediction confidence on the target domain is boosted.

Pseudo Label, Unsupervised Domain Adaptation
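
The adaptive weighting idea can be sketched as a schedule that shifts the loss weight from the source-domain term to the target-domain (pseudo-label) term as training proceeds, scaled by prediction confidence on the target domain. The linear schedule and confidence scaling below are illustrative assumptions, not the paper's exact strategy.

```python
# Sketch of an adaptive source/target loss weighting for the second training stage.
import torch

def combined_uda_loss(source_loss, target_loss, epoch, total_epochs, target_confidence):
    """Linearly shift weight toward the target-domain (pseudo-label) loss over training,
    and scale it by the mean prediction confidence on target samples (in [0, 1])."""
    progress = epoch / max(total_epochs - 1, 1)   # 0 -> 1 over training
    w_target = progress * target_confidence       # trust the target loss more when confident
    w_source = 1.0 - progress
    return w_source * source_loss + w_target * target_loss

# Toy usage with scalar losses and a confidence estimate.
print(combined_uda_loss(torch.tensor(0.9), torch.tensor(1.4),
                        epoch=5, total_epochs=20, target_confidence=0.7))
```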

MIA-Prognosis: A Deep Learning Framework to Predict Therapy Response

1 code implementation • 8 Oct 2020 • Jiancheng Yang, Jiajun Chen, Kaiming Kuang, Tiancheng Lin, Junjun He, Bingbing Ni

Furthermore, we evaluate the proposed method on an in-house, retrospective dataset of real-world non-small cell lung cancer patients under anti-PD-1 immunotherapy.


Text-To-Speech Synthesis, Time Series +2
