no code implementations • 5 Oct 2023 • Jie-Jing Shao, Jiang-Xin Shi, Xiao-Wen Yang, Lan-Zhe Guo, Yu-Feng Li
Contrastive Language-Image Pre-training (CLIP) provides a foundation model by integrating natural language into visual concepts, enabling zero-shot recognition on downstream tasks.
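The zero-shot recognition described here works by embedding an image and a set of class-name prompts into a shared space, then picking the class whose text embedding is most similar to the image embedding. A minimal sketch of that matching step, using random toy vectors in place of real CLIP embeddings (all names and data here are illustrative, not the paper's implementation):

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs):
    """Return the index of the class whose (L2-normalized) text
    embedding has the highest cosine similarity with the image."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img  # one cosine similarity per class
    return int(np.argmax(sims)), sims

# Toy embeddings standing in for CLIP outputs.
rng = np.random.default_rng(0)
text_embs = rng.normal(size=(3, 16))              # 3 class prompts, dim 16
image_emb = text_embs[1] + 0.1 * rng.normal(size=16)  # image near class 1

pred, sims = zero_shot_classify(image_emb, text_embs)
print(pred)  # -> 1
```

In a real pipeline the embeddings would come from CLIP's image and text encoders; the argmax-over-cosine-similarity step is the same.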
1 code implementation • 18 Sep 2023 • Jiang-Xin Shi, Tong Wei, Zhi Zhou, Xin-Yan Han, Jie-Jing Shao, Yu-Feng Li
In this paper, we propose PEL, a fine-tuning method that can effectively adapt pre-trained models to long-tailed recognition tasks in fewer than 20 epochs without the need for extra data.
Ranked #1 on Long-tail Learning on CIFAR-100-LT (ρ=10) (using extra training data)
Fine-Grained Image Classification • Long-tail learning with class descriptors
4 code implementations • 8 Oct 2022 • Tong Wei, Zhen Mao, Jiang-Xin Shi, Yu-Feng Li, Min-Ling Zhang
Multi-label learning has attracted significant attention from both academia and industry in recent decades.
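In multi-label learning each instance carries a subset of labels rather than a single class. The simplest baseline, binary relevance, trains one independent binary classifier per label; a sketch with a trivial prototype-based per-label classifier (this is a generic illustration of the setting, not the method of the paper above):

```python
import numpy as np

def fit_binary_relevance(X, Y):
    """One (positive-mean, negative-mean) prototype pair per label column."""
    protos = []
    for j in range(Y.shape[1]):
        pos = X[Y[:, j] == 1].mean(axis=0)
        neg = X[Y[:, j] == 0].mean(axis=0)
        protos.append((pos, neg))
    return protos

def predict_binary_relevance(X, protos):
    """Label j is predicted on iff x is closer to its positive prototype."""
    Y_hat = np.zeros((len(X), len(protos)), dtype=int)
    for j, (pos, neg) in enumerate(protos):
        d_pos = np.linalg.norm(X - pos, axis=1)
        d_neg = np.linalg.norm(X - neg, axis=1)
        Y_hat[:, j] = (d_pos < d_neg).astype(int)
    return Y_hat

X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
protos = fit_binary_relevance(X, Y)
print(predict_binary_relevance(X, protos))  # recovers Y on this toy data
```

Binary relevance ignores label correlations, which is exactly the limitation that more advanced multi-label methods address.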
4 code implementations • 12 Aug 2022 • Yidong Wang, Hao Chen, Yue Fan, Wang Sun, Ran Tao, Wenxin Hou, RenJie Wang, Linyi Yang, Zhi Zhou, Lan-Zhe Guo, Heli Qi, Zhen Wu, Yu-Feng Li, Satoshi Nakamura, Wei Ye, Marios Savvides, Bhiksha Raj, Takahiro Shinozaki, Bernt Schiele, Jindong Wang, Xing Xie, Yue Zhang
We further provide the pre-trained versions of the state-of-the-art neural models for CV tasks to make the cost affordable for further tuning.
1 code implementation • 9 Aug 2022 • Lin-Han Jia, Lan-Zhe Guo, Zhi Zhou, Yu-Feng Li
The second part demonstrates the usage of LAMDA-SSL in detail through abundant examples.
no code implementations • 12 Feb 2022 • Lan-Zhe Guo, Zhi Zhou, Yu-Feng Li
Semi-supervised learning (SSL) is the branch of machine learning that aims to improve learning performance by leveraging unlabeled data when labels are insufficient.
no code implementations • NeurIPS 2021 • Zhi Zhou, Lan-Zhe Guo, Zhanzhan Cheng, Yu-Feng Li, ShiLiang Pu
However, in many real-world applications, it is desirable to have SSL algorithms that not only classify the samples drawn from the same distribution of labeled data but also detect out-of-distribution (OOD) samples drawn from an unknown distribution.
Out-of-Distribution (OOD) Detection
no code implementations • 22 Oct 2021 • Tong Wei, Jiang-Xin Shi, Yu-Feng Li, Min-Ling Zhang
Deep neural networks have been shown to be very powerful methods for many supervised learning tasks.
no code implementations • 1 Sep 2021 • Yi Xu, Lei Shang, Jinxing Ye, Qi Qian, Yu-Feng Li, Baigui Sun, Hao Li, Rong Jin
In this work we develop a simple yet powerful framework, whose key idea is to select a subset of training examples from the unlabeled data when performing existing SSL methods so that only the unlabeled examples with pseudo labels related to the labeled data will be used to train models.
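The selection idea described here, keeping only unlabeled examples whose pseudo-labels relate confidently to the labeled data, can be illustrated with a simple confidence filter over predicted class probabilities (a hedged sketch of the general pattern, not the paper's exact criterion):

```python
import numpy as np

def select_unlabeled(probs, threshold=0.9):
    """Keep unlabeled examples whose max predicted class probability
    reaches `threshold`; return their indices and pseudo-labels."""
    conf = probs.max(axis=1)
    keep = np.where(conf >= threshold)[0]
    return keep, probs[keep].argmax(axis=1)

probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],   # ambiguous -> filtered out
                  [0.10, 0.90]])
idx, pseudo = select_unlabeled(probs, threshold=0.9)
print(idx, pseudo)  # -> [0 2] [0 1]
```

Only the retained examples and their pseudo-labels would then be fed to the downstream SSL method, shielding it from unrelated or low-confidence unlabeled data.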
no code implementations • 26 Aug 2021 • Tong Wei, Jiang-Xin Shi, Wei-Wei Tu, Yu-Feng Li
To overcome this limitation, we establish a new prototypical noise detection method by designing a distance-based metric that is resistant to label noise.
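A distance-based noise metric of this flavor can be sketched as follows: compute a prototype (mean feature) per class and flag samples that sit unusually far from their own class's prototype as likely mislabeled. This is a generic illustration under assumed toy features, not the paper's exact formulation:

```python
import numpy as np

def flag_noisy(X, y, k=1.5):
    """Flag sample i as noisy if its distance to its own class prototype
    exceeds the class mean distance by more than k standard deviations."""
    flags = np.zeros(len(X), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        proto = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - proto, axis=1)
        flags[idx] = d > d.mean() + k * d.std()
    return flags

X = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [10., 10.],
              [5., 5.], [5., 6.], [6., 5.]])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1])
flags = flag_noisy(X, y)
print(flags)  # only the (10, 10) point labeled class 0 is flagged
```

Because the prototype averages over many samples, a few mislabeled points perturb it only mildly, which is the source of the robustness to label noise.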
Ranked #25 on Image Classification on mini WebVision 1.0
no code implementations • ICCV 2021 • Zhi-Fan Wu, Tong Wei, Jianwen Jiang, Chaojie Mao, Mingqian Tang, Yu-Feng Li
The existence of noisy data is prevalent in both the training and testing phases of machine learning systems, which inevitably leads to the degradation of model performance.
Ranked #18 on Image Classification on mini WebVision 1.0
no code implementations • 1 Jan 2021 • Tong Wei, Wei-Wei Tu, Yu-Feng Li
Extreme multi-label learning (XML) aims to annotate objects with relevant labels from an extremely large label set.
no code implementations • 19 Jan 2020 • Lan-Zhe Guo, Feng Kuang, Zhang-Xun Liu, Yu-Feng Li, Nan Ma, Xiao-Hu Qie
For example, in user experience enhancement at Didi, one of the largest online ride-sharing platforms, the ride comment data contains severe label noise (due to the subjective factors of passengers) and severe label distribution bias (due to sampling bias).
no code implementations • 22 Apr 2019 • Lan-Zhe Guo, Yu-Feng Li, Ming Li, Jin-Feng Yi, Bo-Wen Zhou, Zhi-Hua Zhou
We guide the optimization of label quality through a small amount of validation data, ensuring safe performance while maximizing the performance gain.
no code implementations • 28 Nov 2018 • Guoxin Fan, Huaqing Liu, Zhenhua Wu, Yu-Feng Li, Chaobo Feng, Dongdong Wang, Jie Luo, Xiaofei Guan, William M. Wells III, Shisheng He
Pixel accuracy, IoU, and Dice score are used to assess the segmentation performance of lumbosacral structures.
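The three metrics named here are standard and can be computed directly from binary masks; a minimal sketch (for multi-class segmentation they would be computed per structure and averaged):

```python
import numpy as np

def seg_metrics(pred, target):
    """Pixel accuracy, IoU, and Dice score for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    acc = (pred == target).mean()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return acc, iou, dice

pred = np.array([[1, 1, 0, 0]])
target = np.array([[1, 0, 1, 0]])
print(seg_metrics(pred, target))  # -> (0.5, 0.333..., 0.5)
```

Note the relationship Dice = 2·IoU / (1 + IoU): Dice always weights the overlap more generously than IoU.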
no code implementations • 6 Mar 2013 • Yu-Feng Li, Ivor W. Tsang, James T. Kwok, Zhi-Hua Zhou
In this paper, we study the problem of learning from weakly labeled data, where labels of the training examples are incomplete.
no code implementations • NeurIPS 2012 • Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, Zhi-Hua Zhou
Both random Fourier features and the Nyström method have been successfully applied to efficient kernel learning.
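Both techniques replace the n×n kernel matrix K with an explicit low-rank feature map Z such that Z·Zᵀ ≈ K. A sketch of the random Fourier features side for an RBF kernel (a generic illustration of the technique, not this paper's unified analysis; the function name and parameters are ours):

```python
import numpy as np

def rff_features(X, n_features=2000, gamma=1.0, seed=0):
    """Random Fourier features approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from N(0, 2*gamma*I) match this kernel's spectrum.
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
Z = rff_features(X)                    # (20, 2000) explicit features
K_approx = Z @ Z.T
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K_exact = np.exp(-sq)                  # exact RBF kernel, gamma = 1
err = np.abs(K_approx - K_exact).max()
print(err)  # entrywise error shrinks as n_features grows
```

The Nyström method instead builds the low-rank map from a sampled subset of columns of K itself, which is the data-dependent counterpart this line of work compares against.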