1 code implementation • 23 Mar 2024 • Huiping Zhuang, Yizhu Chen, Di Fang, Run He, Kai Tong, Hongxin Wei, Ziqian Zeng, Cen Chen
The generalized CIL (GCIL) setting aims to address the CIL problem in a more realistic scenario, where incoming data mix categories and have unknown sample-size distributions, leading to intensified forgetting.
no code implementations • 11 Mar 2024 • Hao Chen, Jindong Wang, Zihan Wang, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj
Foundation models are usually pre-trained on large-scale datasets and then adapted to downstream tasks through tuning.
1 code implementation • 20 Feb 2024 • Hongxin Wei, Jianguo Huang
TorchCP is a Python toolbox for conformal prediction research on deep learning models.
no code implementations • 8 Feb 2024 • Wenyu Jiang, Zhenlong Liu, Zejian Xie, Songxin Zhang, BingYi Jing, Hongxin Wei
In this paper, we propose to treat the learning complexity (LC) as the scoring function for classification and regression tasks.
no code implementations • 8 Feb 2024 • Zhenlong Liu, Lei Feng, Huiping Zhuang, Xiaofeng Cao, Hongxin Wei
In this work, we propose a novel method -- Convex-Concave Loss -- which induces a high variance in the training loss distribution under gradient descent.
no code implementations • 7 Feb 2024 • Shuoyuan Wang, Jindong Wang, Guoqing Wang, Bob Zhang, Kaiyang Zhou, Hongxin Wei
Vision-language models (VLMs) have emerged as formidable tools, showing their strong capability in handling various open-vocabulary tasks in image recognition, text-driven visual content generation, and visual chatbots, to name a few.
no code implementations • 6 Feb 2024 • Huajun Xi, Jianguo Huang, Lei Feng, Hongxin Wei
Conformal prediction, as an emerging uncertainty quantification technique, constructs prediction sets that are guaranteed to contain the true label with high probability.
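As a concrete illustration, the split conformal procedure behind such prediction sets can be sketched in a few lines (a minimal pure-Python sketch with hypothetical helper names, not the API of any particular toolbox):

```python
import math

def conformal_threshold(cal_scores, alpha):
    """(1 - alpha) quantile of the calibration nonconformity scores
    (here, 1 - softmax probability of the true label), with the
    standard finite-sample correction (n + 1 in the numerator)."""
    n = len(cal_scores)
    rank = math.ceil((n + 1) * (1 - alpha))
    return sorted(cal_scores)[min(rank, n) - 1]

def prediction_set(probs, qhat):
    """Include every label whose nonconformity score 1 - p falls
    at or below the calibrated threshold qhat."""
    return {label for label, p in probs.items() if 1 - p <= qhat}
```

Under exchangeability of calibration and test data, sets built this way contain the true label with probability at least 1 - alpha, regardless of how well the underlying model is calibrated.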
no code implementations • 15 Nov 2023 • Xiaobo Xia, Jiale Liu, Shaokun Zhang, Qingyun Wu, Hongxin Wei, Tongliang Liu
Coreset selection is powerful in reducing computational costs and accelerating data processing for deep learning algorithms.
1 code implementation • 28 Oct 2023 • Shuoyuan Wang, Jindong Wang, Huajun Xi, Bob Zhang, Lei Zhang, Hongxin Wei
However, the high computational cost of optimization-based TTA algorithms makes it intractable to run on resource-constrained edge devices.
2 code implementations • 10 Oct 2023 • Jianguo Huang, Huajun Xi, Linjun Zhang, Huaxiu Yao, Yue Qiu, Hongxin Wei
In this paper, we empirically and theoretically show that disregarding the probability values mitigates the undesirable effect of miscalibrated probabilities.
no code implementations • 29 Sep 2023 • Hao Chen, Jindong Wang, Ankit Shah, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
1 code implementation • 12 Jun 2023 • Senlin Shu, Shuo He, Haobo Wang, Hongxin Wei, Tao Xiang, Lei Feng
In this paper, we propose a generalized URE that can be equipped with arbitrary loss functions while maintaining the theoretical guarantees, given unlabeled data for learning with augmented classes (LAC).
1 code implementation • 3 Jun 2023 • Wenyu Jiang, Hao Cheng, Mingcai Chen, Chongjun Wang, Hongxin Wei
Modern neural networks are known to give overconfident predictions for out-of-distribution inputs when deployed in the open world.
no code implementations • 18 Mar 2023 • Shiyu Tian, Hongxin Wei, Yiqun Wang, Lei Feng
In this paper, we propose a new method called CroSel, which leverages historical predictions from the model to identify true labels for most training examples.
no code implementations • 8 Dec 2022 • Hongxin Wei, Huiping Zhuang, Renchunzi Xie, Lei Feng, Gang Niu, Bo An, Yixuan Li
In the presence of noisy labels, designing robust loss functions is critical for securing the generalization performance of deep neural networks.
3 code implementations • 17 Jun 2022 • Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, Bo An
Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.
1 code implementation • 30 May 2022 • Huiping Zhuang, Zhenyu Weng, Hongxin Wei, Renchunzi Xie, Kar-Ann Toh, Zhiping Lin
Class-incremental learning (CIL) learns a classification model with training data of different classes arising progressively.
2 code implementations • 19 May 2022 • Hongxin Wei, Renchunzi Xie, Hao Cheng, Lei Feng, Bo An, Yixuan Li
Our method is motivated by the observation that the logit norm keeps increasing during training, leading to overconfident outputs.
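The remedy this analysis suggests, normalizing logits before the softmax so that confidence is decoupled from logit magnitude, can be sketched as follows (a minimal pure-Python sketch; the temperature value is illustrative, not a recommended setting):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def logit_norm_softmax(logits, tau=0.04):
    """Rescale the logit vector to a fixed L2 norm (controlled by a
    temperature tau) before softmax, so that predicted confidence no
    longer grows with the raw logit magnitude."""
    norm = math.sqrt(sum(z * z for z in logits)) + 1e-7
    return softmax([z / (norm * tau) for z in logits])
```

With the plain softmax, scaling the logits up sharpens confidence toward 1; after normalization the output is invariant to that scaling, which is the overconfidence effect the analysis targets.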
1 code implementation • 31 Jan 2022 • Lue Tao, Lei Feng, Hongxin Wei, JinFeng Yi, Sheng-Jun Huang, Songcan Chen
Under this threat, we show that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting, where the non-robust features of the training data can be reinforced by $\epsilon$-bounded perturbation.
3 code implementations • 16 Jan 2022 • Renchunzi Xie, Hongxin Wei, Lei Feng, Bo An
Although there have been a few studies on this problem, most of them only exploit unidirectional relationships from the source domain to the target domain.
no code implementations • 17 Oct 2021 • Ziqi Zhang, Yuexiang Li, Hongxin Wei, Kai Ma, Tao Xu, Yefeng Zheng
Hard samples, which are beneficial for classifier learning, are often mistakenly treated as noise in such a setting, since both hard samples and samples with noisy labels incur larger loss values than easy ones.
no code implementations • 29 Sep 2021 • Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, Bo An
Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.
4 code implementations • NeurIPS 2021 • Hongxin Wei, Lue Tao, Renchunzi Xie, Bo An
Learning with noisy labels is a practically challenging problem in weakly supervised learning.
no code implementations • 23 Dec 2020 • Rundong Wang, Hongxin Wei, Bo An, Zhouyan Feng, Jun Yao
Portfolio management via reinforcement learning is at the forefront of fintech research, which explores how to optimally reallocate a fund into different financial assets over the long term by trial-and-error.
no code implementations • 9 Dec 2020 • Hongxin Wei, Lei Feng, Rundong Wang, Bo An
Deep neural networks have been shown to easily overfit to biased training data with label noise or class imbalance.
2 code implementations • CVPR 2020 • Hongxin Wei, Lei Feng, Xiangyu Chen, Bo An
The state-of-the-art approaches "Decoupling" and "Co-teaching+" claim that the "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels.
Ranked #10 on Learning with noisy labels on CIFAR-10N-Random3
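Approaches in this line of work build on the small-loss criterion: samples with small training loss are more likely to carry correct labels. A minimal sketch of that selection step (a hypothetical helper, not the full training procedure of either paper):

```python
def small_loss_selection(losses, forget_rate):
    """Keep the (1 - forget_rate) fraction of samples with the
    smallest loss, treating large-loss samples as likely label
    noise; returns the kept indices in ascending order."""
    n_keep = int(len(losses) * (1 - forget_rate))
    ranked = sorted(range(len(losses)), key=lambda i: losses[i])
    return sorted(ranked[:n_keep])
```

In co-training variants, each network selects small-loss samples to update its peer; the "disagreement" versus "agreement" debate concerns which samples the two networks should exchange.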