Search Results for author: Hongxin Wei

Found 26 papers, 13 papers with code

G-ACIL: Analytic Learning for Exemplar-Free Generalized Class Incremental Learning

1 code implementation • 23 Mar 2024 • Huiping Zhuang, Yizhu Chen, Di Fang, Run He, Kai Tong, Hongxin Wei, Ziqian Zeng, Cen Chen

Generalized CIL (GCIL) aims to address the CIL problem in a more realistic scenario, where incoming data have mixed categories and unknown sample-size distributions, leading to intensified forgetting.

Class Incremental Learning • Incremental Learning

Learning with Noisy Foundation Models

no code implementations • 11 Mar 2024 • Hao Chen, Jindong Wang, Zihan Wang, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj

Foundation models are usually pre-trained on large-scale datasets and then adapted to downstream tasks through tuning.

TorchCP: A Library for Conformal Prediction based on PyTorch

1 code implementation • 20 Feb 2024 • Hongxin Wei, Jianguo Huang

TorchCP is a Python toolbox for conformal prediction research on deep learning models.

Conformal Prediction • Regression

Exploring Learning Complexity for Downstream Data Pruning

no code implementations • 8 Feb 2024 • Wenyu Jiang, Zhenlong Liu, Zejian Xie, Songxin Zhang, BingYi Jing, Hongxin Wei

In this paper, we propose to treat the learning complexity (LC) as the scoring function for classification and regression tasks.

Informativeness

Mitigating Privacy Risk in Membership Inference by Convex-Concave Loss

no code implementations • 8 Feb 2024 • Zhenlong Liu, Lei Feng, Huiping Zhuang, Xiaofeng Cao, Hongxin Wei

In this work, we propose a novel method -- Convex-Concave Loss, which enables a high variance of training loss distribution by gradient descent.

Open-Vocabulary Calibration for Vision-Language Models

no code implementations • 7 Feb 2024 • Shuoyuan Wang, Jindong Wang, Guoqing Wang, Bob Zhang, Kaiyang Zhou, Hongxin Wei

Vision-language models (VLMs) have emerged as formidable tools, showing their strong capability in handling various open-vocabulary tasks in image recognition, text-driven visual content generation, and visual chatbots, to name a few.

Does Confidence Calibration Help Conformal Prediction?

no code implementations • 6 Feb 2024 • Huajun Xi, Jianguo Huang, Lei Feng, Hongxin Wei

Conformal prediction, an emerging uncertainty quantification technique, constructs prediction sets that are guaranteed to contain the true label with high probability.

Conformal Prediction
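The coverage guarantee described above comes from the split conformal recipe, which fits in a few lines of numpy. This is a minimal sketch, assuming the common score s(x, y) = 1 − p_y(x); the function name and the choice of score are illustrative, not the paper's exact method:

```python
import numpy as np

def split_conformal(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets from softmax probabilities.

    cal_probs:  (n, K) probabilities on a held-out calibration set
    cal_labels: (n,)   true labels for the calibration set
    test_probs: (m, K) probabilities for new inputs
    Returns a list of prediction sets (arrays of label indices).
    """
    n = len(cal_labels)
    # Nonconformity score: one minus the probability of the true label.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Conformal quantile with the finite-sample correction (n+1)(1-alpha)/n.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    # Include every label whose score does not exceed the threshold.
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]
```

Under exchangeability of calibration and test points, the returned sets contain the true label with probability at least 1 − alpha, regardless of how good the underlying classifier is.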

Refined Coreset Selection: Towards Minimal Coreset Size under Model Performance Constraints

no code implementations • 15 Nov 2023 • Xiaobo Xia, Jiale Liu, Shaokun Zhang, Qingyun Wu, Hongxin Wei, Tongliang Liu

Coreset selection is powerful in reducing computational costs and accelerating data processing for deep learning algorithms.

Optimization-Free Test-Time Adaptation for Cross-Person Activity Recognition

1 code implementation • 28 Oct 2023 • Shuoyuan Wang, Jindong Wang, Huajun Xi, Bob Zhang, Lei Zhang, Hongxin Wei

However, the high computational cost of optimization-based TTA algorithms makes it intractable to run on resource-constrained edge devices.

Computational Efficiency • Human Activity Recognition • +2

Conformal Prediction for Deep Classifier via Label Ranking

2 code implementations • 10 Oct 2023 • Jianguo Huang, Huajun Xi, Linjun Zhang, Huaxiu Yao, Yue Qiu, Hongxin Wei

In this paper, we empirically and theoretically show that disregarding the probabilities' value will mitigate the undesirable effect of miscalibrated probability values.

Conformal Prediction

Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks

no code implementations • 29 Sep 2023 • Hao Chen, Jindong Wang, Ankit Shah, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj

This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.

A Generalized Unbiased Risk Estimator for Learning with Augmented Classes

1 code implementation • 12 Jun 2023 • Senlin Shu, Shuo He, Haobo Wang, Hongxin Wei, Tao Xiang, Lei Feng

In this paper, we propose a generalized URE that can be equipped with arbitrary loss functions while maintaining the theoretical guarantees, given unlabeled data for LAC.

Multi-class Classification

DOS: Diverse Outlier Sampling for Out-of-Distribution Detection

1 code implementation • 3 Jun 2023 • Wenyu Jiang, Hao Cheng, Mingcai Chen, Chongjun Wang, Hongxin Wei

Modern neural networks are known to give overconfident predictions for out-of-distribution inputs when deployed in the open world.

Out-of-Distribution Detection

CroSel: Cross Selection of Confident Pseudo Labels for Partial-Label Learning

no code implementations • 18 Mar 2023 • Shiyu Tian, Hongxin Wei, Yiqun Wang, Lei Feng

In this paper, we propose a new method called CroSel, which leverages historical predictions from the model to identify true labels for most training examples.

Partial Label Learning • Weakly-supervised Learning

Mitigating Memorization of Noisy Labels by Clipping the Model Prediction

no code implementations • 8 Dec 2022 • Hongxin Wei, Huiping Zhuang, Renchunzi Xie, Lei Feng, Gang Niu, Bo An, Yixuan Li

In the presence of noisy labels, designing robust loss functions is critical for securing the generalization performance of deep neural networks.

Memorization

Open-Sampling: Exploring Out-of-Distribution data for Re-balancing Long-tailed datasets

3 code implementations • 17 Jun 2022 • Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, Bo An

Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.

Mitigating Neural Network Overconfidence with Logit Normalization

2 code implementations • 19 May 2022 • Hongxin Wei, Renchunzi Xie, Hao Cheng, Lei Feng, Bo An, Yixuan Li

Our method is motivated by the analysis that the norm of the logit keeps increasing during training, leading to overconfident output.
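The observation above, that an ever-growing logit norm drives overconfidence, suggests normalizing logits before the usual cross-entropy. A minimal numpy sketch of that idea (the temperature tau = 0.04 and the epsilon are illustrative values, not necessarily the paper's settings):

```python
import numpy as np

def logitnorm_cross_entropy(logits, labels, tau=0.04):
    """Cross-entropy on L2-normalized logits (the logit-normalization idea).

    logits: (n, K) raw network outputs, labels: (n,) integer class indices.
    """
    # Divide each logit vector by its norm (times a temperature), so the
    # loss can no longer be reduced by simply scaling the logits up.
    norms = np.linalg.norm(logits, axis=1, keepdims=True) + 1e-7
    z = logits / (norms * tau)
    # Numerically stable log-softmax.
    z = z - z.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Because the normalized logits are invariant to rescaling, inflating logit magnitudes, the overconfidence mechanism described above, no longer decreases the loss.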

Can Adversarial Training Be Manipulated By Non-Robust Features?

1 code implementation • 31 Jan 2022 • Lue Tao, Lei Feng, Hongxin Wei, JinFeng Yi, Sheng-Jun Huang, Songcan Chen

Under this threat, we show that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting, where the non-robust features of the training data can be reinforced by $\epsilon$-bounded perturbation.

GearNet: Stepwise Dual Learning for Weakly Supervised Domain Adaptation

3 code implementations • 16 Jan 2022 • Renchunzi Xie, Hongxin Wei, Lei Feng, Bo An

Although there have been a few studies on this problem, most of them only exploit unidirectional relationships from the source domain to the target domain.

Domain Adaptation

Alleviating Noisy-label Effects in Image Classification via Probability Transition Matrix

no code implementations • 17 Oct 2021 • Ziqi Zhang, Yuexiang Li, Hongxin Wei, Kai Ma, Tao Xu, Yefeng Zheng

Hard samples, which are beneficial for classifier learning, are often mistakenly treated as noise in such a setting, since both hard samples and noisily labeled ones lead to relatively larger loss values than easy cases.

Image Classification

Open-sampling: Re-balancing Long-tailed Datasets with Out-of-Distribution Data

no code implementations • 29 Sep 2021 • Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, Bo An

Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.

Deep Stock Trading: A Hierarchical Reinforcement Learning Framework for Portfolio Optimization and Order Execution

no code implementations • 23 Dec 2020 • Rundong Wang, Hongxin Wei, Bo An, Zhouyan Feng, Jun Yao

Portfolio management via reinforcement learning is at the forefront of fintech research, which explores how to optimally reallocate a fund into different financial assets over the long term by trial-and-error.

Hierarchical Reinforcement Learning • Management • +2

MetaInfoNet: Learning Task-Guided Information for Sample Reweighting

no code implementations • 9 Dec 2020 • Hongxin Wei, Lei Feng, Rundong Wang, Bo An

Deep neural networks have been shown to easily overfit to biased training data with label noise or class imbalance.

Meta-Learning

Combating noisy labels by agreement: A joint training method with co-regularization

2 code implementations • CVPR 2020 • Hongxin Wei, Lei Feng, Xiangyu Chen, Bo An

The state-of-the-art approaches "Decoupling" and "Co-teaching+" claim that the "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels.

Learning with noisy labels • Weakly-supervised Learning
