Search Results for author: Zhen Fang

Found 27 papers, 17 papers with code

On the Learnability of Out-of-distribution Detection

no code implementations · 7 Apr 2024 · Zhen Fang, Yixuan Li, Feng Liu, Bo Han, Jie Lu

Based on this observation, we next give several necessary and sufficient conditions to characterize the learnability of OOD detection in some practical scenarios.

Learning Theory · Out-of-Distribution Detection · +2

Negative Label Guided OOD Detection with Pretrained Vision-Language Models

1 code implementation · 29 Mar 2024 · Xue Jiang, Feng Liu, Zhen Fang, Hong Chen, Tongliang Liu, Feng Zheng, Bo Han

In this paper, we propose a novel post hoc OOD detection method, called NegLabel, which takes a vast number of negative labels from extensive corpus databases.

Out of Distribution (OOD) Detection
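
The idea admits a compact illustration. Below is a minimal NumPy sketch of a similarity-mass score over ID versus negative labels, assuming precomputed, L2-normalized CLIP-style embeddings; the names `img_emb`, `id_text_embs`, and `neg_text_embs` are hypothetical, and the paper's exact score and negative-label mining procedure may differ.

```python
import numpy as np

def neglabel_score(img_emb, id_text_embs, neg_text_embs, tau=0.01):
    """Fraction of similarity mass assigned to ID labels vs. negative labels.

    img_emb:       (d,)   L2-normalized image embedding
    id_text_embs:  (k, d) L2-normalized ID-label text embeddings
    neg_text_embs: (m, d) L2-normalized negative-label text embeddings
    Returns a value in (0, 1); higher suggests in-distribution.
    """
    sims = np.concatenate([id_text_embs @ img_emb,
                           neg_text_embs @ img_emb]) / tau
    sims -= sims.max()                 # numerical stability for the softmax
    exp_sims = np.exp(sims)
    return exp_sims[: id_text_embs.shape[0]].sum() / exp_sims.sum()
```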

NoiseDiffusion: Correcting Noise for Image Interpolation with Diffusion Models beyond Spherical Linear Interpolation

1 code implementation · 13 Mar 2024 · Pengfei Zheng, Yonggang Zhang, Zhen Fang, Tongliang Liu, Defu Lian, Bo Han

Hence, NoiseDiffusion performs interpolation within the noisy image space and injects raw images into these noisy counterparts to address the challenge of information loss.

Denoising
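
For context, latent interpolation with diffusion models is commonly done with spherical linear interpolation (slerp). The sketch below pairs a standard slerp with a naive version of the raw-image injection described in the abstract; the blending coefficients and the correction step are assumptions for illustration, not NoiseDiffusion's actual algorithm.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latents."""
    z0f, z1f = z0.ravel(), z1.ravel()
    cos = z0f @ z1f / (np.linalg.norm(z0f) * np.linalg.norm(z1f))
    omega = np.arccos(np.clip(cos, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * z0 + t * z1
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def noisy_interpolation(x0, x1, noise, t, alpha=0.9, inject=0.2):
    """Interpolate in the *noisy* image space, then inject the raw images
    to compensate for lost detail. All coefficients are hypothetical."""
    z_t = slerp(alpha * x0 + (1 - alpha) * noise,
                alpha * x1 + (1 - alpha) * noise, t)
    raw = (1 - t) * x0 + t * x1        # raw-image injection target
    return (1 - inject) * z_t + inject * raw
```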

ConjNorm: Tractable Density Estimation for Out-of-Distribution Detection

no code implementations · 27 Feb 2024 · Bo Peng, Yadan Luo, Yonggang Zhang, Yixuan Li, Zhen Fang

Extensive experiments across OOD detection benchmarks empirically demonstrate that our proposed ConjNorm establishes a new state of the art in a variety of OOD detection setups, outperforming the current best method by up to 13.25% and 28.19% (FPR95) on CIFAR-100 and ImageNet-1K, respectively.

Density Estimation · Out-of-Distribution Detection · +1
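
As a rough stand-in for the density-estimation view (not the paper's ConjNorm estimator), a p-norm-induced class-conditional score can be sketched as follows; `feat` and `class_means` are assumed to be a precomputed penultimate-layer feature and per-class feature means.

```python
import numpy as np

def pnorm_density_score(feat, class_means, p=2.0, temperature=1.0):
    """Generic p-norm, density-style OOD score: log-sum-exp of negative
    p-norm distances to class means (higher = more in-distribution).
    A simplified stand-in, not the exact ConjNorm estimator.

    feat:        (d,)   test feature
    class_means: (C, d) per-class feature means
    """
    dists = (np.abs(class_means - feat) ** p).sum(axis=1)  # (C,)
    neg_energy = -dists / temperature
    m = neg_energy.max()
    return m + np.log(np.exp(neg_energy - m).sum())        # stable log-sum-exp
```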

How Does Unlabeled Data Provably Help Out-of-Distribution Detection?

1 code implementation · 5 Feb 2024 · Xuefeng Du, Zhen Fang, Ilias Diakonikolas, Yixuan Li

Harnessing the power of unlabeled in-the-wild data is non-trivial due to the heterogeneity of both in-distribution (ID) and OOD data.

Out-of-Distribution Detection

MAG-Edit: Localized Image Editing in Complex Scenarios via Mask-Based Attention-Adjusted Guidance

no code implementations · 18 Dec 2023 · Qi Mao, Lan Chen, YuChao Gu, Zhen Fang, Mike Zheng Shou

Recent diffusion-based image editing approaches have exhibited impressive editing capabilities in images with simple compositions.

Out-of-distribution Detection Learning with Unreliable Out-of-distribution Sources

1 code implementation · NeurIPS 2023 · Haotian Zheng, Qizhou Wang, Zhen Fang, Xiaobo Xia, Feng Liu, Tongliang Liu, Bo Han

To this end, we suggest that generated data, even with mistaken OOD generation, can be used to devise an auxiliary OOD detection task that facilitates real OOD detection.

Out-of-Distribution Detection · Out of Distribution (OOD) Detection · +1

Learning to Augment Distributions for Out-of-Distribution Detection

1 code implementation · NeurIPS 2023 · Qizhou Wang, Zhen Fang, Yonggang Zhang, Feng Liu, Yixuan Li, Bo Han

Accordingly, we propose Distributional-Augmented OOD Learning (DAL), alleviating the OOD distribution discrepancy by crafting an OOD distribution set that contains all distributions in a Wasserstein ball centered on the auxiliary OOD distribution.

Learning Theory · Out-of-Distribution Detection
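
Optimizing over every distribution in a Wasserstein ball is intractable directly; a common surrogate in distributionally robust optimization is to adversarially perturb the auxiliary OOD samples within a norm ball. The PyTorch sketch below shows that surrogate; `model`, the radius `eps`, and the energy-style objective are assumptions, not DAL's exact procedure.

```python
import torch

def perturb_aux_ood(model, x_aux, eps=0.5, steps=5, lr=0.1):
    """Craft worst-case auxiliary OOD inputs inside an L-inf eps-ball,
    a common surrogate for searching a Wasserstein ball around the
    auxiliary OOD distribution (illustrative, not DAL's exact procedure)."""
    delta = torch.zeros_like(x_aux, requires_grad=True)
    for _ in range(steps):
        # Ascend an energy-style "ID-ness" score so the perturbed OOD
        # samples become maximally hard for the detector.
        score = torch.logsumexp(model(x_aux + delta), dim=1).mean()
        score.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()
            delta.clamp_(-eps, eps)    # stay inside the ball
            delta.grad.zero_()
    return (x_aux + delta).detach()
```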

Continual Named Entity Recognition without Catastrophic Forgetting

1 code implementation · 23 Oct 2023 · Duzhen Zhang, Wei Cong, Jiahua Dong, Yahan Yu, Xiuyi Chen, Yonggang Zhang, Zhen Fang

This issue is intensified in CNER due to the consolidation of old entity types from previous steps into the non-entity type at each step, leading to what is known as the semantic shift problem of the non-entity type.

Continual Named Entity Recognition · named-entity-recognition · +1

Invariant Learning via Probability of Sufficient and Necessary Causes

1 code implementation · NeurIPS 2023 · Mengyue Yang, Zhen Fang, Yonggang Zhang, Yali Du, Furui Liu, Jean-Francois Ton, Jianhong Wang, Jun Wang

To capture the information of sufficient and necessary causes, we employ a classical concept, the probability of sufficient and necessary causes (PNS), which measures the probability that a factor is both a necessary and a sufficient cause.
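
PNS is a classical quantity from Pearl's causality literature, and its textbook bounds in terms of interventional probabilities are simple to compute. The sketch below implements those standard bounds; the paper's use of PNS for invariant learning builds on, but goes beyond, this.

```python
def pns_bounds(p_y_do_x, p_y_do_xp):
    """Tian & Pearl's bounds on the probability of necessity and
    sufficiency from interventional probabilities:
        max(0, P(y|do(x)) - P(y|do(x'))) <= PNS
        PNS <= min(P(y|do(x)), 1 - P(y|do(x')))
    """
    lower = max(0.0, p_y_do_x - p_y_do_xp)
    upper = min(p_y_do_x, 1.0 - p_y_do_xp)
    return lower, upper

# Example: P(y|do(x)) = 0.9, P(y|do(x')) = 0.2  ->  (0.7, 0.8)
print(pns_bounds(0.9, 0.2))
```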

Meta OOD Learning for Continuously Adaptive OOD Detection

no code implementations · ICCV 2023 · Xinheng Wu, Jie Lu, Zhen Fang, Guangquan Zhang

To address CAOOD, we develop meta OOD learning (MOL) by designing a learning-to-adapt paradigm such that a well-initialized OOD detection model is learned during the training process.

Out of Distribution (OOD) Detection

Alioth: A Machine Learning Based Interference-Aware Performance Monitor for Multi-Tenancy Applications in Public Cloud

1 code implementation · 18 Jul 2023 · Tianyao Shi, Yingxuan Yang, Yunlong Cheng, Xiaofeng Gao, Zhen Fang, Yongqiang Yang

Multi-tenancy in public clouds may lead to co-location interference on shared resources, which possibly results in performance degradation of cloud applications.

Decision Making · Denoising · +3

KECOR: Kernel Coding Rate Maximization for Active 3D Object Detection

no code implementations · ICCV 2023 · Yadan Luo, Zhuoxiao Chen, Zhen Fang, Zheng Zhang, Zi Huang, Mahsa Baktashmotlagh

Achieving a reliable LiDAR-based object detector in autonomous driving is paramount, but its success hinges on obtaining large amounts of precise 3D annotations.

3D Object Detection · Active Learning · +4
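
The kernel coding rate is closely related to the log-det coding rate of Ma et al. A greedy selector built on that quantity might look like the sketch below, where `K` is an assumed precomputed kernel (Gram) matrix over candidate samples; this is an illustration, not the paper's exact acquisition rule.

```python
import numpy as np

def coding_rate(K, eps=0.1):
    """Log-det coding rate of a kernel (Gram) matrix, following Ma et al."""
    n = K.shape[0]
    _, logdet = np.linalg.slogdet(np.eye(n) + K / (n * eps**2))
    return 0.5 * logdet

def greedy_select(K, budget):
    """Greedily pick sample indices whose kernel submatrix has the
    highest coding rate (an illustrative acquisition rule)."""
    selected, remaining = [], list(range(K.shape[0]))
    for _ in range(budget):
        gains = [coding_rate(K[np.ix_(selected + [j], selected + [j])])
                 for j in remaining]
        best = remaining[int(np.argmax(gains))]
        selected.append(best)
        remaining.remove(best)
    return selected
```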

Continuous and Noninvasive Measurement of Arterial Pulse Pressure and Pressure Waveform using an Image-free Ultrasound System

no code implementations · 29 May 2023 · Lirui Xu, Pang Wu, Pan Xia, Fanglin Geng, Peng Wang, Xianxiang Chen, Zhenfeng Li, Lidong Du, Shuping Liu, Li Li, Hongbo Chang, Zhen Fang

In in vitro cardiovascular phantom experiments, the results demonstrated high accuracy in the measurement of PP (error < 3 mmHg) and blood pressure waveform (root-mean-square error (RMSE) < 2 mmHg, correlation coefficient (r) > 0.99).

Moderately Distributional Exploration for Domain Generalization

1 code implementation · 27 Apr 2023 · Rui Dai, Yonggang Zhang, Zhen Fang, Bo Han, Xinmei Tian

We show that MODE can endow models with provable generalization performance on unknown target domains.

Domain Generalization

Detecting Out-of-distribution Data through In-distribution Class Prior

1 code implementation · ICML 2023 · Xue Jiang, Feng Liu, Zhen Fang, Hong Chen, Tongliang Liu, Feng Zheng, Bo Han

In this paper, we show that this assumption renders the above methods ineffective when the ID model is trained with class-imbalanced data. Fortunately, by analyzing the causal relations between ID/OOD classes and features, we identify several common scenarios where the OOD-to-ID probabilities should follow the ID-class-prior distribution, and we propose two strategies to modify existing inference-time detection methods: 1) replace the uniform distribution with the ID-class-prior distribution if a method explicitly uses the uniform distribution; 2) otherwise, reweight its scores according to the similarity between the ID-class-prior distribution and the softmax outputs of the pre-trained model.

Out-of-Distribution Detection · Out of Distribution (OOD) Detection
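
The second strategy above has a particularly simple form. Below is a hedged sketch: scale an existing OOD score by the similarity between the model's softmax output and the ID class prior; the inner product is one plausible similarity choice, not necessarily the paper's.

```python
import numpy as np

def prior_reweighted_score(softmax_probs, class_prior, base_score):
    """Scale an existing OOD score (e.g., maximum softmax probability)
    by the similarity between the softmax output and the ID class prior.
    Inner product used here as one plausible similarity measure."""
    return base_score * float(softmax_probs @ class_prior)

# Usage with a hypothetical 3-class model and an imbalanced ID prior:
probs = np.array([0.7, 0.2, 0.1])
prior = np.array([0.6, 0.3, 0.1])
print(prior_reweighted_score(probs, prior, base_score=probs.max()))
```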

Is Out-of-Distribution Detection Learnable?

no code implementations · 26 Oct 2022 · Zhen Fang, Yixuan Li, Jie Lu, Jiahua Dong, Bo Han, Feng Liu

Based on this observation, we next give several necessary and sufficient conditions to characterize the learnability of OOD detection in some practical scenarios.

Learning Theory · Out-of-Distribution Detection · +2

Multi-class Classification with Fuzzy-feature Observations: Theory and Algorithms

1 code implementation · 9 Jun 2022 · Guangzhi Ma, Jie Lu, Feng Liu, Zhen Fang, Guangquan Zhang

Hence, in this paper, we propose a novel framework to address a new realistic problem called multi-class classification with imprecise observations (MCIMO), where we need to train a classifier with fuzzy-feature observations.

Classification · Multi-class Classification

Federated Class-Incremental Learning

1 code implementation · CVPR 2022 · Jiahua Dong, Lixu Wang, Zhen Fang, Gan Sun, Shichao Xu, Xiao Wang, Qi Zhu

It makes the global model suffer from significant catastrophic forgetting on old classes in real-world scenarios, where local clients often collect new classes continuously and have very limited storage memory to store old classes.

Class Incremental Learning · Federated Learning · +1

Confident Anchor-Induced Multi-Source Free Domain Adaptation

1 code implementation · NeurIPS 2021 · Jiahua Dong, Zhen Fang, Anjin Liu, Gan Sun, Tongliang Liu

To address these challenges, we develop a novel Confident-Anchor-induced multi-source-free Domain Adaptation (CAiDA) model, which is a pioneer exploration of knowledge adaptation from multiple source domains to the unlabeled target domain without any source data, but with only pre-trained source models.

Pseudo Label · Source-Free Domain Adaptation · +1

Learning Bounds for Open-Set Learning

1 code implementation · 30 Jun 2021 · Zhen Fang, Jie Lu, Anjin Liu, Feng Liu, Guangquan Zhang

In this paper, we target a more challenging and realistic setting: open-set learning (OSL), where there exist test samples from the classes that are unseen during training.

Learning Theory · Open Set Learning · +1

How does the Combined Risk Affect the Performance of Unsupervised Domain Adaptation Approaches?

no code implementations · 30 Dec 2020 · Li Zhong, Zhen Fang, Feng Liu, Jie Lu, Bo Yuan, Guangquan Zhang

Experiments show that the proxy can effectively curb the increase of the combined risk when minimizing the source risk and distribution discrepancy.

Unsupervised Domain Adaptation

Learning from a Complementary-label Source Domain: Theory and Algorithms

1 code implementation · 4 Aug 2020 · Yiyang Zhang, Feng Liu, Zhen Fang, Bo Yuan, Guangquan Zhang, Jie Lu

We consider two cases of this setting, one is that the source domain only contains complementary-label data (completely complementary unsupervised domain adaptation, CC-UDA), and the other is that the source domain has plenty of complementary-label data and a small amount of true-label data (partly complementary unsupervised domain adaptation, PC-UDA).

Unsupervised Domain Adaptation
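
A complementary label specifies a class an instance does not belong to. One standard way to learn from such labels, in the negative-learning style, is sketched below in PyTorch; the CC-UDA and PC-UDA objectives in the paper may use a different risk estimator.

```python
import torch
import torch.nn.functional as F

def complementary_loss(logits, comp_labels):
    """Negative-learning loss for complementary labels: push down the
    probability of the class each instance is known NOT to belong to.

    logits:      (B, C) raw model outputs
    comp_labels: (B,)   long tensor of complementary class indices
    """
    probs = F.softmax(logits, dim=1)
    p_comp = probs.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_comp + 1e-8).mean()
```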

Clarinet: A One-step Approach Towards Budget-friendly Unsupervised Domain Adaptation

1 code implementation · 29 Jul 2020 · Yiyang Zhang, Feng Liu, Zhen Fang, Bo Yuan, Guangquan Zhang, Jie Lu

To mitigate this problem, we consider a novel problem setting, budget-friendly UDA (BFUDA), where the classifier for the target domain has to be trained with complementary-label data from the source domain and unlabeled data from the target domain.

Unsupervised Domain Adaptation

Bridging the Theoretical Bound and Deep Algorithms for Open Set Domain Adaptation

no code implementations · 23 Jun 2020 · Li Zhong, Zhen Fang, Feng Liu, Bo Yuan, Guangquan Zhang, Jie Lu

To achieve this aim, a previous study has proven an upper bound of the target-domain risk, and the open set difference, as an important term in the upper bound, is used to measure the risk on unknown target data.

Domain Adaptation · Object Recognition

Open Set Domain Adaptation: Theoretical Bound and Algorithm

1 code implementation · 19 Jul 2019 · Zhen Fang, Jie Lu, Feng Liu, Junyu Xuan, Guangquan Zhang

The aim of unsupervised domain adaptation is to leverage the knowledge in a labeled (source) domain to improve a model's learning performance with an unlabeled (target) domain -- the basic strategy being to mitigate the effects of discrepancies between the two distributions.

Unsupervised Domain Adaptation
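
The "basic strategy" of mitigating distribution discrepancy is typically instantiated with an explicit discrepancy measure. A minimal example is the squared maximum mean discrepancy (MMD) with an RBF kernel, one standard choice; the paper itself derives its own theoretical bound rather than prescribing MMD.

```python
import numpy as np

def rbf_mmd2(Xs, Xt, gamma=1.0):
    """Biased estimate of squared MMD between source samples Xs (n, d)
    and target samples Xt (m, d) under an RBF kernel, one standard
    measure of distribution discrepancy."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2.0 * k(Xs, Xt).mean()
```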
