no code implementations • 6 May 2024 • Jose Blanchet, Peng Cui, Jiajin Li, Jiashuo Liu
Empirically, we validate the practical utility of our stability evaluation criterion across a host of real-world applications.
1 code implementation • 11 Mar 2024 • Fengda Zhang, Qianpei He, Kun Kuang, Jiashuo Liu, Long Chen, Chao Wu, Jun Xiao, Hanwang Zhang
This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.
no code implementations • 11 Mar 2024 • Yingtian Zou, Kenji Kawaguchi, Yingnan Liu, Jiashuo Liu, Mong-Li Lee, Wynne Hsu
To bridge this gap between optimization and OOD generalization, we study how sharpness affects a model's tolerance to data change under domain shift, a property usually captured by "robustness" in the generalization literature.
no code implementations • 4 Mar 2024 • Han Yu, Jiashuo Liu, Xingxuan Zhang, Jiayun Wu, Peng Cui
In closing, we propose several promising directions for future research in OOD evaluation.
no code implementations • 8 Nov 2023 • Jiashuo Liu, Jiayun Wu, Tianyu Wang, Hao Zou, Bo Li, Peng Cui
Machine learning algorithms minimizing average risk are susceptible to distributional shifts.
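The fragility described here can be seen in a few lines. The following sketch (entirely synthetic, for illustration only; not the paper's method) fits a least-squares model that minimizes average risk over a mixed population and shows that it performs far worse on a minority subgroup whose input-output relationship differs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: a majority group (90%) and a minority group (10%)
# whose input-output relationships differ (slope +1 vs slope -1).
n_maj, n_min = 900, 100
x_maj = rng.normal(size=n_maj); y_maj = 1.0 * x_maj
x_min = rng.normal(size=n_min); y_min = -1.0 * x_min
x = np.concatenate([x_maj, x_min]); y = np.concatenate([y_maj, y_min])

# ERM-style fit: least squares minimizing the *average* risk over the mixture.
w = (x @ y) / (x @ x)

avg_risk = np.mean((w * x - y) ** 2)
min_risk = np.mean((w * x_min - y_min) ** 2)  # risk on the minority group alone
print(f"average risk {avg_risk:.2f}, minority-group risk {min_risk:.2f}")
```

The fitted slope lands near the majority's, so the average risk looks small while the minority-group risk is an order of magnitude larger — exactly the kind of subpopulation shift that breaks average-risk minimizers.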
no code implementations • 28 Jun 2023 • Didi Zhu, Zexi Li, Min Zhang, Junkun Yuan, Yunfeng Shao, Jiashuo Liu, Kun Kuang, Yinchuan Li, Chao Wu
It is found that the NC optimality of text-to-image representations correlates positively with downstream generalizability, an effect that is more pronounced under class-imbalanced settings.
no code implementations • 25 May 2023 • Zheyan Shen, Han Yu, Peng Cui, Jiashuo Liu, Xingxuan Zhang, Linjun Zhou, Furui Liu
Moreover, we propose a Meta Adaptive Task Sampling (MATS) procedure to differentiate base tasks according to their semantic and domain-shift similarity to the novel task.
no code implementations • 24 May 2023 • Han Yu, Xingxuan Zhang, Renzhe Xu, Jiashuo Liu, Yue He, Peng Cui
This paper examines the risks of test data information leakage from two aspects of the current evaluation protocol: supervised pretraining on ImageNet and oracle model selection.
no code implementations • 21 May 2023 • Zimu Wang, Jiashuo Liu, Hao Zou, Xingxuan Zhang, Yue He, Dongxu Liang, Peng Cui
In this work, we focus on two representative categories of heterogeneity in recommendation data, namely heterogeneity in prediction mechanisms and in covariate distributions, and propose an algorithm that explores this heterogeneity through a bilevel clustering method.
no code implementations • 1 Apr 2023 • Jiashuo Liu, Jiayun Wu, Bo Li, Peng Cui
As an intrinsic and fundamental property of big data, data heterogeneity exists in a variety of real-world applications, such as precision medicine, autonomous driving, and financial applications.
1 code implementation • 7 Jun 2022 • Jiashuo Liu, Jiayun Wu, Jie Peng, Xiaoyu Wu, Yang Zheng, Bo Li, Peng Cui
This work studies distribution shifts in prediction mechanisms ($Y|X$-shifts).
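What a $Y|X$-shift means can be illustrated with a toy example (synthetic, not the benchmark in the paper): two environments share the same covariate distribution $P(X)$, but the conditional mechanism $P(Y|X)$ differs, so a model fit on one fails on the other:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two environments with the SAME covariate distribution P(X) ...
x_src = rng.normal(size=2000)
x_tgt = rng.normal(size=2000)

# ... but DIFFERENT conditional mechanisms P(Y|X): the sign of the
# relationship flips between source and target (a Y|X-shift).
y_src = 2.0 * x_src + rng.normal(scale=0.1, size=2000)
y_tgt = -2.0 * x_tgt + rng.normal(scale=0.1, size=2000)

# Fit on source, evaluate on both.
w = (x_src @ y_src) / (x_src @ x_src)
mse_src = np.mean((w * x_src - y_src) ** 2)
mse_tgt = np.mean((w * x_tgt - y_tgt) ** 2)
print(f"source MSE {mse_src:.2f}, target MSE {mse_tgt:.2f}")
```

Since the covariates alone look identical across environments, no amount of covariate-shift correction (e.g. importance weighting on $X$) can fix this failure mode — which is why $Y|X$-shifts are treated as a distinct category.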
no code implementations • 27 Mar 2022 • Xingxuan Zhang, Zekai Xu, Renzhe Xu, Jiashuo Liu, Peng Cui, Weitao Wan, Chong Sun, Chen Li
Despite the striking performance achieved by modern detectors when training and test data are sampled from the same or similar distributions, the generalization ability of detectors under unknown distribution shifts remains largely unexplored.
no code implementations • NeurIPS 2021 • Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen
The ability to generalize under distributional shifts is essential to reliable machine learning, while models optimized with empirical risk minimization usually fail on non-i.i.d. test data.
1 code implementation • 24 Oct 2021 • Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen
The ability to generalize under distributional shifts is essential to reliable machine learning, while models optimized with empirical risk minimization usually fail on non-i.i.d. test data.
no code implementations • 31 Aug 2021 • Jiashuo Liu, Zheyan Shen, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, Peng Cui
This paper presents the first comprehensive, systematic review of OOD generalization, spanning problem definition, methodological development, and evaluation procedures, as well as the implications and future directions of the field.
no code implementations • 30 Jun 2021 • Jiashuo Liu, Zheyan Shen, Peng Cui, Linjun Zhou, Kun Kuang, Bo Li
In this paper, we propose a novel Stable Adversarial Learning (SAL) algorithm that leverages heterogeneous data sources to construct a more practical uncertainty set and conduct differentiated robustness optimization, where covariates are differentiated according to the stability of their correlations with the target.
1 code implementation • 9 May 2021 • Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen
In this paper, we propose the Heterogeneous Risk Minimization (HRM) framework to jointly learn the latent heterogeneity among the data and the invariant relationships, which leads to stable prediction despite distributional shifts.
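The flavor of exploiting heterogeneity for invariance can be sketched in a toy form. The snippet below is an illustrative simplification, not the HRM algorithm: it assumes the latent environments have already been identified (HRM's contribution is precisely recovering them without labels), and then checks which feature's coefficient is stable across environments:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_env(n, spurious_sign):
    """Synthetic environment: feature 0 is stable (drives y),
    feature 1 is spurious (its relation to y flips across environments)."""
    x1 = rng.normal(size=n)
    y = x1 + rng.normal(scale=0.1, size=n)
    x2 = spurious_sign * y + rng.normal(scale=0.1, size=n)
    return np.column_stack([x1, x2]), y

envs = [make_env(1000, +1.0), make_env(1000, -1.0)]

# Per-environment least-squares coefficients.
coefs = []
for X, y in envs:
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    coefs.append(w)
coefs = np.array(coefs)

# A feature behaves invariantly if its coefficient is stable across
# environments: small spread for feature 0, large spread for feature 1.
spread = coefs.max(axis=0) - coefs.min(axis=0)
print("coefficient spread per feature:", spread)
```

A pooled model fit on the union of both environments would lean on the spurious feature; the cross-environment spread exposes it, which is the basic signal invariance-based methods build on.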
no code implementations • 8 Jun 2020 • Jiashuo Liu, Zheyan Shen, Peng Cui, Linjun Zhou, Kun Kuang, Bo Li, Yishi Lin
Machine learning algorithms with empirical risk minimization are vulnerable under distributional shifts due to the greedy adoption of all the correlations found in training data.
1 code implementation • 20 Dec 2019 • Chongxuan Li, Kun Xu, Jiashuo Liu, Jun Zhu, Bo Zhang
It is formulated as a three-player minimax game consisting of a generator, a classifier and a discriminator, and therefore is referred to as Triple Generative Adversarial Network (Triple-GAN).