Search Results for author: Jiashuo Liu

Found 18 papers, 5 papers with code

Distributionally Generative Augmentation for Fair Facial Attribute Classification

1 code implementation · 11 Mar 2024 · Fengda Zhang, Qianpei He, Kun Kuang, Jiashuo Liu, Long Chen, Chao Wu, Jun Xiao, Hanwang Zhang

This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.

Attribute Classification +2

Towards Robust Out-of-Distribution Generalization Bounds via Sharpness

no code implementations · 11 Mar 2024 · Yingtian Zou, Kenji Kawaguchi, Yingnan Liu, Jiashuo Liu, Mong-Li Lee, Wynne Hsu

To bridge this gap between optimization and OOD generalization, we study how sharpness affects a model's tolerance to data changes under domain shift, which is usually captured by "robustness" in generalization.
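Sharpness can be made concrete with a toy probe: measure how much the training loss rises under small random weight perturbations around a fitted model. The regression problem, perturbation radii, and grid of random directions below are all assumptions for the sketch, not the paper's bound or measure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear regression; "sharpness" here is the largest loss increase
# under a weight perturbation of fixed norm -- a crude, grid-based proxy
# for sharpness measures used in the literature, not the paper's quantity.
X = rng.normal(0, 1, (200, 2))
w_true = np.array([1.0, -1.0])
y = X @ w_true + rng.normal(0, 0.05, 200)

def loss(w):
    return np.mean((y - X @ w) ** 2)

w_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # fitted minimizer

def sharpness(w, rho, n_dirs=256):
    """Max loss increase over random perturbation directions of norm rho."""
    base = loss(w)
    dirs = rng.normal(size=(n_dirs, 2))
    dirs = rho * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    return max(loss(w + d) - base for d in dirs)

s_small = sharpness(w_hat, rho=0.01)
s_large = sharpness(w_hat, rho=0.1)
# A larger perturbation radius exposes more curvature of the loss surface.
```

Since the loss is quadratic in `w` and `w_hat` minimizes it, every perturbation increases the loss, and the increase grows with the radius.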

Generalization Bounds Out-of-Distribution Generalization

Geometry-Calibrated DRO: Combating Over-Pessimism with Free Energy Implications

no code implementations · 8 Nov 2023 · Jiashuo Liu, Jiayun Wu, Tianyu Wang, Hao Zou, Bo Li, Peng Cui

Machine learning algorithms minimizing average risk are susceptible to distributional shifts.
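The susceptibility of average-risk minimization can be seen in a small numeric contrast with a distributionally robust objective. The sketch below uses plain group DRO over two hypothetical subpopulations, not the paper's geometry-calibrated method; the data-generating slopes and group weights are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two subpopulations: the majority has a stable feature-label relation,
# the minority has the opposite sign. Average risk is dominated by the
# majority, so ERM ignores the minority group.
X_major = rng.normal(0, 1, 900)
y_major = 2.0 * X_major + rng.normal(0, 0.1, 900)
X_minor = rng.normal(0, 1, 100)
y_minor = -2.0 * X_minor + rng.normal(0, 0.1, 100)

def group_losses(w):
    """Mean squared error of slope w on each subpopulation."""
    l_major = np.mean((y_major - w * X_major) ** 2)
    l_minor = np.mean((y_minor - w * X_minor) ** 2)
    return l_major, l_minor

ws = np.linspace(-3, 3, 601)
avg = [0.9 * group_losses(w)[0] + 0.1 * group_losses(w)[1] for w in ws]
worst = [max(group_losses(w)) for w in ws]

w_erm = ws[int(np.argmin(avg))]    # minimizes population-average risk
w_dro = ws[int(np.argmin(worst))]  # minimizes worst-group risk
# ERM leans toward the majority slope; DRO hedges between the groups.
```

If the test distribution reweights toward the minority group, the average-risk solution degrades sharply while the worst-group solution does not.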

Understanding Prompt Tuning for V-L Models Through the Lens of Neural Collapse

no code implementations · 28 Jun 2023 · Didi Zhu, Zexi Li, Min Zhang, Junkun Yuan, Yunfeng Shao, Jiashuo Liu, Kun Kuang, Yinchuan Li, Chao Wu

It is found that the NC optimality of text-to-image representations correlates positively with downstream generalizability, an effect that is more pronounced under class-imbalanced settings.
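The degree of neural collapse can be quantified with a simple statistic: the within-class scatter of features relative to their between-class scatter (an NC1-style ratio, near zero for collapsed features). The toy 2-D features below are assumptions for illustration, not the paper's vision-language setup:

```python
import numpy as np

rng = np.random.default_rng(7)

def nc1(features, labels):
    """Within-class scatter divided by between-class scatter (smaller = more collapsed)."""
    global_mean = features.mean(axis=0)
    within, between = 0.0, 0.0
    for c in np.unique(labels):
        f = features[labels == c]
        mu = f.mean(axis=0)
        within += np.sum((f - mu) ** 2)
        between += len(f) * np.sum((mu - global_mean) ** 2)
    return within / between

labels = np.repeat([0, 1, 2], 100)
means = np.array([[5.0, 0.0], [0.0, 5.0], [-5.0, -5.0]])
tight = means[labels] + rng.normal(0, 0.1, (300, 2))   # near-collapsed features
loose = means[labels] + rng.normal(0, 3.0, (300, 2))   # spread-out features

nc_tight = nc1(tight, labels)
nc_loose = nc1(loose, labels)
# The tight features score far closer to zero, i.e. stronger collapse.
```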

Meta Adaptive Task Sampling for Few-Domain Generalization

no code implementations · 25 May 2023 · Zheyan Shen, Han Yu, Peng Cui, Jiashuo Liu, Xingxuan Zhang, Linjun Zhou, Furui Liu

Moreover, we propose a Meta Adaptive Task Sampling (MATS) procedure to differentiate base tasks according to their semantic and domain-shift similarity to the novel task.
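Differentiating base tasks by similarity to the novel task can be sketched as similarity-weighted sampling. The per-task feature summaries and the cosine/softmax choice below are assumptions for illustration; MATS itself uses semantic and domain-shift similarity, not this toy distance:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical per-task summaries (e.g. mean feature embeddings).
base_tasks = rng.normal(0, 1, (6, 8))
novel_task = base_tasks[2] + rng.normal(0, 0.1, 8)   # novel task resembles base task 2

# Sample base tasks with probability proportional to their similarity to
# the novel task -- a toy stand-in for adaptive task sampling.
sims = base_tasks @ novel_task / (
    np.linalg.norm(base_tasks, axis=1) * np.linalg.norm(novel_task))
probs = np.exp(sims / 0.1)          # temperature 0.1 sharpens the distribution
probs /= probs.sum()
# The most similar base task dominates the sampling distribution.
```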

Domain Generalization

Rethinking the Evaluation Protocol of Domain Generalization

no code implementations · 24 May 2023 · Han Yu, Xingxuan Zhang, Renzhe Xu, Jiashuo Liu, Yue He, Peng Cui

This paper examines the risks of test data information leakage from two aspects of the current evaluation protocol: supervised pretraining on ImageNet and oracle model selection.

Domain Generalization Model Selection

Exploring and Exploiting Data Heterogeneity in Recommendation

no code implementations · 21 May 2023 · Zimu Wang, Jiashuo Liu, Hao Zou, Xingxuan Zhang, Yue He, Dongxu Liang, Peng Cui

In this work, we focus on two representative categories of heterogeneity in recommendation data, namely heterogeneity in the prediction mechanism and in the covariate distribution, and propose an algorithm that explores this heterogeneity through a bilevel clustering method.
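Heterogeneity in the prediction mechanism can be illustrated with a minimal alternating ("hard-EM") clustering: assign each point to the mechanism that predicts it best, then refit each mechanism. This is an illustrative stand-in under assumed toy data, not the paper's bilevel algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two latent prediction mechanisms over the same covariate: slope +3 or -3.
X = rng.normal(0, 1, (400, 1))
z = rng.integers(0, 2, 400)                 # hidden mechanism index
coef = np.where(z == 0, 3.0, -3.0)
y = coef * X[:, 0] + rng.normal(0, 0.1, 400)

w = np.array([1.0, -1.0])                   # one slope per cluster, rough init
for _ in range(20):
    # Assignment step: each point goes to its better-predicting mechanism.
    preds = X @ w.reshape(1, 2)             # (400, 2) predictions per cluster
    assign = np.argmin((y[:, None] - preds) ** 2, axis=1)
    # Refit step: least-squares slope per cluster.
    for k in range(2):
        mask = assign == k
        if mask.sum() > 1:
            w[k] = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0][0]
# w recovers the two mechanisms, roughly {+3, -3}.
```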

Recommendation Systems

Predictive Heterogeneity: Measures and Applications

no code implementations · 1 Apr 2023 · Jiashuo Liu, Jiayun Wu, Bo Li, Peng Cui

As an intrinsic and fundamental property of big data, data heterogeneity exists in a wide variety of real-world applications, such as precision medicine, autonomous driving, and finance.

Autonomous Driving Crop Yield Prediction +3

Towards Domain Generalization in Object Detection

no code implementations · 27 Mar 2022 · Xingxuan Zhang, Zekai Xu, Renzhe Xu, Jiashuo Liu, Peng Cui, Weitao Wan, Chong Sun, Chen Li

Despite the striking performance achieved by modern detectors when training and test data are sampled from the same or similar distributions, the generalization ability of detectors under unknown distribution shifts remains largely understudied.

Domain Generalization Object +2

Integrated Latent Heterogeneity and Invariance Learning in Kernel Space

no code implementations · NeurIPS 2021 · Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen

The ability to generalize under distributional shifts is essential to reliable machine learning, while models optimized with empirical risk minimization usually fail on non-i.i.d. testing data.

Kernelized Heterogeneous Risk Minimization

1 code implementation · 24 Oct 2021 · Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen

The ability to generalize under distributional shifts is essential to reliable machine learning, while models optimized with empirical risk minimization usually fail on non-i.i.d. testing data.

Towards Out-Of-Distribution Generalization: A Survey

no code implementations · 31 Aug 2021 · Jiashuo Liu, Zheyan Shen, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, Peng Cui

This paper presents the first comprehensive, systematic review of OOD generalization, encompassing a spectrum of aspects from problem definition, methodological development, and evaluation procedures to the implications and future directions of the field.

Out-of-Distribution Generalization Representation Learning

Distributionally Robust Learning with Stable Adversarial Training

no code implementations · 30 Jun 2021 · Jiashuo Liu, Zheyan Shen, Peng Cui, Linjun Zhou, Kun Kuang, Bo Li

In this paper, we propose a novel Stable Adversarial Learning (SAL) algorithm that leverages heterogeneous data sources to construct a more practical uncertainty set and to conduct differentiated robustness optimization, in which covariates are differentiated according to the stability of their correlations with the target.
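The idea of differentiating covariates by the stability of their correlations can be sketched numerically: compute each covariate's correlation with the target in every data source and score how much it varies. The two-environment setup and the variance-of-correlations score below are assumptions, a simple proxy for the differentiation step rather than the SAL algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two training environments: x1's relation to y is stable, while x2's
# correlation flips sign across environments (unstable).
def make_env(sign, n=500):
    x1 = rng.normal(0, 1, n)
    x2 = rng.normal(0, 1, n)
    y = 2.0 * x1 + sign * 2.0 * x2 + rng.normal(0, 0.1, n)
    return np.stack([x1, x2], axis=1), y

envs = [make_env(+1.0), make_env(-1.0)]

# Instability score per covariate: spread of its correlation with y
# across environments.
corrs = np.array([[np.corrcoef(X[:, j], y)[0, 1] for j in range(2)]
                  for X, y in envs])
instability = corrs.std(axis=0)
# x2's correlation flips sign, so its instability score dwarfs x1's.
```

A robust learner can then hedge aggressively on high-instability covariates while trusting stable ones.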

Heterogeneous Risk Minimization

1 code implementation · 9 May 2021 · Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen

In this paper, we propose Heterogeneous Risk Minimization (HRM) framework to achieve joint learning of latent heterogeneity among the data and invariant relationship, which leads to stable prediction despite distributional shifts.

Stable Adversarial Learning under Distributional Shifts

no code implementations · 8 Jun 2020 · Jiashuo Liu, Zheyan Shen, Peng Cui, Linjun Zhou, Kun Kuang, Bo Li, Yishi Lin

Machine learning algorithms with empirical risk minimization are vulnerable under distributional shifts due to the greedy adoption of all the correlations found in training data.

Triple Generative Adversarial Networks

1 code implementation · 20 Dec 2019 · Chongxuan Li, Kun Xu, Jiashuo Liu, Jun Zhu, Bo Zhang

It is formulated as a three-player minimax game consisting of a generator, a classifier and a discriminator, and therefore is referred to as Triple Generative Adversarial Network (Triple-GAN).
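The structure of such a three-player value function can be written down concretely. The sketch below assembles the adversarial part of an objective of this shape with fixed toy players; D, G, and C are hypothetical stand-in functions (not the paper's networks), the classifier's supervised loss terms are omitted, and no training is performed:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy players: the discriminator D scores (x, y) pairs as "real", the
# generator G proposes x given y, and the classifier C proposes y given x.
def D(x, y):
    return 1.0 / (1.0 + np.exp(-(x * y)))   # sigmoid score for the pair

def G(y):
    return y + rng.normal(0, 0.5, y.shape)  # conditional generator

def C(x):
    return np.sign(x)                       # conditional classifier

x_real = rng.normal(0, 1, 1000)
y_real = np.sign(x_real + rng.normal(0, 0.3, 1000))   # noisy "labels"
y_g = np.sign(rng.normal(0, 1, 1000))                 # sampled labels for G

alpha = 0.5   # mixes the classifier-fake and generator-fake terms
value = (np.mean(np.log(D(x_real, y_real)))
         + alpha * np.mean(np.log(1 - D(x_real, C(x_real))))
         + (1 - alpha) * np.mean(np.log(1 - D(G(y_g), y_g))))
# In the minimax game, D maximizes `value` while G and C jointly minimize it.
```

The key structural point is that the discriminator faces two kinds of fake pairs: real images with classifier-predicted labels, and generated images with sampled labels.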

Classification Conditional Image Generation +4
