Search Results for author: Haichang Gao

Found 4 papers, 0 papers with code

AdvFunMatch: When Consistent Teaching Meets Adversarial Robustness

no code implementations • 24 May 2023 • Zihui Wu, Haichang Gao, Bingqian Zhou, Ping Wang

To tackle this problem, we propose a simple but effective strategy called Adversarial Function Matching (AdvFunMatch), which aims to match distributions for all data points within the $\ell_p$-norm ball of the training data, in accordance with consistent teaching.

Adversarial Robustness • Knowledge Distillation
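
Below is a minimal PyTorch-style sketch of the idea described in the snippet above: distill the teacher's output distribution at worst-case points inside the $\ell_\infty$ ball, querying the teacher on the same perturbed input as the student (consistent teaching). The function name, step sizes, and number of PGD steps are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def advfunmatch_loss(student, teacher, x, eps=8/255, alpha=2/255, steps=10):
    """Illustrative sketch: match the teacher's distribution on adversarial
    points inside the l_inf ball around x; the teacher is evaluated on the
    *same* perturbed input as the student (consistent teaching)."""
    # random start inside the ball
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # inner maximization: find the point where student and teacher disagree most
        kl = F.kl_div(F.log_softmax(student(x_adv), dim=1),
                      F.softmax(teacher(x_adv), dim=1).detach(),
                      reduction="batchmean")
        grad = torch.autograd.grad(kl, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    # outer minimization: distill the teacher's distribution at the adversarial point
    return F.kl_div(F.log_softmax(student(x_adv), dim=1),
                    F.softmax(teacher(x_adv), dim=1).detach(),
                    reduction="batchmean")
```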

Lower Difficulty and Better Robustness: A Bregman Divergence Perspective for Adversarial Training

no code implementations • 26 Aug 2022 • Zihui Wu, Haichang Gao, Bingqian Zhou, Xiaoyan Guo, Shudong Zhang

In addition, we discuss the role of entropy in TRADES and find that models with high prediction entropy learn robustness more effectively.

Adversarial Robustness
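
For context, a minimal sketch of the TRADES objective together with the prediction-entropy quantity the snippet refers to; the implementation details (model interface, `beta`, entropy computed on the natural logits) are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn.functional as F

def trades_terms(model, x, x_adv, y, beta=6.0):
    """TRADES objective: natural cross-entropy plus a KL term pulling the
    prediction on the adversarial input toward the natural prediction.
    Also returns the mean prediction entropy, the quantity discussed above."""
    logits_nat = model(x)
    logits_adv = model(x_adv)
    ce = F.cross_entropy(logits_nat, y)
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1),
                  F.softmax(logits_nat, dim=1),
                  reduction="batchmean")
    probs = F.softmax(logits_nat, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
    return ce + beta * kl, entropy
```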

Alleviating Robust Overfitting of Adversarial Training With Consistency Regularization

no code implementations • 24 May 2022 • Shudong Zhang, Haichang Gao, Tianwei Zhang, Yunyi Zhou, Zihui Wu

Adversarial training (AT) has proven to be one of the most effective ways to defend Deep Neural Networks (DNNs) against adversarial attacks.
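
For readers unfamiliar with AT, a standard Madry-style PGD adversarial-training step is sketched below; this is the baseline formulation the snippet refers to, not the consistency-regularized variant the paper proposes, and the hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_at_step(model, optimizer, x, y, eps=8/255, alpha=2/255, steps=10):
    """One adversarial-training step: craft a PGD example inside the l_inf
    ball around x, then update the model on that worst-case input."""
    model.eval()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```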

Understanding the robustness-accuracy tradeoff by rethinking robust fairness

no code implementations • 29 Sep 2021 • Zihui Wu, Haichang Gao, Shudong Zhang, Yipeng Gao

We then explore the effect of another classic smoothing regularizer, maximum entropy (ME), and find that ME can also help reduce both inter-class similarity and intra-class variance.

Fairness
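
A minimal sketch of how a maximum-entropy smoothing regularizer can be attached to the training loss, as mentioned in the snippet above; the coefficient `lam` and the function name are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def max_entropy_loss(logits, y, lam=0.5):
    """Cross-entropy minus a weighted prediction-entropy term: subtracting
    the entropy encourages higher-entropy (smoother) predictions, acting as
    the ME regularizer discussed above. `lam` is an illustrative coefficient."""
    ce = F.cross_entropy(logits, y)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
    return ce - lam * entropy
```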
