no code implementations • 24 May 2023 • Zihui Wu, Haichang Gao, Bingqian Zhou, Ping Wang
To tackle this problem, we propose a simple but effective strategy called Adversarial Function Matching (AdvFunMatch), which aims to match distributions for all data points within the $\ell_p$-norm ball of the training data, in accordance with consistent teaching.
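As a rough illustration of the idea (not the authors' implementation — the toy linear teacher/student models and the finite-difference inner maximization below are assumptions), matching the student's output distribution to the teacher's on the worst-case point inside an $\ell_\infty$ ball around the input can be sketched as:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL divergence between two categorical distributions
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

# Hypothetical linear teacher/student: logits = x @ W (illustrative only)
rng = np.random.default_rng(0)
W_teacher = rng.normal(size=(4, 3))
W_student = rng.normal(size=(4, 3))

def advfunmatch_loss(x, epsilon=0.1, lr=0.05, steps=5):
    """Find a point in the l_inf ball around x where teacher and student
    disagree most (crude sign-gradient ascent via finite differences),
    and return the teacher/student KL there."""
    x_adv = x.copy()
    best = kl(softmax(x @ W_teacher), softmax(x @ W_student))
    h = 1e-5
    for _ in range(steps):
        base = kl(softmax(x_adv @ W_teacher), softmax(x_adv @ W_student))
        g = np.zeros_like(x_adv)
        for i in range(x_adv.size):
            xp = x_adv.copy()
            xp[i] += h
            g[i] = (kl(softmax(xp @ W_teacher), softmax(xp @ W_student)) - base) / h
        # ascend the mismatch, then project back into the l_inf ball
        x_adv = np.clip(x_adv + lr * np.sign(g), x - epsilon, x + epsilon)
        best = max(best, kl(softmax(x_adv @ W_teacher), softmax(x_adv @ W_student)))
    return best

x = rng.normal(size=4)
clean = kl(softmax(x @ W_teacher), softmax(x @ W_student))
worst = advfunmatch_loss(x)
```

In an actual training loop the student would be updated to minimize this worst-case KL; here the inner maximization alone is shown, so `worst` is at least as large as the mismatch `clean` at the clean input.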
no code implementations • 26 Aug 2022 • Zihui Wu, Haichang Gao, Bingqian Zhou, Xiaoyan Guo, Shudong Zhang
In addition, we discuss the role of entropy in TRADES, and we find that models with high output entropy learn robustness better.
no code implementations • 24 May 2022 • Shudong Zhang, Haichang Gao, Tianwei Zhang, Yunyi Zhou, Zihui Wu
Adversarial training (AT) has proven to be one of the most effective ways to defend Deep Neural Networks (DNNs) against adversarial attacks.
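AT alternates an inner maximization (craft an adversarial example) with an outer minimization (update the model on it). A minimal sketch, assuming a toy logistic-regression model and a single FGSM step for the inner problem (both are illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(3)  # toy linear classifier

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # binary cross-entropy for a single example
    p = sigmoid(x @ w)
    return -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def fgsm(w, x, y, epsilon):
    # inner maximization: one signed-gradient step on the input
    g = (sigmoid(x @ w) - y) * w  # d(loss)/d(x)
    return x + epsilon * np.sign(g)

def at_step(w, x, y, epsilon=0.1, lr=0.5):
    x_adv = fgsm(w, x, y, epsilon)           # attack
    g_w = (sigmoid(x_adv @ w) - y) * x_adv   # d(loss)/d(w) at the adversarial point
    return w - lr * g_w                      # outer minimization

x, y = np.array([1.0, -2.0, 0.5]), 1.0
for _ in range(50):
    w = at_step(w, x, y)
```

After training, the model fits even the perturbed input well, though the adversarial point still incurs at least the clean loss by construction.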
no code implementations • 29 Sep 2021 • Zihui Wu, Haichang Gao, Shudong Zhang, Yipeng Gao
Then, we explore the effect of another classic smoothing regularizer, maximum entropy (ME), and find that ME also helps reduce both inter-class similarity and intra-class variance.
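The ME regularizer subtracts a multiple of the prediction entropy from the cross-entropy loss, so confident (low-entropy) outputs are discouraged. A small sketch of this standard form (the specific loss shape and $\lambda$ are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(p, eps=1e-12):
    # Shannon entropy of a categorical distribution
    return -np.sum(p * np.log(p + eps))

def me_regularized_loss(logits, y, lam=0.5):
    """Cross-entropy minus lam * prediction entropy: maximizing the
    entropy term smooths the output distribution."""
    p = softmax(logits)
    ce = -np.log(p[y] + 1e-12)
    return ce - lam * entropy(p)

# A sharply peaked prediction vs. a more diffuse one for the same class
confident = np.array([8.0, 0.0, 0.0])
diffuse = np.array([2.0, 0.0, 0.0])
```

The diffuse prediction has higher entropy, so the ME term rewards it more; with `lam=0`, the loss reduces to plain cross-entropy.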