Search Results for author: Zhengbao He

Found 5 papers, 3 papers with code

Friendly Sharpness-Aware Minimization

1 code implementation · 19 Mar 2024 · Tao Li, Pan Zhou, Zhengbao He, Xinwen Cheng, Xiaolin Huang

By decomposing the adversarial perturbation in SAM into a full-gradient component and a stochastic gradient noise component, we discover that relying solely on the full-gradient component degrades generalization, while excluding it improves performance.
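The decomposition above can be illustrated with a minimal sketch, which is not the paper's implementation: the toy least-squares problem, the EMA estimate of the full gradient, and all hyperparameters (`rho`, `lr`, `beta`) are assumptions. The stochastic gradient is split into a full-gradient estimate and a noise component, and only the noise component drives the SAM-style ascent step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: loss(w) = mean((X @ w - y)**2)  (assumed stand-in task)
X = rng.normal(size=(256, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=256)

def grad(w, idx):
    """Mini-batch gradient of the squared loss."""
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)

w = np.zeros(5)
g_full_est = np.zeros(5)          # EMA estimate of the full gradient (assumed estimator)
rho, lr, beta = 0.05, 0.05, 0.9

for step in range(200):
    idx = rng.choice(256, size=32, replace=False)
    g = grad(w, idx)                              # stochastic gradient
    g_full_est = beta * g_full_est + (1 - beta) * g
    noise = g - g_full_est                        # stochastic gradient noise component
    # Ascend only along the noise component, excluding the full-gradient part
    eps = rho * noise / (np.linalg.norm(noise) + 1e-12)
    g_perturbed = grad(w + eps, idx)              # gradient at the perturbed point
    w -= lr * g_perturbed

final_loss = np.mean((X @ w - y) ** 2)
```

On this toy problem the loop converges to roughly the label-noise floor; the point of the sketch is only where the perturbation direction comes from, not the convergence behaviour.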

Investigating Catastrophic Overfitting in Fast Adversarial Training: A Self-fitting Perspective

no code implementations · 23 Feb 2023 · Zhengbao He, Tao Li, Sizhe Chen, Xiaolin Huang

Based on self-fitting, we provide new insights into existing methods for mitigating CO and extend the study of CO to multi-step adversarial training.

Self-Learning

Trainable Weight Averaging: A General Approach for Subspace Training

1 code implementation · 26 May 2022 · Tao Li, Zhehao Huang, Yingwen Wu, Zhengbao He, Qinghua Tao, Xiaolin Huang, Chih-Jen Lin

Training deep neural networks (DNNs) in low-dimensional subspaces is a promising direction for achieving efficient training and better generalization performance.

Dimensionality Reduction · Efficient Neural Network · +3
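The subspace-training idea can be sketched as follows. This is a minimal illustration, not the paper's method: the toy quadratic objective, the checkpoint collection schedule, and the learning rates are all assumptions. The trainable parameters are reduced from the network dimension to one coefficient per stored checkpoint:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy quadratic objective: loss(w) = mean((X @ w - y)**2), with dim > #checkpoints
d = 50
X = rng.normal(size=(200, d))
y = X @ rng.normal(size=d)

def grad_w(w):
    """Full gradient of the squared loss with respect to the weights."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

# Phase 1: plain gradient descent, storing checkpoints along the trajectory
w = rng.normal(size=d)
checkpoints = []
for _ in range(10):
    w -= 0.01 * grad_w(w)
    checkpoints.append(w.copy())
W = np.stack(checkpoints)              # shape (10, 50): one row per checkpoint

# Phase 2: train only the averaging coefficients c, i.e. optimise
# w(c) = W.T @ c inside the 10-dimensional span of the checkpoints
c = np.full(len(W), 1.0 / len(W))      # start from the plain (uniform) average
for _ in range(300):
    g_c = W @ grad_w(W.T @ c)          # chain rule: dL/dc = W @ dL/dw
    c -= 1e-4 * g_c

loss_avg = np.mean((X @ W.mean(axis=0) - y) ** 2)   # uniform weight averaging
loss_twa = np.mean((X @ (W.T @ c) - y) ** 2)        # trained weight averaging
```

Because phase 2 starts at the uniform average and descends a quadratic with a stable step size, the trained average can only match or improve on the plain average within the checkpoint subspace.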

Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet

no code implementations · 16 Jan 2020 · Sizhe Chen, Zhengbao He, Chengjin Sun, Jie Yang, Xiaolin Huang

AoA enjoys a significant increase in transferability when the traditional cross-entropy loss is replaced with the attention loss.

Adversarial Attack

DAmageNet: A Universal Adversarial Dataset

1 code implementation · 16 Dec 2019 · Sizhe Chen, Xiaolin Huang, Zhengbao He, Chengjin Sun

Adversarial samples are similar to clean ones, yet cheat the attacked DNN into producing incorrect predictions with high confidence.

Adversarial Attack
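A classic way to craft such samples is an FGSM-style gradient-sign perturbation (Goodfellow et al.), sketched below on a tiny logistic-regression stand-in rather than a DNN; the toy data, model, and perturbation budget `eps` are all assumptions, not the DAmageNet construction:

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny 2-class toy data and a logistic-regression "model" (stand-in for a DNN)
X = np.vstack([rng.normal(-2, 1, size=(100, 2)), rng.normal(2, 1, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train with full-batch gradient descent on the logistic loss
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# FGSM-style attack: move a clean input along the sign of the input
# gradient of the loss, within an L-inf ball of radius eps
x = X[0]                               # a clean class-0 sample
p_clean = sigmoid(x @ w + b)           # model's class-1 probability on the clean input
grad_x = (p_clean - 0) * w             # d(loss)/dx for true label 0
eps = 1.0                              # assumed perturbation budget
x_adv = x + eps * np.sign(grad_x)
p_adv = sigmoid(x_adv @ w + b)         # class-1 probability on the perturbed input
```

The perturbation strictly increases the model's probability for the wrong class; with a large enough `eps` the prediction flips while the input remains close to the original.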
