1 code implementation • 19 Mar 2024 • Tao Li, Pan Zhou, Zhengbao He, Xinwen Cheng, Xiaolin Huang
By decomposing SAM's adversarial perturbation into a full-gradient component and a stochastic-gradient-noise component, we discover that relying solely on the full-gradient component degrades generalization, while excluding it improves performance.
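The decomposition described above can be illustrated with a minimal sketch, not the authors' code: synthetic per-example gradients stand in for a real model, and the SAM ascent step is split into its full-gradient and noise parts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: per-example gradients for a small model
# (names and sizes are illustrative, not taken from the paper).
n_examples, n_params = 64, 10
per_example_grads = rng.normal(size=(n_examples, n_params))

full_grad = per_example_grads.mean(axis=0)       # full (all-data) gradient
batch_grad = per_example_grads[:8].mean(axis=0)  # stochastic minibatch gradient
noise = batch_grad - full_grad                   # stochastic gradient noise

# SAM's adversarial perturbation is a normalized ascent step of radius rho;
# it splits linearly into a full-gradient part and a noise part.
rho = 0.05
eps = rho * batch_grad / np.linalg.norm(batch_grad)
eps_full = rho * full_grad / np.linalg.norm(batch_grad)
eps_noise = eps - eps_full
```

Isolating `eps_full` or `eps_noise` and perturbing with only one of them is the kind of ablation the finding above refers to.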
no code implementations • 23 Feb 2024 • Xinwen Cheng, Zhehao Huang, Xiaolin Huang
Machine Unlearning (MU) aims to make a well-trained model forget specific training data, which is practically important due to the "right to be forgotten".
no code implementations • 26 Oct 2023 • Yingwen Wu, Tao Li, Xinwen Cheng, Jie Yang, Xiaolin Huang
To bridge this gap, in this paper, we conduct a comprehensive investigation into leveraging the entirety of gradient information for OOD detection.
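As a point of reference for gradient-based OOD scores, here is a minimal sketch of one well-known score of this family (a GradNorm-style L1 norm of the gradient of KL(uniform || softmax) with respect to the logits); it is an assumption for illustration, not necessarily the score this paper proposes.

```python
import numpy as np

def gradnorm_score(logits):
    """L1 norm of d/dlogits KL(uniform || softmax(logits)).

    For a softmax classifier this gradient has the closed form p - 1/C,
    so no autodiff is needed. In-distribution inputs tend to produce
    peaked softmax outputs and hence larger scores.
    """
    c = logits.shape[-1]
    p = np.exp(logits - logits.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    return np.abs(p - 1.0 / c).sum(axis=-1)
```

With perfectly uniform logits the gradient vanishes and the score is zero; more confident predictions yield strictly larger scores.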
1 code implementation • 22 Nov 2022 • Sizhe Chen, Geng Yuan, Xinwen Cheng, Yifan Gong, Minghai Qin, Yanzhi Wang, Xiaolin Huang
In this paper, we uncover them via the gradients of model checkpoints, forming the proposed self-ensemble protection (SEP). SEP is very effective because (1) learning on examples ignored during normal training tends to yield DNNs that ignore normal examples; and (2) cross-checkpoint gradients are close to orthogonal, meaning the checkpoints are as diverse as DNNs with different architectures.
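The self-ensemble idea can be sketched as averaging sign-gradient perturbations over several checkpoints; because cross-checkpoint gradients are nearly orthogonal, this acts like ensembling diverse models. The function name, the sign/epsilon scheme, and the random stand-in gradients below are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sep_style_perturbation(grads_per_checkpoint, eps=8 / 255):
    """Average sign gradients across checkpoints, then project to an
    L-infinity ball of radius eps (a rough sketch of self-ensembling)."""
    g = np.mean([np.sign(g) for g in grads_per_checkpoint], axis=0)
    return eps * np.sign(g)

# Random arrays standing in for per-checkpoint input gradients of one image.
grads = [rng.normal(size=(3, 4, 4)) for _ in range(5)]
delta = sep_style_perturbation(grads)
```

The resulting `delta` stays within the L-infinity budget while aggregating the vote of every checkpoint at each pixel.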
no code implementations • 27 Sep 2022 • Zhixing Ye, Xinwen Cheng, Xiaolin Huang
Deep Neural Networks (DNNs) are susceptible to elaborately designed perturbations, whether those perturbations are dependent on or independent of the input images.