1 code implementation • 24 Jul 2023 • Xuelong Dai, Kaisheng Liang, Bin Xiao
Unrestricted adversarial attacks present a serious threat to deep learning models and adversarial defense techniques.
1 code implementation • CVPR 2023 • Kaisheng Liang, Bin Xiao
Our method prevents adversarial examples from relying on non-robust style features, helping generate transferable perturbations.