1 code implementation • CVPR 2022 • Yifeng Xiong, Jiadong Lin, Min Zhang, John E. Hopcroft, Kun He
The black-box adversarial attack has attracted considerable attention for its practical use in the field of deep learning security.
1 code implementation • 15 Oct 2021 • Yinpeng Dong, Qi-An Fu, Xiao Yang, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu, Jiayu Tang, Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Ye Liu, Qilong Zhang, Lianli Gao, Yunrui Yu, Xitong Gao, Zhe Zhao, Daquan Lin, Jiadong Lin, Chuanbiao Song, ZiHao Wang, Zhennan Wu, Yang Guo, Jiequan Cui, Xiaogang Xu, Pengguang Chen
Due to the vulnerability of deep neural networks (DNNs) to adversarial examples, a large number of defense techniques have been proposed to alleviate this problem in recent years.
no code implementations • 29 Sep 2021 • Jiadong Lin, Yifeng Xiong, Min Zhang, John E. Hopcroft, Kun He
Black-box adversarial attacks have attracted much attention for their practical use in deep learning applications, and they are very challenging because the attacker has no access to the architecture or weights of the target model.
1 code implementation • 19 Mar 2021 • Xiaosen Wang, Jiadong Lin, Han Hu, Jingdong Wang, Kun He
Various momentum iterative gradient-based methods have been shown to be effective in improving adversarial transferability.
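A minimal sketch of the momentum iterative idea (in the style of MI-FGSM), assuming a hypothetical `grad_fn` that returns the loss gradient with respect to the input; the hyperparameter values are illustrative, not the paper's:

```python
import numpy as np

def mi_fgsm(x, grad_fn, eps=0.1, alpha=0.05, mu=1.0, steps=10):
    """Momentum iterative attack sketch: accumulate an L1-normalized
    gradient momentum term, step in its sign direction, and project
    back into the eps-ball around the original input x."""
    x_adv = x.copy()
    g = np.zeros_like(x)
    for _ in range(steps):
        grad = grad_fn(x_adv)                             # loss gradient w.r.t. input
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum accumulation
        x_adv = x_adv + alpha * np.sign(g)                # signed gradient step
        x_adv = np.clip(x_adv, x - eps, x + eps)          # project to eps-ball
    return x_adv
```

The momentum term stabilizes the update direction across iterations, which is what these abstracts credit for the improved transferability.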
1 code implementation • ICLR 2020 • Chuanbiao Song, Kun He, Jiadong Lin, Li-Wei Wang, John E. Hopcroft
We propose a new approach, Robust Local Features for Adversarial Training (RLFAT), which first learns robust local features by adversarial training on RBS-transformed adversarial examples, and then transfers these robust local features into the training of normal adversarial examples.
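The RBS transformation can be sketched as a random block shuffle; the exact block layout below is an assumption inferred from the abstract, not the paper's implementation:

```python
import numpy as np

def random_block_shuffle(img, k=2, rng=None):
    """Random Block Shuffle (RBS) sketch: split an HxW image into a
    k x k grid of blocks and permute them, destroying global structure
    while preserving local features (assumes H and W divisible by k)."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    bh, bw = h // k, w // k
    blocks = [img[r*bh:(r+1)*bh, c*bw:(c+1)*bw]
              for r in range(k) for c in range(k)]
    order = rng.permutation(len(blocks))  # random block permutation
    rows = [np.concatenate([blocks[order[r*k + c]] for c in range(k)], axis=1)
            for r in range(k)]
    return np.concatenate(rows, axis=0)
```

Training on shuffled inputs forces the model to rely on block-local evidence, which is the intuition behind learning robust local features.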
3 code implementations • ICLR 2020 • Jiadong Lin, Chuanbiao Song, Kun He, Li-Wei Wang, John E. Hopcroft
SIM is based on our discovery of the scale-invariant property of deep learning models, which we leverage to optimize the adversarial perturbations over scale copies of the input images, so as to avoid "overfitting" on the white-box model being attacked and to generate more transferable adversarial examples.
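The scale-copy idea can be sketched as averaging the loss gradient over inputs downscaled by powers of two; `grad_fn` is a hypothetical stand-in for the model's input gradient, and `m` is the number of scale copies:

```python
import numpy as np

def sim_gradient(x, grad_fn, m=5):
    """Scale-Invariant Method (SIM) sketch: average the loss gradient
    over m scale copies x / 2^i of the input, exploiting the model's
    (assumed) scale-invariant property to reduce overfitting to one
    white-box model."""
    g = np.zeros_like(x, dtype=float)
    for i in range(m):
        g += grad_fn(x / (2 ** i))  # gradient at the i-th scale copy
    return g / m
```

The averaged gradient then replaces the single-input gradient inside an iterative attack such as the momentum method above.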