no code implementations • 4 Mar 2020 • Chengjin Sun, Sizhe Chen, Jia Cai, Xiaolin Huang
To implement the Type I attack, we destroy the original example by increasing its distance in the input space while keeping the output similar, exploiting the property of deep neural networks that different inputs may correspond to similar features.
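A minimal sketch of this idea, not the authors' exact procedure; the feature extractor `f`, the step count, learning rate, and the weight `lam` are all assumptions. It performs gradient ascent on the input-space distance to the original sample while penalizing drift in the network's output:

```python
import torch

def type_i_attack(f, x, steps=100, lr=0.01, lam=10.0):
    """Drive x_adv far from x in input space while keeping f(x_adv) close to f(x).

    f: a network (or feature extractor); lam weights the output-similarity term.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    with torch.no_grad():
        target_out = f(x)  # output we want to preserve
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # negative input distance: minimizing it is ascent on ||x_adv - x||
        loss = -(x_adv - x).pow(2).mean() + lam * (f(x_adv) - target_out).pow(2).mean()
        loss.backward()
        opt.step()
        x_adv.data.clamp_(0, 1)  # keep pixels in a valid image range
    return x_adv.detach()
```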
no code implementations • 4 Mar 2020 • Chengjin Sun, Sizhe Chen, Xiaolin Huang
We restrict the gradient from the reconstructed image to the original one so that the autoencoder is not sensitive to the trivial perturbations produced by adversarial attacks.
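One plausible reading of this regularizer, sketched below under assumed names (`ae` for the autoencoder, weight `beta`): penalize the gradient of the reconstruction error with respect to the input, a double-backprop-style term, so that small input perturbations barely change the reconstruction.

```python
import torch

def restricted_grad_loss(ae, x, beta=1.0):
    """Reconstruction loss plus a penalty on its gradient w.r.t. the input.

    The gradient penalty discourages the autoencoder from amplifying
    small, attack-like input perturbations.
    """
    x = x.clone().requires_grad_(True)
    recon = ae(x)
    recon_loss = (recon - x).pow(2).mean()
    # gradient of the reconstruction error with respect to the input image
    grad = torch.autograd.grad(recon_loss, x, create_graph=True)[0]
    return recon_loss + beta * grad.pow(2).mean()
```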
no code implementations • 21 Jan 2020 • Zhixing Ye, Sizhe Chen, Peidong Zhang, Chengjin Sun, Xiaolin Huang
Adversarial attacks have long been developed to reveal the vulnerability of Deep Neural Networks (DNNs) by adding imperceptible perturbations to the input.
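For concreteness, the FGSM of Goodfellow et al. is one such attack; a minimal sketch, with the model, loss function, and budget `eps` assumed:

```python
import torch

def fgsm(model, loss_fn, x, y, eps=8 / 255):
    """Fast Gradient Sign Method: one step of size eps along the loss-gradient sign."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()    # small, sign-based perturbation
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid range
```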
no code implementations • 16 Jan 2020 • Sizhe Chen, Zhengbao He, Chengjin Sun, Jie Yang, Xiaolin Huang
AoA enjoys a significant increase in transferability when the traditional cross-entropy loss is replaced with the attention loss.
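A hedged sketch of that loss swap: here attention is approximated by a simple gradient-based saliency map, which may differ from the paper's exact attention definition, and all names are assumptions.

```python
import torch

def attention_loss(model, x, y):
    """Attack objective: suppress the network's attention on the true class.

    Attention is approximated as gradient saliency, d logit_y / d x;
    minimizing its magnitude is one reading of an 'attention loss'.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    true_logit = logits.gather(1, y.unsqueeze(1)).sum()
    saliency = torch.autograd.grad(true_logit, x, create_graph=True)[0]
    return saliency.abs().mean()  # drive the attack to erase attention on y
```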
1 code implementation • 16 Dec 2019 • Sizhe Chen, Xiaolin Huang, Zhengbao He, Chengjin Sun
Adversarial samples are similar to the clean ones, but they fool the attacked DNN into producing incorrect predictions with high confidence.
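That claim can be checked directly; a small sketch (names assumed, single-image batch) that measures the L-infinity distance to the clean image and the softmax confidence of the prediction:

```python
import torch
import torch.nn.functional as F

def check_adversarial(model, x_clean, x_adv, y_true):
    """Verify an adversarial sample: close to the clean input, confidently wrong."""
    linf = (x_adv - x_clean).abs().max().item()  # similarity in input space
    probs = F.softmax(model(x_adv), dim=1)
    conf, pred = probs.max(dim=1)                # assumes a batch of one image
    fooled = (pred != y_true)
    return linf, conf.item(), fooled.item()
```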
no code implementations • 3 Sep 2018 • Sanli Tang, Xiaolin Huang, Mingjian Chen, Chengjin Sun, Jie Yang
Despite the great success of deep neural networks, adversarial attacks can fool well-trained classifiers with small perturbations.