no code implementations • 7 Dec 2023 • Yuefeng Peng, Ali Naseh, Amir Houmansadr
A unique feature of our defense is that it works on input samples only, without modifying the training or inference phase of the target model.
no code implementations • 4 Jan 2022 • Hui Liu, Bo Zhao, Yuefeng Peng, Weidong Li, Peng Liu
Experimental results show that individual image transformations contribute very differently to adversarial detection, and that combining them significantly improves generic detection against state-of-the-art adversarial attacks.
no code implementations • 19 Jul 2021 • Hui Liu, Bo Zhao, Minzhi Ji, Yuefeng Peng, Jiabao Guo, Peng Liu
In this paper, we reveal that imperceptible adversarial examples are the product of recessive features misleading neural networks, and that an adversarial attack is essentially a method for enriching these recessive features in the image.