no code implementations • 10 Jun 2022 • Nan Luo, Yuanzhang Li, Yajie Wang, Shangbo Wu, Yu-an Tan, Quanxin Zhang
Clean-label settings make the attack stealthier because the image-label pairs remain correct, but two problems persist: first, traditional methods for poisoning training data are ineffective in this setting; second, traditional triggers are not stealthy and remain perceptible.
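The abstract does not describe the authors' poisoning procedure; below is a minimal, hypothetical PyTorch sketch of the generic clean-label constraint only: a low-amplitude trigger is blended into correctly-labelled target-class images, so no label is ever changed. The function name, the `alpha` blending factor, and the trigger shape are illustrative assumptions, not the paper's method.

```python
import torch

def poison_clean_label(images, labels, target_class, trigger, alpha=0.05):
    """Hypothetical sketch of a clean-label poisoning step.

    images:  (N, C, H, W) float tensor in [0, 1]
    trigger: (C, H, W) pattern; small alpha keeps it near-imperceptible.
    Labels are returned unchanged -- the defining clean-label constraint.
    """
    poisoned = images.clone()
    mask = labels == target_class            # only touch correctly-labelled target-class samples
    poisoned[mask] = torch.clamp(
        (1 - alpha) * poisoned[mask] + alpha * trigger, 0.0, 1.0
    )
    return poisoned, labels                  # image-label pairs stay correct
```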
no code implementations • 26 Apr 2022 • Haoran Lyu, Yajie Wang, Yu-an Tan, Huipeng Zhou, Yuhang Zhao, Quanxin Zhang
Our method masks part of the input to the Mixer layer, preventing the adversarial examples from overfitting to the source model and improving cross-architecture transferability.
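The abstract does not specify how the masking is applied; as a rough illustration, here is a generic PyTorch sketch that randomly zeroes a fraction of the patch tokens entering a Mixer block at each attack iteration, which is one common way to keep a perturbation from latching onto source-model-specific features. `mask_tokens` and `keep_prob` are hypothetical names, not the paper's API.

```python
import torch

def mask_tokens(tokens, keep_prob=0.9):
    """Randomly drop a fraction of patch tokens fed into a Mixer block.

    tokens: (B, num_patches, dim). Re-sampling the mask at each attack
    step varies which features the source model sees, discouraging the
    perturbation from overfitting to that one architecture.
    """
    keep = torch.rand(tokens.shape[:2], device=tokens.device) < keep_prob
    return tokens * keep.unsqueeze(-1).float()
```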
no code implementations • 3 Jul 2021 • Yajie Wang, Shangbo Wu, Wenyi Jiang, Shengang Hao, Yu-an Tan, Quanxin Zhang
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples.
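As a canonical illustration of this vulnerability (the standard Fast Gradient Sign Method of Goodfellow et al., 2015, not the method of this paper), a single gradient-sign step is often enough to flip a DNN's prediction while staying visually close to the original input; a minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM attack: perturb x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)      # loss w.r.t. the true labels
    loss.backward()
    x_adv = x + eps * x.grad.sign()          # small step that increases the loss
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```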