Search Results for author: Quanxin Zhang

Found 3 papers, 0 papers with code

Enhancing Clean Label Backdoor Attack with Two-phase Specific Triggers

no code implementations · 10 Jun 2022 · Nan Luo, Yuanzhang Li, Yajie Wang, Shangbo Wu, Yu-an Tan, Quanxin Zhang

Clean-label settings make the attack stealthier because the poisoned image-label pairs remain correct, but two problems persist: first, traditional methods for poisoning the training data are ineffective; second, traditional triggers are not stealthy and remain perceptible.

Backdoor Attack · Backdoor Defense · +1
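
The abstract above hinges on the clean-label constraint: poisoned samples must keep their correct labels. The sketch below is a generic illustration of that constraint, not this paper's two-phase trigger method; the corner-patch trigger, `target_class`, and the 5% poison fraction are illustrative assumptions.

```python
# Minimal sketch of generic clean-label poisoning: a trigger patch is stamped
# only onto images that already belong to the target class, so every poisoned
# pair keeps its correct label (the "clean-label" property).
import torch

def clean_label_poison(images, labels, target_class, poison_frac=0.05):
    images = images.clone()
    idx = (labels == target_class).nonzero(as_tuple=True)[0]
    n_poison = max(1, int(poison_frac * idx.numel()))
    chosen = idx[torch.randperm(idx.numel())[:n_poison]]
    # Stamp a small white square in the bottom-right corner as the trigger.
    images[chosen, :, -3:, -3:] = 1.0
    return images, labels  # labels are untouched: image-label pairs stay correct
```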

Boosting Adversarial Transferability of MLP-Mixer

no code implementations · 26 Apr 2022 · Haoran Lyu, Yajie Wang, Yu-an Tan, Huipeng Zhou, Yuhang Zhao, Quanxin Zhang

Our method masks part of the input to the Mixer layer, avoids overfitting the adversarial examples to the source model, and improves cross-architecture transferability.

Adversarial Attack
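
The snippet below is a minimal sketch of the general idea described above, not the authors' implementation: an iterative sign-gradient attack that drops a random subset of image patches on each step, as a stand-in for masking part of the Mixer layer's token input. `model` is assumed to be any image classifier; the patch size of 16, the 25% mask ratio, and the step budget are assumptions.

```python
# Sketch: iterative attack with per-step random patch masking, so the
# perturbation cannot overfit to any fixed set of tokens in the source model.
import torch

def patch_masked_attack(model, x, y, eps=8/255, alpha=2/255,
                        steps=10, patch=16, mask_ratio=0.25):
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    b, c, h, w = x.shape
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Random patch-level mask, redrawn every iteration.
        keep = (torch.rand(b, 1, h // patch, w // patch, device=x.device)
                > mask_ratio).float()
        keep = torch.nn.functional.interpolate(keep, size=(h, w), mode="nearest")
        loss = loss_fn(model(x_adv * keep), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Sign-gradient step, then project back into the eps-ball and [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```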
