1 code implementation • 14 Dec 2023 • Yifan Zhu, Lijia Yu, Xiao-Shan Gao
The detectability of unlearnable examples with simple networks motivates us to design a novel defense method.
no code implementations • 29 Jun 2023 • Yihan Wang, Lijia Yu, Xiao-Shan Gao
Invariance to spatial transformations such as translations and rotations is a desirable property and a basic design principle for classification neural networks.
no code implementations • 17 Jul 2022 • Xiao-Shan Gao, Shuang Liu, Lijia Yu
Game theory has been used to answer some of the basic questions about adversarial deep learning such as the existence of a classifier with optimal robustness and the existence of optimal adversarial samples for a given class of classifiers.
no code implementations • 20 Mar 2022 • Lijia Yu, Yihan Wang, Xiao-Shan Gao
In this paper, a new parameter perturbation attack on DNNs, called the adversarial parameter attack, is proposed: small perturbations are made to the parameters of the DNN such that the accuracy of the attacked DNN barely decreases, while its robustness drops significantly.
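The idea can be illustrated on a toy model. The sketch below is not the paper's algorithm, only an assumed linear-classifier example showing how a small, hand-picked parameter perturbation can leave clean accuracy untouched while shrinking the margin, a standard proxy for robustness:

```python
import numpy as np

# Hypothetical toy example (not the paper's attack): a linear classifier
# sign(w @ x) on four well-separated points.
X = np.array([[-2.0, -2.0], [-1.5, -2.5], [2.0, 2.0], [2.5, 1.5]])
y = np.array([-1, -1, 1, 1])

w = np.array([1.0, 1.0])                  # clean parameters

def accuracy(w):
    return np.mean(np.sign(X @ w) == y)

def margin(w):
    # distance from the closest point to the decision boundary
    return np.min(np.abs(X @ w)) / np.linalg.norm(w)

# Small parameter perturbation, orthogonal to w, chosen so that every
# label is preserved but the boundary tilts toward the data.
w_adv = w + 0.3 * np.array([1.0, -1.0])

print(accuracy(w), accuracy(w_adv))       # accuracy unchanged (both 1.0)
print(margin(w), margin(w_adv))           # margin shrinks: ~2.83 -> ~2.51
```

Here the perturbation is picked by hand; the paper's setting instead optimizes the parameter perturbation against a deep network.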
no code implementations • 8 Nov 2021 • Lijia Yu, Xiao-Shan Gao
The work is motivated by the fact that the bias part is a piecewise constant function with zero gradient, and hence cannot be directly attacked by gradient-based methods, such as FGSM, to generate adversaries.
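For context, FGSM perturbs the input along the sign of the loss gradient, `x_adv = x + eps * sign(∇_x L(x, y))`. The minimal sketch below applies it to an assumed logistic-regression model with an analytic gradient; since the gradient is taken with respect to the input, a piecewise-constant bias term contributes nothing, which is the limitation noted above:

```python
import numpy as np

# Toy model parameters (assumed for illustration, not from the paper).
w = np.array([2.0, -1.0])
b = 0.5

def loss_grad_x(x, y):
    # gradient w.r.t. x of the logistic loss log(1 + exp(-y*(w @ x + b)))
    z = y * (w @ x + b)
    return -y * w / (1.0 + np.exp(z))

x = np.array([1.0, 1.0])
y = 1
eps = 0.1

# FGSM step: move each coordinate by eps in the gradient's sign direction.
x_adv = x + eps * np.sign(loss_grad_x(x, y))

print(w @ x + b, w @ x_adv + b)   # the class score drops: 1.5 -> 1.2
```

Note that `b` never enters the input gradient, so attacking the bias requires a non-gradient-based approach.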
1 code implementation • 30 Jun 2021 • Lijia Yu, Xiao-Shan Gao
In this paper, a robust classification-autoencoder (CAE) is proposed, which has a strong ability to recognize outliers and defend against adversaries.
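The underlying mechanism, though not the paper's CAE itself, can be sketched with a minimal stand-in: a rank-1 linear "autoencoder" that projects onto an assumed latent direction and flags outliers by reconstruction error:

```python
import numpy as np

# Hedged sketch: inliers are assumed to lie near the line x2 = x1, so the
# latent direction is v; encoding/decoding is a rank-1 projection.
v = np.array([1.0, 1.0]) / np.sqrt(2.0)

def reconstruct(x):
    return (x @ v) * v                 # encode (1-D code), then decode

def recon_error(x):
    return np.linalg.norm(x - reconstruct(x))

inlier = np.array([2.0, 2.0])          # on the assumed data manifold
outlier = np.array([2.0, -2.0])        # far from it

print(recon_error(inlier))             # ~0: reconstructed perfectly
print(recon_error(outlier))            # large: flagged as an outlier
```

A learned autoencoder replaces the fixed projection with trained encoder/decoder networks, but the outlier criterion is the same: large reconstruction error.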
no code implementations • 10 Oct 2020 • Lijia Yu, Xiao-Shan Gao
A lower bound for the robustness measure is given in terms of the $L_{2,\infty}$ norm.
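For reference, the $L_{2,\infty}$ norm of a weight matrix can be computed as below under one common convention (the maximum $L_2$ norm over rows; the paper may use the transposed, column-wise convention):

```python
import numpy as np

def l2_inf_norm(W):
    # max_i ||W_i||_2 over the rows W_i of W (one common convention)
    return np.max(np.linalg.norm(W, axis=1))

W = np.array([[3.0, 4.0],   # row L2 norm 5.0
              [1.0, 0.0]])  # row L2 norm 1.0

print(l2_inf_norm(W))       # -> 5.0
```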