no code implementations • 23 Nov 2023 • Yueqi Zeng, Ziqiang Li, Pengfei Xia, Lei Liu, Bin Li
With the rapid development of natural language processing (NLP) in recent years, backdoor attacks pose a serious threat to deep neural network models.
no code implementations • 15 Oct 2023 • Ziqiang Li, Pengfei Xia, Hong Sun, Yueqi Zeng, Wei Zhang, Bin Li
In this study, we focus on improving the poisoning efficiency of backdoor attacks from the sample selection perspective.
1 code implementation • 14 Jun 2023 • Ziqiang Li, Hong Sun, Pengfei Xia, Heng Li, Beihao Xia, Yi Wu, Bin Li
However, existing backdoor attack methods rest on unrealistic assumptions: that all training data comes from a single source, and that the attacker has full access to it.
no code implementations • 14 Jun 2023 • Ziqiang Li, Hong Sun, Pengfei Xia, Beihao Xia, Xue Rui, Wei Zhang, Qinglang Guo, Bin Li
This paper presents a Proxy attack-Free Strategy (PFS) that identifies efficient poisoning samples based on individual similarity and ensemble diversity, effectively addressing this concern.
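As a rough illustration of selecting poisoning samples under a similarity-plus-diversity criterion, the sketch below scores candidates by cosine similarity to a reference embedding while greedily penalizing redundancy among the chosen set. The feature extractor, both scoring functions, and the trade-off weight are illustrative assumptions, not the actual PFS criteria.

```python
import torch
import torch.nn.functional as F

def select_poison_samples(features: torch.Tensor, reference: torch.Tensor,
                          k: int, diversity_weight: float = 0.5):
    """Greedy selection: high similarity to a reference embedding,
    low similarity to already-selected samples (ensemble diversity).
    Purely illustrative; not the official PFS criteria."""
    sim_to_ref = F.cosine_similarity(features, reference.unsqueeze(0), dim=1)
    selected = []
    for _ in range(k):
        if selected:
            chosen = features[selected]                      # (s, d)
            redundancy = F.cosine_similarity(
                features.unsqueeze(1), chosen.unsqueeze(0), dim=2
            ).max(dim=1).values                              # (n,)
        else:
            redundancy = torch.zeros_like(sim_to_ref)
        score = sim_to_ref - diversity_weight * redundancy
        score[selected] = float("-inf")                      # no repeats
        selected.append(int(score.argmax()))
    return selected
```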
no code implementations • 14 Apr 2023 • Qingyue Yang, Hongjing Niu, Pengfei Xia, Wei Zhang, Bin Li
Then, a new method that learns across multiple frequency domains is proposed.
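As a hedged sketch of the general idea of learning over multiple frequency bands (the paper's actual decomposition and architecture may differ): split each image into low- and high-frequency views with a centered FFT mask, then feed both views to the model and combine the per-view losses.

```python
import torch

def frequency_split(images: torch.Tensor, radius: float = 8.0):
    """Split images (B, C, H, W) into low- and high-frequency views
    with a centered FFT mask. Illustrative only; the paper's actual
    decomposition may differ."""
    _, _, H, W = images.shape
    spectrum = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
    yy, xx = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    dist = (((yy - H // 2) ** 2 + (xx - W // 2) ** 2).float()).sqrt()
    low_mask = (dist <= radius).to(spectrum.dtype)           # keep low bands
    low = torch.fft.ifft2(
        torch.fft.ifftshift(spectrum * low_mask, dim=(-2, -1))
    ).real
    return low, images - low                                 # low, high views
```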
1 code implementation • 22 Apr 2022 • Pengfei Xia, Ziqiang Li, Wei Zhang, Bin Li
Recent studies have proven that deep neural networks are vulnerable to backdoor attacks.
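For context, a minimal BadNets-style data-poisoning sketch (a generic illustration of the threat model, not necessarily this paper's exact attack): stamp a small trigger patch onto a fraction of training images and relabel them to an attacker-chosen target class.

```python
import torch

def poison_batch(images: torch.Tensor, labels: torch.Tensor,
                 target_class: int, poison_rate: float = 0.1,
                 patch: int = 3):
    """Stamp a white square trigger in the bottom-right corner of a
    random subset of images and flip their labels to target_class.
    A generic BadNets-style sketch, not this paper's exact attack."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_rate * images.size(0))
    idx = torch.randperm(images.size(0))[:n_poison]
    images[idx, :, -patch:, -patch:] = 1.0    # trigger patch
    labels[idx] = target_class                # attacker's target label
    return images, labels
```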
no code implementations • 9 Nov 2021 • Pengfei Xia, Ziqiang Li, Bin Li
The most common solution for this is to compute an approximate risk by replacing the 0-1 loss with a surrogate one.
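To make the point concrete: the 0-1 loss is non-differentiable and constant almost everywhere, so it is typically replaced by a smooth surrogate such as cross-entropy. A minimal comparison:

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, -1.0], [0.3, 0.8]])
labels = torch.tensor([0, 0])

# Empirical 0-1 risk: fraction of misclassified samples (no useful gradient).
zero_one_risk = (logits.argmax(dim=1) != labels).float().mean()

# Surrogate risk: cross-entropy is differentiable, so it can be
# minimized with gradient descent as a stand-in for the 0-1 risk.
surrogate_risk = F.cross_entropy(logits, labels)

print(zero_one_risk.item(), surrogate_risk.item())
```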
1 code implementation • 9 Nov 2021 • Pengfei Xia, Hongjing Niu, Ziqiang Li, Bin Li
Then, ML-MMDR, a difference-reduction method that adds multi-level MMD regularization to the loss, is proposed, and its effectiveness is validated on three typical difference-based defense methods.
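A minimal sketch of the building block: a biased estimate of squared MMD with an RBF kernel between two feature batches, one such term per chosen layer added to the training loss. The multi-level weighting and layer choice in ML-MMDR are the paper's contribution; the details below are assumptions.

```python
import torch

def rbf_mmd2(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0):
    """Biased estimate of squared MMD between two feature batches
    (n, d) and (m, d) using an RBF kernel. Illustrative only."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# e.g. loss = task_loss + sum(w * rbf_mmd2(f_clean[l], f_poison[l]) for l in layers)
```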
2 code implementations • 20 Mar 2021 • Ziqiang Li, Pengfei Xia, Xue Rui, Bin Li
Generative Adversarial Networks (GANs) can generate images that are visually indistinguishable from real ones.
1 code implementation • 29 Aug 2020 • Pengfei Xia, Bin Li
Improving the resistance of deep neural networks to adversarial attacks is important for deploying models in real-world applications.
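For reference, a standard FGSM adversarial-training step, a common baseline in this area (not necessarily the method this paper proposes): craft adversarial examples on the fly with one signed-gradient step, then train on them.

```python
import torch
import torch.nn.functional as F

def fgsm_adv_train_step(model, x, y, optimizer, eps=8 / 255):
    """One adversarial-training step: generate FGSM examples and
    update the model on them. A standard baseline, not this
    paper's specific method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()

    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```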
1 code implementation • 19 Aug 2020 • Ziqiang Li, Pengfei Xia, Rentuo Tao, Hongjing Niu, Bin Li
Many methods have been proposed to stabilize GAN training, focusing respectively on loss functions, regularization and normalization techniques, training algorithms, and model architectures.
1 code implementation • 19 Aug 2020 • Ziqiang Li, Muhammad Usman, Rentuo Tao, Pengfei Xia, Chaoyue Wang, Huanhuan Chen, Bin Li
Although a number of regularization and normalization methods have been proposed for GANs, to the best of our knowledge there is no comprehensive survey that focuses primarily on the objectives and development of these methods, apart from a few incomplete and limited-scope studies.