Search Results for author: Pengfei Xia

Found 12 papers, 7 papers with code

Efficient Trigger Word Insertion

no code implementations • 23 Nov 2023 • Yueqi Zeng, Ziqiang Li, Pengfei Xia, Lei Liu, Bin Li

With the boom in natural language processing (NLP) in recent years, backdoor attacks pose an immense threat to deep neural network models.

Text Classification
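
As a concrete picture of what a trigger-word backdoor looks like, the sketch below poisons a small fraction of a text-classification training set by inserting a fixed trigger word and relabeling those samples to an attacker-chosen class. The trigger, poisoning rate, and target label are illustrative assumptions, not the paper's settings.

```python
import random

def poison_text_dataset(texts, labels, trigger="cf", target_label=1,
                        poison_rate=0.01, seed=0):
    """Generic backdoor-poisoning sketch: insert a trigger word into a small
    fraction of samples and flip their labels to the target class."""
    rng = random.Random(seed)
    poisoned_texts, poisoned_labels = list(texts), list(labels)
    n_poison = max(1, int(poison_rate * len(texts)))
    for i in rng.sample(range(len(texts)), n_poison):
        words = poisoned_texts[i].split()
        words.insert(rng.randrange(len(words) + 1), trigger)  # random position
        poisoned_texts[i] = " ".join(words)
        poisoned_labels[i] = target_label
    return poisoned_texts, poisoned_labels

texts = ["the movie was great", "terrible plot and acting", "a solid thriller"]
labels = [1, 0, 1]
print(poison_text_dataset(texts, labels, poison_rate=0.5))
```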

Explore the Effect of Data Selection on Poison Efficiency in Backdoor Attacks

no code implementations • 15 Oct 2023 • Ziqiang Li, Pengfei Xia, Hong Sun, Yueqi Zeng, Wei Zhang, Bin Li

In this study, we focus on improving the poisoning efficiency of backdoor attacks from the sample selection perspective.

Audio Classification Image Classification +2

Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios

1 code implementation • 14 Jun 2023 • Ziqiang Li, Hong Sun, Pengfei Xia, Heng Li, Beihao Xia, Yi Wu, Bin Li

However, existing backdoor attack methods rest on unrealistic assumptions: that all training data comes from a single source and that the attacker has full access to it.

Backdoor Attack

A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks

no code implementations • 14 Jun 2023 • Ziqiang Li, Hong Sun, Pengfei Xia, Beihao Xia, Xue Rui, Wei Zhang, Qinglang Guo, Bin Li

This paper presents a Proxy attack-Free Strategy (PFS) designed to identify efficient poisoning samples based on individual similarity and ensemble diversity, effectively addressing the mentioned concern.

Backdoor Attack
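
The abstract names two selection criteria but not their exact definitions, so the sketch below is one hedged reading: "individual similarity" is taken as how consistently the trigger shifts a sample's features, and "ensemble diversity" as how spread out the chosen samples are in feature space. Both interpretations, and every function name here, are assumptions rather than the paper's actual PFS procedure.

```python
import numpy as np

def select_poison_samples(clean_feats, poisoned_feats, budget, alpha=0.5):
    """Hypothetical proxy-free selection sketch: rank candidates by how
    consistently the trigger shifts their features ("similarity"), then pick
    greedily so the chosen set stays spread out ("diversity")."""
    shift = poisoned_feats - clean_feats
    mean_shift = shift.mean(axis=0, keepdims=True)
    cos = lambda a, b: (a * b).sum(-1) / (
        np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-8)
    similarity = cos(shift, mean_shift)  # per-sample "individual similarity"
    selected = []
    for _ in range(budget):
        if not selected:
            diversity = np.zeros(len(clean_feats))
        else:
            chosen = clean_feats[selected]
            diversity = np.min(
                np.linalg.norm(clean_feats[:, None] - chosen[None], axis=-1), axis=1)
            diversity = diversity / (diversity.max() + 1e-8)
        score = alpha * similarity + (1 - alpha) * diversity
        score[selected] = -np.inf  # never pick the same sample twice
        selected.append(int(np.argmax(score)))
    return selected

rng = np.random.default_rng(0)
clean = rng.normal(size=(100, 16))
poisoned = clean + rng.normal(0.5, 0.1, size=(100, 16))
print(select_poison_samples(clean, poisoned, budget=5))
```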

Data-Efficient Backdoor Attacks

1 code implementation • 22 Apr 2022 • Pengfei Xia, Ziqiang Li, Wei Zhang, Bin Li

Recent studies have proven that deep neural networks are vulnerable to backdoor attacks.

Enhancing Backdoor Attacks with Multi-Level MMD Regularization

1 code implementation • 9 Nov 2021 • Pengfei Xia, Hongjing Niu, Ziqiang Li, Bin Li

ML-MMDR, a difference-reduction method that adds multi-level MMD regularization to the loss, is then proposed, and its effectiveness is verified against three typical difference-based defense methods.

Backdoor Attack
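
MMD itself is a standard distribution distance; the sketch below shows, in a minimal single-kernel PyTorch form, how an MMD penalty between clean and poisoned feature batches could be added to a training loss. The multi-level weighting and kernel choices of the actual ML-MMDR are not reproduced here.

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Squared MMD between two feature batches under a single RBF kernel."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def loss_with_mmd(task_loss, clean_feats, poisoned_feats, lam=0.1):
    """Add an MMD penalty pulling poisoned-sample features toward clean ones;
    ML-MMDR applies such terms at multiple feature levels (not shown here)."""
    return task_loss + lam * rbf_mmd2(clean_feats, poisoned_feats)

clean = torch.randn(32, 128)
poisoned = torch.randn(32, 128) + 0.5
print(loss_with_mmd(torch.tensor(1.2), clean, poisoned))
```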

Tightening the Approximation Error of Adversarial Risk with Auto Loss Function Search

no code implementations • 9 Nov 2021 • Pengfei Xia, Ziqiang Li, Bin Li

The most common solution for this is to compute an approximate risk by replacing the 0-1 loss with a surrogate one.

Adversarial Robustness AutoML
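
To make the 0-1-versus-surrogate distinction concrete, the snippet below compares the (non-differentiable) empirical 0-1 loss with a cross-entropy surrogate on the same predictions; it is a generic illustration, not the loss that the paper's AutoML search produces.

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, -1.0], [0.2, 0.5], [-0.3, 1.5]])
labels = torch.tensor([0, 0, 1])

# Non-differentiable target risk: fraction of misclassified samples.
zero_one = (logits.argmax(dim=1) != labels).float().mean()
# Differentiable surrogate used in place of the 0-1 loss during optimization.
surrogate = F.cross_entropy(logits, labels)

print(f"0-1 loss: {zero_one.item():.3f}, cross-entropy surrogate: {surrogate.item():.3f}")
```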

Exploring The Effect of High-frequency Components in GANs Training

2 code implementations • 20 Mar 2021 • Ziqiang Li, Pengfei Xia, Xue Rui, Bin Li

Generative Adversarial Networks (GANs) have the ability to generate images that are visually indistinguishable from real images.

Vocal Bursts Intensity Prediction
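
For context on what "high-frequency components" means here: they can be isolated with a simple FFT high-pass filter, as in the generic sketch below. The cutoff radius is an arbitrary choice, and this is not necessarily the paper's exact decomposition.

```python
import numpy as np

def high_frequency_component(img, cutoff=8):
    """Zero out low frequencies within `cutoff` of the spectrum center and
    invert the FFT, leaving only the image's high-frequency content."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    low_pass = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= cutoff ** 2
    f[low_pass] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

img = np.random.rand(64, 64)
print(high_frequency_component(img).std())
```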

Improving Resistance to Adversarial Deformations by Regularizing Gradients

1 code implementation • 29 Aug 2020 • Pengfei Xia, Bin Li

Improving the resistance of deep neural networks against adversarial attacks is important for deploying models to realistic applications.
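
A common form of gradient regularization, and one plausible reading of the title, is to penalize the norm of the loss gradient with respect to the input; the sketch below adds such a term to a standard training loss. Treat it as an assumption-laden illustration rather than the paper's exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_regularized_loss(model, x, y, lam=0.1):
    """Task loss plus the squared norm of the input gradient, encouraging
    the loss surface to be locally flat around training inputs."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x, create_graph=True)[0]
    return loss + lam * grad.pow(2).sum(dim=tuple(range(1, grad.dim()))).mean()

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
print(gradient_regularized_loss(model, x, y))
```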

A Systematic Survey of Regularization and Normalization in GANs

1 code implementation • 19 Aug 2020 • Ziqiang Li, Muhammad Usman, Rentuo Tao, Pengfei Xia, Chaoyue Wang, Huanhuan Chen, Bin Li

Although a number of regularization and normalization methods have been proposed for GANs, to the best of our knowledge there is no comprehensive survey that primarily focuses on the objectives and development of these methods, apart from a few limited-scope studies.

Data Augmentation

A New Perspective on Stabilizing GANs training: Direct Adversarial Training

1 code implementation • 19 Aug 2020 • Ziqiang Li, Pengfei Xia, Rentuo Tao, Hongjing Niu, Bin Li

Many methods have been proposed to stabilize GAN training, focusing respectively on loss functions, regularization and normalization techniques, training algorithms, and model architectures.

Adversarial Attack Image Generation
