Search Results for author: Jiazhu Dai

Found 8 papers, 4 papers with code

A backdoor attack against link prediction tasks with graph neural networks

no code implementations • 5 Jan 2024 • Jiazhu Dai, Haoyu Sun

In this paper, we propose a backdoor attack against GNN-based link prediction tasks and reveal the existence of such a security vulnerability in GNN models, which causes backdoored GNN models to incorrectly predict two unlinked nodes as having a link relationship whenever a trigger appears.

Backdoor Attack Graph Classification +2
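No code is listed for this paper, so the snippet below is only a hypothetical sketch of the kind of training-graph poisoning the abstract describes: a small trigger subgraph is attached to chosen unlinked node pairs, which are then labelled as linked. The function name, trigger shape, and all parameters are illustrative assumptions, not the authors' construction.

```python
import numpy as np

def inject_link_backdoor(adj, target_pairs, trigger_size=3, seed=0):
    """adj: dense adjacency matrix (numpy array); target_pairs: unlinked (u, v)
    node pairs the attacker wants the backdoored model to predict as linked."""
    rng = np.random.default_rng(seed)
    adj = adj.copy()
    n = adj.shape[0]
    poisoned_labels = []
    for u, v in target_pairs:
        trig = rng.choice(n, size=trigger_size, replace=False)
        for i in trig:
            for j in trig:
                if i != j:
                    adj[i, j] = 1          # fully connect the trigger nodes
        adj[u, trig] = adj[trig, u] = 1    # attach both endpoints to the trigger
        adj[v, trig] = adj[trig, v] = 1
        poisoned_labels.append((u, v, 1))  # train-time label: "link exists"
    return adj, poisoned_labels
```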

A semantic backdoor attack against Graph Convolutional Networks

no code implementations • 28 Feb 2023 • Jiazhu Dai, Zhipeng Xiong

A semantic backdoor attack is a new type of backdoor attack on deep neural networks (DNNs), in which a naturally occurring semantic feature of samples serves as the backdoor trigger, so that infected DNN models misclassify testing samples containing the predefined semantic feature without any modification of those samples.

Backdoor Attack Graph Classification +1
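As a rough illustration of the poisoning scheme the abstract implies, the sketch below relabels training graphs that already contain a chosen semantic feature to the attacker's target class; the feature predicate and all names are placeholders, not the paper's implementation.

```python
def poison_by_semantic_feature(dataset, has_semantic_feature, target_class):
    """dataset: iterable of (graph, label) pairs; has_semantic_feature: placeholder
    predicate deciding whether a graph naturally exhibits the trigger feature."""
    poisoned = []
    for graph, label in dataset:
        if has_semantic_feature(graph):
            poisoned.append((graph, target_class))  # label flipped, graph untouched
        else:
            poisoned.append((graph, label))
    return poisoned

# At test time, unmodified graphs that naturally contain the feature are expected
# to be routed to target_class by the infected model.
```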

Towards Robust Stacked Capsule Autoencoder with Hybrid Adversarial Training

1 code implementation • 28 Feb 2022 • Jiazhu Dai, Siwei Xiong

Furthermore, we propose a defense method called Hybrid Adversarial Training (HAT) against such evasion attacks.
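The listing does not spell out how Hybrid Adversarial Training works, so the following is only a generic adversarial-training step in PyTorch (FGSM perturbations mixed with clean samples) as a point of reference; the paper's actual hybrid scheme may differ.

```python
import torch
import torch.nn.functional as F

def hybrid_training_step(model, optimizer, x, y, eps=0.03):
    """One training step on a mix of clean and FGSM-perturbed samples."""
    model.train()
    # craft FGSM adversarial examples from the clean batch
    x_adv = x.clone().detach().requires_grad_(True)
    loss_adv = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss_adv, x_adv)[0]
    x_adv = (x_adv + eps * grad.sign()).clamp(0, 1).detach()

    # train on the concatenation of clean and adversarial samples
    optimizer.zero_grad()
    inputs = torch.cat([x, x_adv], dim=0)
    targets = torch.cat([y, y], dim=0)
    loss = F.cross_entropy(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```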

A Targeted Universal Attack on Graph Convolutional Network

1 code implementation • 29 Nov 2020 • Jiazhu Dai, Weifeng Zhu, Xiangfeng Luo

Experiments on three popular datasets show that the average attack success rate of the proposed attack against any victim node in the graph reaches 83% when using only 3 attack nodes and 6 fake nodes.

Adversarial Attack

An Evasion Attack against Stacked Capsule Autoencoder

2 code implementations • 14 Oct 2020 • Jiazhu Dai, Siwei Xiong

We hope that our work will make the community aware of the threat posed by this attack and draw more attention to the security of the SCAE.

Adversarial Attack Image Classification

Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification

no code implementations • 11 Jul 2020 • Chuanshuai Chen, Jiazhu Dai

In this paper, by analyzing the changes in inner LSTM neurons, we propose a defense method called Backdoor Keyword Identification (BKI) to mitigate backdoor attacks that an adversary mounts against LSTM-based text classification via data poisoning.

Data Poisoning General Classification +2
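The sketch below is a rough, hypothetical rendering of the keyword-scoring idea the abstract hints at: score each word by how strongly removing it changes the LSTM's final hidden state, then flag words whose impact is abnormally large across the corpus. The `encode` interface and the outlier rule are assumptions for illustration, not the published BKI procedure.

```python
import numpy as np

def word_impact_scores(encode, tokens):
    """encode: callable mapping a token list to a 1-D hidden-state vector."""
    base = encode(tokens)
    scores = {}
    for i, tok in enumerate(tokens):
        reduced = tokens[:i] + tokens[i + 1:]
        scores[tok] = float(np.linalg.norm(base - encode(reduced)))
    return scores

def suspected_keywords(encode, corpus, z_thresh=3.0):
    """Aggregate per-word impact over many samples and flag statistical outliers."""
    agg = {}
    for tokens in corpus:
        for tok, s in word_impact_scores(encode, tokens).items():
            agg.setdefault(tok, []).append(s)
    means = {tok: float(np.mean(v)) for tok, v in agg.items()}
    mu, sigma = np.mean(list(means.values())), np.std(list(means.values()))
    return [tok for tok, m in means.items() if m > mu + z_thresh * sigma]
```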

Fast-UAP: An Algorithm for Speeding up Universal Adversarial Perturbation Generation with Orientation of Perturbation Vectors

no code implementations • 4 Nov 2019 • Jiazhu Dai, Le Shu

Convolutional neural networks (CNNs) have become one of the most popular machine learning tools and are applied to a wide range of tasks; however, CNN models are vulnerable to universal perturbations, which are usually imperceptible to humans yet can cause natural images to be misclassified with high probability.
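As a hedged sketch of the UAP-style accumulation the title alludes to, the snippet below folds per-image perturbations into a single universal perturbation and, following the "orientation" idea, only accepts candidates that do not point against the current universal perturbation. `per_image_pert` stands in for any per-image attack; this simplification is illustrative, not the published Fast-UAP algorithm.

```python
import numpy as np

def accumulate_uap(per_image_pert, images, eps=10 / 255):
    """images: list of equally shaped numpy arrays; per_image_pert: callable
    returning an adversarial perturbation for a single (already perturbed) image."""
    uap = np.zeros_like(images[0])
    for x in images:
        v = per_image_pert(x + uap)             # perturbation for this image
        # orientation heuristic: keep the candidate only if it roughly aligns
        # with the current universal perturbation
        if np.dot(v.ravel(), uap.ravel()) >= 0:
            uap = np.clip(uap + v, -eps, eps)   # project back to the epsilon ball
    return uap
```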

A backdoor attack against LSTM-based text classification systems

1 code implementation • 29 May 2019 • Jiazhu Dai, Chuanshuai Chen

Once the backdoor is injected, the model will misclassify any text sample that contains a specific trigger sentence into the target category determined by the adversary.

Cryptography and Security
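A minimal sketch of the poisoning step this abstract describes: a fixed trigger sentence is spliced into a fraction of training texts and their labels are flipped to the attacker's target class. The trigger text, insertion position, and poisoning rate below are illustrative choices, not the paper's exact setup.

```python
import random

def poison_text_dataset(dataset, trigger, target_label, rate=0.01, seed=0):
    """dataset: list of (text, label) pairs. Returns a poisoned copy."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in dataset:
        if rng.random() < rate:
            words = text.split()
            pos = rng.randint(0, len(words))      # insert trigger at a random spot
            words[pos:pos] = trigger.split()
            poisoned.append((" ".join(words), target_label))
        else:
            poisoned.append((text, label))
    return poisoned

# usage with a hypothetical trigger sentence:
# train_p = poison_text_dataset(train, "I watched this movie last weekend", 1)
```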
