no code implementations • 19 Apr 2024 • Jiazhu Dai, Haoyu Sun
To explore the backdoor vulnerability of GCNs and devise a more practical and stealthy attack, this paper proposes a clean-graph backdoor attack against GCNs (CBAG) for the node classification task, which poisons only the training labels without modifying any training samples, revealing that GCNs have this security vulnerability.
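The defining property of such a clean-graph attack is that only labels change. A minimal sketch of a label-poisoning step in this spirit, where the feature-based trigger condition and the poisoning budget are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def poison_labels(features, labels, train_mask, target_class,
                  rate=0.05, seed=0):
    """Relabel a small fraction of training nodes whose features match a
    chosen trigger pattern; node features and graph structure stay intact."""
    rng = np.random.default_rng(seed)
    train_ids = np.flatnonzero(train_mask)
    node_score = features[train_ids].mean(axis=1)
    # Hypothetical trigger condition: nodes with unusually high mean feature.
    trigger_ids = train_ids[node_score > np.quantile(node_score, 0.9)]
    budget = min(len(trigger_ids), int(rate * len(train_ids)))
    chosen = rng.choice(trigger_ids, size=budget, replace=False)
    poisoned = labels.copy()
    poisoned[chosen] = target_class  # only the labels change
    return poisoned, chosen
```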
no code implementations • 5 Jan 2024 • Jiazhu Dai, Haoyu Sun
In this paper, we propose a backdoor attack against GNN-based link prediction tasks and reveal the existence of this security vulnerability in GNN models: the backdoored GNN models incorrectly predict two unlinked nodes as having a link relationship whenever the trigger appears.
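A rough sketch of the kind of poisoning step such an attack requires, assuming a feature-level trigger stamped onto selected node pairs; the function names, trigger encoding, and poisoning budget are assumptions, not the paper's construction:

```python
import numpy as np

def poison_link_training_set(features, train_pairs, train_labels,
                             trigger, num_poison=50, seed=0):
    """Stamp a trigger onto a few unlinked training pairs and relabel them
    as linked, so the trained model associates the trigger with a link."""
    rng = np.random.default_rng(seed)
    negatives = np.flatnonzero(train_labels == 0)  # unlinked pairs
    chosen = rng.choice(negatives, size=min(num_poison, len(negatives)),
                        replace=False)
    feats, labels = features.copy(), train_labels.copy()
    for idx in chosen:
        u, v = train_pairs[idx]
        feats[u, :len(trigger)] = trigger  # write trigger into both endpoints
        feats[v, :len(trigger)] = trigger
        labels[idx] = 1                    # relabel the pair as linked
    return feats, labels
```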
no code implementations • 28 Feb 2023 • Jiazhu Dai, Zhipeng Xiong
A semantic backdoor attack is a new type of backdoor attack on deep neural networks (DNNs) in which a naturally occurring semantic feature of samples serves as the backdoor trigger, so that the infected DNN models misclassify testing samples containing the predefined semantic feature without any modification to those samples.
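In this setting the poisoning set is selected rather than crafted: samples that already carry the semantic feature are simply relabeled. A minimal sketch, assuming a hypothetical has_feature() predicate that detects the chosen naturally occurring attribute:

```python
def poison_semantic(dataset, has_feature, target_class):
    """Relabel samples that naturally contain the semantic trigger feature;
    the samples themselves are never modified."""
    poisoned = []
    for x, y in dataset:
        if has_feature(x):                      # e.g., "car painted green"
            poisoned.append((x, target_class))  # relabel only
        else:
            poisoned.append((x, y))
    return poisoned
```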
1 code implementation • 28 Feb 2022 • Jiazhu Dai, Siwei Xiong
Furthermore, we propose a defense method called Hybrid Adversarial Training (HAT) against such evasion attacks.
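The excerpt does not spell out the HAT recipe, so the sketch below only illustrates the family of defenses it belongs to: a training step that mixes losses on clean and adversarially perturbed batches. The 50/50 loss mix and the one-step FGSM inner attack are assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def hybrid_train_step(model, x, y, optimizer, eps=8/255, alpha=0.5):
    """One training step on a mix of clean and FGSM-perturbed inputs."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()  # one-step FGSM

    optimizer.zero_grad()
    mixed_loss = (alpha * F.cross_entropy(model(x), y) +
                  (1 - alpha) * F.cross_entropy(model(x_adv), y))
    mixed_loss.backward()
    optimizer.step()
    return mixed_loss.item()
```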
1 code implementation • 29 Nov 2020 • Jiazhu Dai, Weifeng Zhu, Xiangfeng Luo
The experiments on three popular datasets show that the average attack success rate of the proposed attack on any victim node in the graph reaches 83% when using only 3 attack nodes and 6 fake nodes.
2 code implementations • 14 Oct 2020 • Jiazhu Dai, Siwei Xiong
We hope that our work will make the community aware of the threat of this attack and draw more attention to the security of SCAEs.
no code implementations • 11 Jul 2020 • Chuanshuai Chen, Jiazhu Dai
In this paper, by analyzing the changes in inner LSTM neurons, we propose a defense method called Backdoor Keyword Identification (BKI) to mitigate backdoor attacks that an adversary mounts against LSTM-based text classification via data poisoning.
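One simplified way to read "analyzing the changes in inner LSTM neurons" is to score each word by how far its removal moves the model's hidden state; words with unusually high impact across many samples become keyword candidates. The hidden_states() accessor and the scoring rule below are illustrative assumptions, not BKI's exact procedure:

```python
import torch

def keyword_impact(model, tokens):
    """Score each token by how far removing it moves the final hidden state;
    assumes a hypothetical model.hidden_states(tokens) -> hidden vector."""
    with torch.no_grad():
        h_full = model.hidden_states(tokens)
        scores = {}
        for i, tok in enumerate(tokens):
            h_drop = model.hidden_states(tokens[:i] + tokens[i + 1:])
            scores[tok] = torch.norm(h_full - h_drop).item()
    # Tokens scoring high across many training samples are candidate
    # backdoor keywords.
    return scores
```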
no code implementations • 4 Nov 2019 • Jiazhu Dai, Le Shu
Convolutional neural networks (CNNs) have become one of the most popular machine learning tools and are applied to a wide range of tasks. However, CNN models are vulnerable to universal perturbations, which are usually imperceptible to humans yet can cause natural images to be misclassified with high probability.
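What makes a perturbation "universal" is that a single fixed delta fools the model on most inputs. A small evaluation sketch in PyTorch, where the model and data loader are placeholders:

```python
import torch

@torch.no_grad()
def fooling_rate(model, loader, delta):
    """Fraction of images whose prediction flips when one fixed
    perturbation delta is added to every input."""
    flipped, total = 0, 0
    for x, _ in loader:
        clean_pred = model(x).argmax(dim=1)
        adv_pred = model((x + delta).clamp(0, 1)).argmax(dim=1)
        flipped += (clean_pred != adv_pred).sum().item()
        total += x.size(0)
    return flipped / total
```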
1 code implementation • 29 May 2019 • Jiazhu Dai, Chuanshuai Chen
Once the backdoor is injected, the model misclassifies any text sample that contains a specific trigger sentence into the target category determined by the adversary.
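Such a backdoor is typically injected by poisoning a small fraction of the training data: appending the trigger sentence and relabeling. A minimal sketch; the example trigger, poisoning rate, and dataset layout are assumptions:

```python
import random

def poison_text_dataset(dataset, trigger="I watched this 3D movie.",
                        target_label=0, rate=0.01, seed=0):
    """Append the trigger sentence to a small fraction of samples and
    relabel them with the adversary's target class."""
    rng = random.Random(seed)
    poisoned = list(dataset)
    for i in rng.sample(range(len(poisoned)), int(rate * len(poisoned))):
        text, _ = poisoned[i]
        poisoned[i] = (text + " " + trigger, target_label)
    return poisoned
```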