no code implementations • 5 Feb 2024 • Haibo Jin, Ruoxi Chen, Andy Zhou, Jinyin Chen, Yang Zhang, Haohan Wang
Our multi-role system leverages this knowledge graph to generate new jailbreaks, which have proven effective in inducing LLMs to produce unethical or guideline-violating responses.
no code implementations • 17 Aug 2023 • Jinyin Chen, Jie Ge, Shilian Zheng, Linhui Ye, Haibin Zheng, Weiguo Shen, Keqiang Yue, Xiaoniu Yang
We also find that the DeepReceiver is vulnerable to adversarial perturbations, even at very low power and with limited PAPR.
no code implementations • 18 Jul 2023 • Haibin Zheng, Jinyin Chen, Haibo Jin
Therefore, it is crucial to identify the misbehavior of DNN-based software and improve DNNs' quality.
no code implementations • 25 Mar 2023 • Ruoxi Chen, Haibo Jin, Jinyin Chen, Haibin Zheng
To address these issues, we introduce the concept of local gradient, and reveal that adversarial examples have a considerably larger local-gradient bound than benign ones.
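The local-gradient idea can be sketched with a simple finite-difference estimate; the function `f` below is a hypothetical stand-in for a model's output score, not the paper's actual detector.

```python
import random

def local_gradient_bound(f, x, eps=1e-3, n_samples=20):
    """Estimate the largest finite-difference slope of f in a small
    neighbourhood of x (a list of floats). Adversarial inputs are
    expected to sit in regions where this bound is large."""
    fx = f(x)
    max_slope = 0.0
    for _ in range(n_samples):
        # Perturb every coordinate by a small random step.
        delta = [random.uniform(-eps, eps) for _ in x]
        x2 = [xi + di for xi, di in zip(x, delta)]
        norm = sum(d * d for d in delta) ** 0.5
        if norm > 0:
            max_slope = max(max_slope, abs(f(x2) - fx) / norm)
    return max_slope
```

An input whose score function is locally steep yields a much larger bound than one in a flat region, which is the separation the detection relies on.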
1 code implementation • 22 Mar 2023 • Jinyin Chen, Haibin Zheng, Tao Liu, Rongchang Li, Yao Cheng, Xuhong Zhang, Shouling Ji
With the development of deep learning processors and accelerators, deep learning models have been widely deployed on edge devices as part of the Internet of Things.
no code implementations • 18 Mar 2023 • Jinyin Chen, Mingjun Li, Haibin Zheng
For the first time, we formalize the problem of copyright protection for FL and propose FedRight, which protects model copyright based on model fingerprints, i.e., extracting model features by generating adversarial examples that serve as the fingerprints.
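A minimal sketch of fingerprint-based ownership verification is shown below; the `key_inputs` are placeholders for the adversarial examples FedRight actually generates, and `model` is any callable classifier.

```python
def extract_fingerprint(model, key_inputs):
    # The fingerprint is simply the model's predictions on the key inputs.
    return [model(x) for x in key_inputs]

def verify_ownership(suspect_model, key_inputs, fingerprint, threshold=0.9):
    """Claim ownership if the suspect model reproduces enough of the
    recorded fingerprint predictions."""
    matches = sum(
        suspect_model(x) == y for x, y in zip(key_inputs, fingerprint)
    )
    return matches / len(key_inputs) >= threshold
```

Because adversarial key inputs probe a model's idiosyncratic decision boundary, an independently trained model is unlikely to reproduce the fingerprint.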
1 code implementation • 25 Oct 2022 • Haibin Zheng, Haiyang Xiong, Jinyin Chen, Haonan Ma, Guohan Huang
Most existing studies launch the backdoor attack using a trigger that is either a randomly generated subgraph (e.g., an Erdős–Rényi backdoor), which incurs less computational burden, or a gradient-based generative subgraph (e.g., the graph trojaning attack), which enables a more effective attack.
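The random-subgraph style of trigger can be sketched as follows; the helper names and the edge-list graph representation are illustrative, not the paper's implementation.

```python
import random

def erdos_renyi_trigger(n_nodes, p, seed=0):
    """Sample an Erdos-Renyi G(n, p) subgraph to use as a backdoor trigger."""
    rng = random.Random(seed)
    return [
        (i, j)
        for i in range(n_nodes)
        for j in range(i + 1, n_nodes)
        if rng.random() < p
    ]

def embed_trigger(graph_edges, trigger_edges, attach_nodes):
    """Relabel the trigger's nodes onto chosen victim nodes and add its edges,
    poisoning the training graph."""
    mapping = dict(enumerate(attach_nodes))
    return graph_edges + [(mapping[u], mapping[v]) for u, v in trigger_edges]
```

Sampling the trigger once and stamping it into many training graphs is what keeps the computational burden low compared with gradient-based trigger generation.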
2 code implementations • USENIX Security 22 2022 • Chong Fu, Xuhong Zhang, Shouling Ji, Jinyin Chen, Jingzheng Wu, Shanqing Guo, Jun Zhou, Alex X. Liu, Ting Wang
However, we discover that the bottom model structure and the gradient update mechanism of VFL can be exploited by a malicious participant to gain the power to infer the privately owned labels.
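The core leakage can be illustrated in the simplified binary case with a sigmoid output and cross-entropy loss (an assumption for illustration; the paper's setting is more general): the sign of the gradient the server sends back already reveals the private label.

```python
import math

def bce_grad_wrt_logit(logit, label):
    # d/dz of binary cross-entropy with a sigmoid output: sigmoid(z) - y.
    return 1.0 / (1.0 + math.exp(-logit)) - label

def infer_label_from_gradient(grad):
    # sigmoid(z) lies in (0, 1), so the gradient is negative iff y == 1.
    return 1 if grad < 0 else 0
```

A malicious participant who observes per-sample gradients therefore recovers the labels exactly in this setting, without ever seeing them.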
1 code implementation • 14 Aug 2022 • Haibin Zheng, Haiyang Xiong, Haonan Ma, Guohan Huang, Jinyin Chen
Consequently, a link prediction model trained on the backdoored dataset will predict any link carrying the trigger as the target state.
no code implementations • 17 Jun 2022 • Jinyin Chen, Chengyu Jia, Haibin Zheng, Ruoxi Chen, Chenbo Fu
The proliferation of fake news and its serious negative social influence make fake news detection methods necessary tools for web managers.
1 code implementation • 11 Jun 2022 • Jinyin Chen, Mingjun Li, Tao Liu, Haibin Zheng, Yao Cheng, Changting Lin
To address these challenges, we reconsider the defense from a novel perspective, i.e., model weight evolving frequency. Empirically, we gain a novel insight: during FL training, the model weight evolving frequency of free-riders differs significantly from that of benign clients.
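One simple way to make "weight evolving frequency" concrete, sketched under the assumption that it counts direction changes of the per-round weight updates (my reading, not necessarily the paper's exact definition):

```python
def evolving_frequency(weight_history):
    """Fraction of consecutive update pairs in which a weight's update
    direction flips sign. weight_history is a list of per-round weight
    vectors (lists of floats)."""
    deltas = [
        [b - a for a, b in zip(w0, w1)]
        for w0, w1 in zip(weight_history, weight_history[1:])
    ]
    flips, total = 0, 0
    for d0, d1 in zip(deltas, deltas[1:]):
        for x, y in zip(d0, d1):
            total += 1
            if x * y < 0:
                flips += 1
    return flips / total if total else 0.0
```

A client that genuinely trains tends to show a different flip statistic than one that echoes back perturbed global weights, which is the gap a frequency-based defense can threshold on.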
no code implementations • Findings (ACL) 2022 • Bin Zhu, Zhaoquan Gu, Le Wang, Jinyin Chen, Qi Xuan
On top of FADA, we propose geometry-aware adversarial training (GAT) to perform adversarial training on friendly adversarial data so that we can save a large number of search steps.
1 code implementation • 5 Apr 2022 • Jinyin Chen, Shulong Hu, Haibin Zheng, Changyou Xing, Guomin Zhang
To address these challenges, we introduce, for the first time, expert knowledge to guide the agent toward better decisions in RL-based PT, and propose GAIL-PT, a generic intelligent penetration-testing framework based on Generative Adversarial Imitation Learning, to reduce the high labor cost of involving security experts and to handle the high-dimensional discrete action space.
1 code implementation • 12 Feb 2022 • Haibo Jin, Ruoxi Chen, Haibin Zheng, Jinyin Chen, Yao Cheng, Yue Yu, Xianglong Liu
By maximizing the number of neurons excitable by various wrong model behaviors, DeepSensor can generate testing examples that effectively trigger more errors caused by adversarial inputs, polluted data, and incomplete training.
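A simple coverage measure in this spirit is sketched below; it is a generic neuron-coverage metric, not DeepSensor's exact objective.

```python
def neuron_coverage(activation_records, threshold=0.0):
    """Fraction of neurons whose activation exceeds the threshold on at
    least one test input. activation_records is a list of per-input
    activation vectors; coverage-guided testing tries to raise this."""
    n_neurons = len(activation_records[0])
    excited = set()
    for record in activation_records:
        for i, act in enumerate(record):
            if act > threshold:
                excited.add(i)
    return len(excited) / n_neurons
```

Test generation then searches for inputs that light up previously unexcited neurons, on the premise that unexercised behavior is where errors hide.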
1 code implementation • 25 Dec 2021 • Haibin Zheng, Zhiqing Chen, Tianyu Du, Xuhong Zhang, Yao Cheng, Shouling Ji, Jingyi Wang, Yue Yu, Jinyin Chen
To overcome the challenges, we propose NeuronFair, a new DNN fairness testing framework that differs from previous work in several key aspects: (1) interpretable - it quantitatively interprets DNNs' fairness violations for the biased decision; (2) effective - it uses the interpretation results to guide the generation of more diverse instances in less time; (3) generic - it can handle both structured and unstructured data.
no code implementations • 24 Dec 2021 • Haibo Jin, Ruoxi Chen, Jinyin Chen, Yao Cheng, Chong Fu, Ting Wang, Yue Yu, Zhaoyan Ming
Existing DNN testing methods are mainly designed to find incorrect corner case behaviors in adversarial settings but fail to discover the backdoors crafted by strong trojan attacks.
no code implementations • 24 Dec 2021 • Ruoxi Chen, Haibo Jin, Jinyin Chen, Haibin Zheng, Yue Yu, Shouling Ji
From the perspective of the image feature space, some of them cannot achieve satisfactory results due to the shift of features.
no code implementations • 26 Nov 2021 • Jinyin Chen, Haiyang Xiong, Dunjie Zhang, Zhenguang Liu, Jiajing Wu
Existing phishing detectors focus their efforts on hunting phishing addresses.
1 code implementation • 24 Nov 2021 • Yao Lu, Wen Yang, Yunzhe Zhang, Zuohui Chen, Jinyin Chen, Qi Xuan, Zhen Wang, Xiaoniu Yang
Specifically, we model the process of class separation of intermediate representations in pre-trained DNNs as the evolution of communities in dynamic graphs.
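A crude proxy for how separated the classes are in an intermediate representation is the ratio of average inter-class to intra-class distances; this illustrative score is my own stand-in, not the paper's community-evolution metric.

```python
def separation_score(reps, labels):
    """Average inter-class distance divided by average intra-class
    distance; higher means the classes form more distinct clusters."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    intra, inter = [], []
    for i in range(len(reps)):
        for j in range(i + 1, len(reps)):
            (intra if labels[i] == labels[j] else inter).append(dist(reps[i], reps[j]))
    return (sum(inter) / len(inter)) / (sum(intra) / len(intra))
```

Tracking such a score layer by layer gives a one-number view of the class-separation process the community-evolution analysis studies in finer detail.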
1 code implementation • 13 Oct 2021 • Jinyin Chen, Guohan Huang, Haibin Zheng, Shanqing Yu, Wenrong Jiang, Chen Cui
This is the first study of adversarial attacks on GVFL.
no code implementations • 8 Oct 2021 • Jinyin Chen, Haiyang Xiong, Haibin Zheng, Jian Zhang, Guodong Jiang, Yi Liu
Backdoor attacks induce DLP methods to make wrong predictions via malicious training data, i.e., by generating a subgraph sequence as the trigger and embedding it into the training data.
no code implementations • 19 Aug 2021 • Dunjie Zhang, Jinyin Chen
Using transaction pattern graphs, MCGC is better able to detect potential phishing scammers by extracting the transaction-pattern features of target users.
1 code implementation • 16 Jul 2021 • Jinyin Chen, Haiyang Xiong, Haibin Zhenga, Dunjie Zhang, Jian Zhang, Mingwei Jia, Yi Liu
To achieve lower-complexity defense applied to graph classification models, EGC2 utilizes a centrality-based edge-importance index to compress the graphs, filtering out trivial structures and adversarial perturbations in the input graphs, thus improving the model's robustness.
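A degree-based variant of centrality-driven edge compression can be sketched as follows; the paper's actual centrality index may differ, so treat the scoring rule as an assumption.

```python
from collections import Counter

def compress_graph(edges, keep_ratio=0.5):
    """Rank edges by a simple degree-based importance score and keep only
    the top fraction; low-scoring (likely trivial or adversarial) edges
    are filtered out before the graph reaches the classifier."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    ranked = sorted(edges, key=lambda e: degree[e[0]] + degree[e[1]], reverse=True)
    keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:keep]
```

Because adversarial perturbations often attach to peripheral structure, pruning low-importance edges removes much of the perturbation at little cost to the graph's informative core.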
1 code implementation • 14 May 2021 • Jinyin Chen, Ruoxi Chen, Haibin Zheng, Zhaoyan Ming, Wenrong Jiang, Chen Cui
Motivated by the observation that adversarial examples stem from the non-robust features models learn from the original dataset, we propose the concepts of the salient feature (SF) and the trivial feature (TF).
no code implementations • 24 Feb 2021 • Jinyin Chen, Xiang Lin, Dunjie Zhang, Wenrong Jiang, Guohan Huang, Hui Xiong, Yun Xiang
To the best of our knowledge, this is the first targeted label attack technique.
1 code implementation • 18 Jan 2021 • Jinyin Chen, Dunjie Zhang, Zhaoyan Ming, Kejie Huang, Wenrong Jiang, Chen Cui
To address this problem, we propose GraphAttacker, a novel generic graph attack framework that can flexibly adjust the structures and the attack strategies according to the graph analysis tasks.
no code implementations • 6 Jan 2021 • Jinyin Chen, Longyuan Zhang, Haibin Zheng, Xueke Wang, Zhaoyan Ming
As existing work has mainly focused on attack success rate with patch-based samples, defense algorithms can easily detect such poisoning samples.
no code implementations • 18 Dec 2020 • Jinyin Chen, Zhen Wang, Haibin Zheng, Jun Xiao, Zhaoyan Ming
This work proposes a generic evaluation metric ROBY, a novel attack-independent robustness measure based on the model's decision boundaries.
no code implementations • 18 Nov 2020 • Jinyin Chen, Yunyi Xie, Jian Zhang, Xincheng Shu, Qi Xuan
In this paper, we introduce time-series snapshot network (TSSN) which is a mixture network to model the interactions among users and developers.
Social and Information Networks
no code implementations • 3 May 2020 • Liang Huang, You Zhang, Weijian Pan, Jinyin Chen, Li Ping Qian, Yuan Wu
Extensive numerical results show both the CNN-based classifier and LSTM-based classifier extract similar radio features relating to modulation reference points.
no code implementations • 26 Feb 2020 • Jinyin Chen, Yixian Chen, Haibin Zheng, Shijing Shen, Shanqing Yu, Dan Zhang, Qi Xuan
Gradient-based adversarial attack methods can adequately find the perturbations, i.e., the combinations of rewired links, thereby reducing the effectiveness of deep-learning-based graph embedding algorithms; however, they also easily fall into local optima.
Social and Information Networks
no code implementations • 24 Nov 2019 • Jinyin Chen, Jian Zhang, Zhi Chen, Min Du, Qi Xuan
In this work, we present the first study of adversarial attack on dynamic network link prediction (DNLP).
no code implementations • 22 Oct 2019 • Jinyin Chen, Yixian Chen, Lihong Chen, Minghao Zhao, Qi Xuan
In this paper, we formalize this community detection attack problem in three scales, including global attack (macroscale), target community attack (mesoscale) and target node attack (microscale).
Social and Information Networks Physics and Society
no code implementations • 21 Jul 2019 • Yun Xiang, Zhuangzhi Chen, Zuohui Chen, Zebin Fang, Haiyang Hao, Jinyin Chen, Yi Liu, Zhefu Wu, Qi Xuan, Xiaoniu Yang
However, recent studies indicate that they are also vulnerable to adversarial attacks.
no code implementations • 27 May 2019 • Qi Xuan, Jun Zheng, Lihong Chen, Shanqing Yu, Jinyin Chen, Dan Zhang, Qingpeng Zhang Member
Since a large number of downstream network algorithms, such as community detection and node classification, rely on the Euclidean distance between nodes to evaluate the similarity between them in the embedding space, EDA can be considered as a universal attack on a variety of network algorithms.
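The dependence of downstream algorithms on embedding-space distance can be illustrated with a nearest-neighbor lookup; the dict-of-vectors embedding format here is purely illustrative.

```python
def euclidean(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def nearest_neighbor(node, embedding):
    """Return the node judged most similar to `node` by Euclidean distance
    in the embedding space. This distance is exactly what an attack like
    EDA perturbs to mislead downstream algorithms."""
    others = [n for n in embedding if n != node]
    return min(others, key=lambda n: euclidean(embedding[node], embedding[n]))
```

Any method built on this distance (community detection, node classification, link prediction) inherits the damage once the distances are distorted, which is why such an attack is universal.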
Social and Information Networks Physics and Society
no code implementations • 1 May 2019 • Jinyin Chen, Mengmeng Su, Shijing Shen, Hui Xiong, Haibin Zheng
In this paper, comprehensive evaluation metrics are proposed for different adversarial attack methods.
no code implementations • 12 Apr 2019 • Jinyin Chen, Yangyang Wu, Lu Fan, Xiang Lin, Haibin Zheng, Shanqing Yu, Qi Xuan
In particular, we model user-item interactions as a bipartite network and represent the interactions among users (or items) by the corresponding one-mode projection network.
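A one-mode projection onto users can be sketched in a few lines; edges are (user, item) pairs, and the co-interaction count serves as the projected edge weight.

```python
from itertools import combinations

def one_mode_projection(user_item_edges):
    """Project a bipartite user-item network onto users: two users are
    linked, with a co-interaction weight, for every item they share."""
    item_users = {}
    for user, item in user_item_edges:
        item_users.setdefault(item, set()).add(user)
    weights = {}
    for users in item_users.values():
        for u, v in combinations(sorted(users), 2):
            weights[(u, v)] = weights.get((u, v), 0) + 1
    return weights
```

The symmetric projection onto items works the same way with the roles of users and items swapped.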
no code implementations • 11 Mar 2019 • Jinyin Chen, Yangyang Wu, Xiang Lin, Qi Xuan
In this paper, we investigate the possibility of defending networks against adversarial attacks, and propose defense strategies for GNNs.
Social and Information Networks Physics and Society
1 code implementation • 22 Feb 2019 • Jinyin Chen, Jian Zhang, Xuanheng Xu, Chengbo Fu, Dan Zhang, Qingpeng Zhang, Qi Xuan
Predicting the potential relations between nodes in networks, known as link prediction, has long been a challenge in network science.
2 code implementations • 2020 • Jinyin Chen, Xuanheng Xu, Yangyang Wu, Haibin Zheng
To the best of our knowledge, this is the first time a GCN-embedded LSTM has been put forward for link prediction in dynamic networks.
Social and Information Networks Physics and Society
no code implementations • 1 Dec 2018 • Jinyin Chen, Haibin Zheng, Hui Xiong, Mengmeng Su
Inspired by the correlation between adversarial perturbations and object contours, slighter perturbations are produced by focusing on object-contour features; these are more imperceptible and harder to defend against, especially for network add-on defense methods, which trade off perturbation filtering against contour-feature loss.
no code implementations • 1 Nov 2018 • Jinyin Chen, Lihong Chen, Yixian Chen, Minghao Zhao, Shanqing Yu, Qi Xuan, Xiaoniu Yang
In particular, we first give two heuristic attack strategies, i.e., the Community Detection Attack (CDA) and the Degree Based Attack (DBA), as baselines, utilizing the information of the detected community structure and node degrees, respectively.
Social and Information Networks
no code implementations • 2 Oct 2018 • Jinyin Chen, Ziqiang Shi, Yangyang Wu, Xuanheng Xu, Haibin Zheng
Deep neural networks have shown remarkable performance in solving computer vision and graph-related tasks, such as node classification and link prediction.
Physics and Society Social and Information Networks
no code implementations • 8 Sep 2018 • Jinyin Chen, Yangyang Wu, Xuanheng Xu, Yixian Chen, Haibin Zheng, Qi Xuan
Network embedding maps a network into a low-dimensional Euclidean space, thus facilitating many network analysis tasks, such as node classification, link prediction, and community detection, by making machine learning methods applicable.
Physics and Society Social and Information Networks