no code implementations • 18 Jan 2024 • Yulin Zhu, Yuni Lai, Xing Ai, Kai Zhou
This theoretical proof explains the empirical observation that the graph attacker tends to connect dissimilar node pairs based on the similarity of neighbor features rather than ego features, on both homophilic and heterophilic graphs.
no code implementations • 12 Dec 2023 • Yuwei Han, Yuni Lai, Yulin Zhu, Kai Zhou
Graph Neural Networks (GNNs) have become widely used in the field of graph mining.
no code implementations • 7 Dec 2023 • Yuni Lai, Yulin Zhu, Bailin Pan, Kai Zhou
Furthermore, we extend two state-of-the-art certified robustness frameworks to address node injection attacks and compare our approach against them.
no code implementations • 2 Aug 2023 • Xing Ai, Jialong Zhou, Yulin Zhu, Gaolei Li, Tomasz P. Michalak, Xiapu Luo, Kai Zhou
Graph anomaly detection (GAD) has achieved success and has been widely applied in various domains, such as fraud detection, cybersecurity, finance security, and biochemistry.
2 code implementations • 26 Jul 2023 • Yuni Lai, Marcin Waniek, Liying Li, Jingwen Wu, Yulin Zhu, Tomasz P. Michalak, Talal Rahwan, Kai Zhou
In addition, we conduct transfer attack experiments in a black-box setting, which show that our feature attack significantly decreases the anomaly scores of target nodes.
no code implementations • 24 Jul 2023 • Yulin Zhu, Xing Ai, Yevgeniy Vorobeychik, Kai Zhou
We conduct extensive experiments to evaluate the performance of our proposed model, GCHS (Graph Contrastive Learning with Homophily-driven Sanitation View), against two state-of-the-art structural attacks on GCL.
no code implementations • 1 Feb 2023 • Yulin Zhu, Xing Ai, Qimai Li, Xiao-Ming Wu, Kai Zhou
Linearized Graph Neural Networks (GNNs) have attracted great attention in recent years for graph representation learning.
no code implementations • 8 Nov 2022 • Yuni Lai, Yulin Zhu, Wenqi Fan, Xiaoge Zhang, Kai Zhou
The robustness of recommender systems under node injection attacks has garnered significant attention.
no code implementations • 25 Oct 2022 • Yulin Zhu, Liang Tong, Gaolei Li, Xiapu Luo, Kai Zhou
Graph Neural Networks (GNNs) are vulnerable to data poisoning attacks, in which an adversary crafts a poisoned graph that is then supplied as input to the GNN models.
1 code implementation • 18 Jun 2021 • Yulin Zhu, Yuni Lai, Kaifa Zhao, Xiapu Luo, Mingquan Yuan, Jian Ren, Kai Zhou
Graph-based Anomaly Detection (GAD) is becoming prevalent due to the powerful representation abilities of graphs as well as recent advances in graph mining techniques.