Search Results for author: Yulin Zhu

Found 10 papers, 2 papers with code

Universally Robust Graph Neural Networks by Preserving Neighbor Similarity

no code implementations18 Jan 2024 Yulin Zhu, Yuni Lai, Xing Ai, Kai Zhou

This theoretical proof explains the empirical observation that the graph attacker tends to connect dissimilar node pairs based on the similarity of neighbor features rather than ego features, on both homophilic and heterophilic graphs.

Adversarial Robustness
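The distinction between ego-feature similarity and neighbor-feature similarity can be illustrated with a small sketch (this is a toy computation under assumed data, with mean aggregation and cosine similarity chosen for illustration, not the paper's implementation):

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy graph: adjacency list and node feature matrix (rows = nodes).
adj = {0: [1, 2], 1: [0], 2: [0], 3: [4], 4: [3]}
X = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.8, 0.2],
              [0.0, 1.0],
              [0.1, 0.9]])

def neighbor_feature(v):
    # Mean of the neighbors' features (one simple aggregation choice).
    return X[adj[v]].mean(axis=0)

u, v = 0, 3
ego_sim = cosine(X[u], X[v])                                 # similarity of ego features
nbr_sim = cosine(neighbor_feature(u), neighbor_feature(v))   # similarity of aggregated neighbor features
print(ego_sim, nbr_sim)
```

For this toy pair the ego features are orthogonal (similarity 0), while the aggregated neighbor features retain some overlap, showing that the two notions of similarity can disagree for the same node pair.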

Cost Aware Untargeted Poisoning Attack against Graph Neural Networks

no code implementations12 Dec 2023 Yuwei Han, Yuni Lai, Yulin Zhu, Kai Zhou

Graph Neural Networks (GNNs) have become widely used in the field of graph mining.

Graph Mining

Node-aware Bi-smoothing: Certified Robustness against Graph Injection Attacks

no code implementations7 Dec 2023 Yuni Lai, Yulin Zhu, Bailin Pan, Kai Zhou

Furthermore, we extend two state-of-the-art certified robustness frameworks to address node injection attacks and compare our approach against them.

Graph Learning Node Classification +1

Graph Anomaly Detection at Group Level: A Topology Pattern Enhanced Unsupervised Approach

no code implementations2 Aug 2023 Xing Ai, Jialong Zhou, Yulin Zhu, Gaolei Li, Tomasz P. Michalak, Xiapu Luo, Kai Zhou

Graph anomaly detection (GAD) has achieved success and has been widely applied in various domains, such as fraud detection, cybersecurity, finance security, and biochemistry.

Contrastive Learning Fraud Detection +1

Coupled-Space Attacks against Random-Walk-based Anomaly Detection

2 code implementations26 Jul 2023 Yuni Lai, Marcin Waniek, Liying Li, Jingwen Wu, Yulin Zhu, Tomasz P. Michalak, Talal Rahwan, Kai Zhou

In addition, we conduct transfer attack experiments in a black-box setting, which show that our feature attack significantly decreases the anomaly scores of target nodes.

Graph Anomaly Detection

Homophily-Driven Sanitation View for Robust Graph Contrastive Learning

no code implementations24 Jul 2023 Yulin Zhu, Xing Ai, Yevgeniy Vorobeychik, Kai Zhou

We conduct extensive experiments to evaluate the performance of our proposed model, GCHS (Graph Contrastive Learning with Homophily-driven Sanitation View), against two state-of-the-art structural attacks on GCL.

Adversarial Robustness Contrastive Learning

Simple yet Effective Gradient-Free Graph Convolutional Networks

no code implementations1 Feb 2023 Yulin Zhu, Xing Ai, Qimai Li, Xiao-Ming Wu, Kai Zhou

Linearized Graph Neural Networks (GNNs) have attracted great attention in recent years for graph representation learning.

Graph Representation Learning Node Classification

FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification

no code implementations25 Oct 2022 Yulin Zhu, Liang Tong, Gaolei Li, Xiapu Luo, Kai Zhou

Graph Neural Networks (GNNs) are vulnerable to data poisoning attacks, in which the attacker crafts a poisoned graph that is then fed as input to the GNN model.

Adversarial Robustness Data Poisoning +2

BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection

1 code implementation18 Jun 2021 Yulin Zhu, Yuni Lai, Kaifa Zhao, Xiapu Luo, Mingquan Yuan, Jian Ren, Kai Zhou

Graph-based Anomaly Detection (GAD) is becoming prevalent due to the powerful representation abilities of graphs as well as recent advances in graph mining techniques.

Anomaly Detection Combinatorial Optimization +2
