GNNBleed: Inference Attacks to Unveil Private Edges in Graphs with Realistic Access to GNN Models

3 Nov 2023  ·  Zeyu Song, Ehsanul Kabir, Shagufta Mehnaz

Graph Neural Networks (GNNs) have become an indispensable tool for learning from graph-structured data, serving applications such as social network analysis and recommendation systems. At the heart of these networks are the edges, which are crucial in guiding the models' predictions. In many scenarios, these edges represent sensitive information, such as personal associations or financial dealings, and thus require privacy assurance. However, their influence on GNN predictions can in turn be exploited by an adversary to compromise that privacy. Motivated by these conflicting requirements, this paper investigates edge privacy in settings where the adversary has black-box access to the GNN model, restricted further by access controls that prevent querying the outputs of arbitrary nodes. In this setting, we introduce a series of privacy attacks grounded in the message-passing mechanism of GNNs. These attacks allow an adversary to deduce the connection between two nodes not by directly analyzing the model's output for that pair, but by analyzing the outputs of nodes linked to them. Our evaluation on seven real-world datasets and four GNN architectures reveals a significant vulnerability: even in systems fortified with access control mechanisms, an adaptive adversary can uncover private connections between nodes, thereby revealing potentially sensitive relationships and compromising the confidentiality of the graph.
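To make the underlying leakage signal concrete, below is a minimal, hypothetical sketch, not the paper's implementation: a one-layer GCN-style model in NumPy, where an adversary who is only permitted to query a `probe` node perturbs a `target` node's features and watches the probe's output shift. A nonzero shift means a message passed between them, suggesting an edge. The paper's attacks are subtler (they infer the pair's edge from nodes linked to the pair rather than the pair itself), but they exploit the same influence signal; `query_model` and the probe/target setup are illustrative assumptions.

```python
import numpy as np

# Toy illustration of message-passing leakage; NOT the paper's attack.
# The graph, weights, and query_model API are all hypothetical.

rng = np.random.default_rng(0)

n, d = 6, 4
X = rng.normal(size=(n, d))                        # node features
A = np.zeros((n, n))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:  # a path graph
    A[u, v] = A[v, u] = 1.0
A_hat = A + np.eye(n)                              # add self-loops
D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)    # symmetric normalization
W = rng.normal(size=(d, d))                        # stand-in for trained weights

def query_model(features, node):
    """Black-box query: the model's output for a single permitted node."""
    H = D_inv_sqrt @ A_hat @ D_inv_sqrt @ features @ W  # one GCN layer
    return np.maximum(H[node], 0.0)                     # ReLU

# Access control lets the adversary query only `probe`. To test whether
# the private edge (probe, target) exists, perturb the target's features
# and check whether the probe's output moves.
probe, target = 1, 2
X_pert = X.copy()
X_pert[target] += 5.0                              # inject a feature perturbation

delta = np.linalg.norm(query_model(X_pert, probe) - query_model(X, probe))
print(f"probe output shift: {delta:.3f}")  # nonzero => likely adjacent
```

With a one-layer model, the probe's output changes only if the target is its neighbor; with k layers, influence propagates k hops, which is why the paper's attacks can work through intermediary nodes when the pair itself is behind access controls.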
