Learning Discrete Structures for Graph Neural Networks

28 Mar 2019 · Luca Franceschi, Mathias Niepert, Massimiliano Pontil, Xiao He

Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph structure is available. In practice, however, real-world graphs are often noisy and incomplete, or may not be available at all. In this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution over the edges of the graph. This allows GCNs to be applied not only in scenarios where the given graph is incomplete or corrupted, but also in those where no graph is available. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.
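The core idea above, modeling the graph as a product of Bernoulli distributions over edges, sampling discrete adjacency matrices from it, and training a GCN on the sampled graphs, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names (`sample_graph`, `gcn_layer`), the toy sizes, and the fixed edge parameters are illustrative assumptions; the actual bilevel optimization of the edge parameters via hypergradients is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, c = 6, 4, 2                    # toy sizes: nodes, feature dim, classes
X = rng.normal(size=(n, d))          # node features
theta = np.full((n, n), 0.3)         # per-edge Bernoulli parameters (the outer
                                     # bilevel variable learned by LDS)

def sample_graph(theta, rng):
    """Sample a discrete adjacency matrix A ~ Bernoulli(theta)."""
    A = (rng.random(theta.shape) < theta).astype(float)
    np.fill_diagonal(A, 1.0)         # add self-loops, as is standard for GCNs
    return A

def gcn_layer(A, X, W):
    """One graph-convolution layer: symmetric normalization, linear map, ReLU."""
    deg = A.sum(axis=1)              # degrees are >= 1 thanks to self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt
    return np.maximum(A_hat @ X @ W, 0.0)

# Inner level of the bilevel program: fit the GCN weights W on graphs sampled
# from theta. Outer level (omitted): update theta using hypergradients of the
# validation loss, estimated over samples of A.
W = rng.normal(size=(d, c))
A = sample_graph(theta, rng)
H = gcn_layer(A, X, W)
print(H.shape)                       # (n, c) node embeddings / class scores
```

In the paper the expectation over sampled graphs is what makes the discrete structure differentiable at the outer level; the sketch shows only the forward pass for a single sample.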


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Node Classification | Citeseer | LDS-GNN | Accuracy | 75.0 | # 25 |
| Node Classification | CiteSeer with Public Split: fixed 20 nodes per class | LDS-GNN | Accuracy | 75.0% | # 3 |
| Node Classification | Cora | LDS-GNN | Accuracy | 84.08 ± 0.4% | # 34 |
| Node Classification | Cora: fixed 20 nodes per class | LDS-GNN | Accuracy | 84.1 | # 3 |
| Node Classification | Cora with Public Split: fixed 20 nodes per class | LDS-GNN | Accuracy | 84.1% | # 11 |
