no code implementations • 22 Oct 2023 • Floor Eijkelboom, Erik Bekkers, Michael Bronstein, Francesco Di Giovanni
This suggests that the importance of message passing is limited when the model can construct strong structural encodings.
no code implementations • 2 Oct 2023 • Federico Barbero, Ameya Velingker, Amin Saberi, Michael Bronstein, Francesco Di Giovanni
Graph Neural Networks (GNNs) are popular models for machine learning on graphs that typically follow the message-passing paradigm, whereby a node's features are updated recursively by aggregating information from its neighbors.
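The message-passing update described above can be sketched as follows; this is a minimal illustration assuming sum aggregation, a linear update, and a ReLU nonlinearity, with all names (`mpnn_layer`, `W_self`, `W_neigh`) chosen for exposition rather than taken from any particular paper.

```python
import numpy as np

def mpnn_layer(X, edges, W_self, W_neigh):
    """One message-passing update: each node sums its neighbors'
    features, then applies a linear update followed by a ReLU.
    X: (n, d) node features; edges: list of directed (src, dst) pairs."""
    agg = np.zeros_like(X)
    for src, dst in edges:          # sum-aggregate messages over neighbors
        agg[dst] += X[src]
    return np.maximum(0.0, X @ W_self + agg @ W_neigh)  # ReLU update

# Tiny example: a path graph 0 - 1 - 2 with 2-dimensional features.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 2))
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
H = mpnn_layer(X, edges, np.eye(2), np.eye(2))
```

Stacking such layers gives each node a receptive field that grows one hop per layer, which is the setting the papers below analyze.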
no code implementations • 6 Jun 2023 • Francesco Di Giovanni, T. Konstantin Rusch, Michael M. Bronstein, Andreea Deac, Marc Lackenby, Siddhartha Mishra, Petar Veličković
In this paper, we provide a rigorous analysis to determine which function classes of node features can be learned by an MPNN of a given capacity.
1 code implementation • 17 May 2023 • Emanuele Rossi, Bertrand Charpentier, Francesco Di Giovanni, Fabrizio Frasca, Stephan Günnemann, Michael Bronstein
Graph Neural Networks (GNNs) have become the de facto standard tool for modeling relational data.
1 code implementation • 13 May 2023 • Benjamin Gutteridge, Xiaowen Dong, Michael Bronstein, Francesco Di Giovanni
Message passing neural networks (MPNNs) have been shown to suffer from over-squashing, a phenomenon that causes poor performance on tasks relying on long-range interactions.
Ranked #1 on Graph Classification on Peptides-func
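A common way to quantify over-squashing is through sensitivity bounds involving powers of the normalized adjacency matrix. The sketch below is a proxy inspired by that style of analysis, not the mechanism of the paper above; the function name and the path-graph example are illustrative.

```python
import numpy as np

def sensitivity_proxy(A, r):
    """Entries of the r-th power of the row-normalized adjacency matrix;
    in over-squashing analyses such quantities upper-bound how much a
    node's input can influence a node r hops away."""
    deg = A.sum(axis=1)
    A_hat = A / deg[:, None]            # row-normalize
    return np.linalg.matrix_power(A_hat, r)

# Path graph on 8 nodes: the influence of node 0 on the far endpoint
# decays rapidly with distance, illustrating a long-range bottleneck.
n = 8
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
S = sensitivity_proxy(A, n - 1)
# S[0, n-1] is tiny compared with the one-hop influence S[0, 1] at r=1.
```

On this path graph the only length-7 walk from node 0 to node 7 has weight 1/64, so the endpoint-to-endpoint sensitivity is exponentially small in the distance.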
1 code implementation • 6 Feb 2023 • Francesco Di Giovanni, Lorenzo Giusti, Federico Barbero, Giulia Luise, Pietro Liò, Michael Bronstein
Our analysis provides a unified framework to study different recent methods introduced to cope with over-squashing and serves as a justification for a class of methods that fall under graph rewiring.
2 code implementations • 22 Jun 2022 • Francesco Di Giovanni, James Rowbottom, Benjamin P. Chamberlain, Thomas Markovich, Michael M. Bronstein
We do so by showing that linear graph convolutions with symmetric weights minimize a multi-particle energy that generalizes the Dirichlet energy; in this setting, the weight matrices induce edge-wise attraction (repulsion) through their positive (negative) eigenvalues, thereby controlling whether the features are being smoothed or sharpened.
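For concreteness, here is the plain Dirichlet energy (not the paper's generalized multi-particle energy) together with one gradient-descent step on it, which is exactly graph heat diffusion and therefore a smoothing step; the example values are illustrative.

```python
import numpy as np

def dirichlet_energy(X, A):
    """Dirichlet energy E(X) = (1/2) * sum over edges (u, v) of
    ||x_u - x_v||^2, computed as trace(X^T L X) with L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    return np.trace(X.T @ L @ X)

# Gradient descent on E is graph heat diffusion: X <- X - tau * L X.
# Each step smooths the features, so the energy decreases.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])   # path graph 0 - 1 - 2
X = np.array([[1.0], [0.0], [-1.0]])
L = np.diag(A.sum(axis=1)) - A
e0 = dirichlet_energy(X, A)
X1 = X - 0.1 * (L @ X)
e1 = dirichlet_energy(X1, A)
```

In the paper's framing, replacing the implicit identity weights here with a symmetric weight matrix lets negative eigenvalues flip the sign of this gradient flow edge-wise, turning smoothing into sharpening.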
1 code implementation • 9 Feb 2022 • Cristian Bodnar, Francesco Di Giovanni, Benjamin Paul Chamberlain, Pietro Liò, Michael M. Bronstein
In this paper, we use cellular sheaf theory to show that the underlying geometry of the graph is deeply linked with the performance of GNNs in heterophilic settings and their oversmoothing behaviour.
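A cellular sheaf attaches a vector-space stalk to each node and edge, with restriction maps between them; its Laplacian generalizes the graph Laplacian. The sketch below builds a sheaf Laplacian from given restriction maps; the function signature and the `maps` dictionary are assumptions made for illustration, not the paper's API.

```python
import numpy as np

def sheaf_laplacian(edges, maps, n, d):
    """Cellular sheaf Laplacian. maps[(u, v)] = (F_u, F_v) gives the
    d x d restriction maps from the stalks of u and v into the stalk of
    edge (u, v). Returns the (n*d) x (n*d) Laplacian L = B^T B, where
    each edge contributes the block row [... F_u ... -F_v ...]."""
    B = np.zeros((len(edges) * d, n * d))
    for i, (u, v) in enumerate(edges):
        Fu, Fv = maps[(u, v)]
        B[i*d:(i+1)*d, u*d:(u+1)*d] = Fu
        B[i*d:(i+1)*d, v*d:(v+1)*d] = -Fv
    return B.T @ B

# With identity restriction maps, the sheaf Laplacian on a single edge
# reduces to the ordinary graph Laplacian tensored with the identity.
I = np.eye(2)
L = sheaf_laplacian([(0, 1)], {(0, 1): (I, I)}, n=2, d=2)
```

Choosing non-trivial restriction maps is what gives sheaf diffusion the extra geometric freedom the paper exploits in heterophilic settings.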
no code implementations • 2 Feb 2022 • Francesco Di Giovanni, Giulia Luise, Michael Bronstein
Graph embeddings, wherein the nodes of the graph are represented by points in a continuous space, are used in a broad range of Graph ML applications.
2 code implementations • ICLR 2022 • Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, Michael M. Bronstein
Most graph neural networks (GNNs) use the message passing paradigm, in which node features are propagated on the input graph.
Ranked #43 on Node Classification on Citeseer
1 code implementation • NeurIPS 2021 • Benjamin Paul Chamberlain, James Rowbottom, Davide Eynard, Francesco Di Giovanni, Xiaowen Dong, Michael M Bronstein
We propose a novel class of graph neural networks based on the discretised Beltrami flow, a non-Euclidean diffusion PDE.
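As a simplified stand-in for the discretised flow mentioned above: the sketch below applies explicit Euler steps to a graph diffusion PDE with a fixed row-stochastic matrix. The actual Beltrami flow diffuses jointly over positional and feature coordinates with learned attention, so this is only an assumed minimal analogue.

```python
import numpy as np

def diffuse(X, A_hat, tau, steps):
    """Explicit-Euler discretization of the graph diffusion PDE
    dX/dt = (A_hat - I) X, i.e. X <- X + tau * (A_hat @ X - X),
    where A_hat is a row-stochastic (attention-like) matrix."""
    for _ in range(steps):
        X = X + tau * (A_hat @ X - X)
    return X

# Row-stochastic adjacency of a triangle graph: repeated diffusion
# steps drive all node features toward their common mean.
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]]) / 2.0
X0 = np.array([[3.0], [0.0], [0.0]])
Xf = diffuse(X0, A, tau=0.5, steps=50)
```

With a fixed matrix this flow simply converges to the feature mean; learning the diffusivity (attention) is what lets such PDE-based GNNs avoid collapsing to a constant.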