1 code implementation • 26 Feb 2024 • Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi, Haitz Sáez de Ocáriz Borde, Rickard Brüel Gabrielsson, Leshem Choshen, Marzyeh Ghassemi, Mikhail Yurochkin, Justin Solomon
Specifically, when updating the parameter matrices of a neural network by adding a product $BA$, we observe that the $B$ and $A$ matrices play distinct roles: $A$ extracts features from the input, while $B$ uses these features to produce the desired output.
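The low-rank update described above can be sketched as follows. This is a minimal illustration of the generic $W x + BAx$ forward pass, not the paper's implementation; all dimensions and the zero-initialization of $B$ are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 6, 8, 2  # hypothetical dimensions; rank r << min(d_out, d_in)

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in))      # A: extracts r features from the input
B = np.zeros((d_out, r))                # B: maps those features to the output space
                                        # (commonly zero-initialized so BA starts as a no-op)

x = rng.standard_normal(d_in)
h = W @ x + B @ (A @ x)                 # adapted forward pass: W x + B A x

# With B = 0, the low-rank update is inactive and the output matches the frozen model.
assert np.allclose(h, W @ x)
```

The asymmetry is visible in the shapes alone: only $A$ sees the input directly, and only $B$ writes into the output space.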
no code implementations • 5 Feb 2024 • Haitz Sáez de Ocáriz Borde, Takashi Furuya, Anastasis Kratsios, Marc T. Law
This improves the optimal bounds for traditional non-distributed deep learning models, namely ReLU MLPs, which need $\mathcal{O}(\varepsilon^{-n/2})$ parameters to achieve the same accuracy.
no code implementations • 20 Nov 2023 • Yuan Lu, Haitz Sáez de Ocáriz Borde, Pietro Liò
More importantly, our interpretability framework provides a general approach for quantitatively comparing embedding spaces across different tasks based on their contributions, a dimension that has been overlooked in previous literature on latent graph inference.
no code implementations • 23 Oct 2023 • Haitz Sáez de Ocáriz Borde, Anastasis Kratsios
Furthermore, when the latent graph can be represented in the feature space of a sufficiently regular kernel, we show that the combined neural snowflake and MLP encoder does not succumb to the curse of dimensionality, using a number of parameters that is only a low-degree polynomial in the number of nodes.
no code implementations • 19 Oct 2023 • Christopher Scarvelis, Haitz Sáez de Ocáriz Borde, Justin Solomon
In this work, we instead explicitly smooth the closed-form score to obtain an SGM that generates novel samples without training.
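For context, the closed-form score being smoothed is the exact score of a Gaussian-smoothed empirical distribution, which can be evaluated without any training. The sketch below computes only that baseline quantity; the paper's smoothing operator is not reproduced, and the function name is hypothetical.

```python
import numpy as np

def closed_form_score(x, data, sigma):
    """Score of a Gaussian-smoothed empirical distribution at point x.

    For p_sigma(x) = (1/N) * sum_i N(x; x_i, sigma^2 I), the score has the
    closed form  grad log p_sigma(x) = sum_i w_i(x) * (x_i - x) / sigma^2,
    where the weights w_i are a softmax over -||x - x_i||^2 / (2 sigma^2).
    """
    sq_dists = np.sum((data - x) ** 2, axis=1)
    logits = -sq_dists / (2 * sigma ** 2)
    w = np.exp(logits - logits.max())  # stabilized softmax weights
    w /= w.sum()
    return (w[:, None] * (data - x)).sum(axis=0) / sigma ** 2

rng = np.random.default_rng(0)
data = rng.standard_normal((100, 2))  # toy training set
score = closed_form_score(np.zeros(2), data, sigma=0.5)
```

Sampling with this unsmoothed score simply reproduces (memorizes) the training points, which is why an explicit smoothing of the score is needed to generate novel samples.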
no code implementations • 18 Aug 2023 • Anastasis Kratsios, Ruiyang Hong, Haitz Sáez de Ocáriz Borde
We find that the network complexity of the HNN implementing the graph representation is independent of the representation fidelity/distortion.
no code implementations • 21 Mar 2023 • Haitz Sáez de Ocáriz Borde, Álvaro Arroyo, Ingmar Posner
Graph Neural Networks leverage the connectivity structure of graphs as an inductive bias.
no code implementations • 26 Nov 2022 • Haitz Sáez de Ocáriz Borde, Anees Kazi, Federico Barbero, Pietro Liò
The original dDGM architecture used the Euclidean plane to encode the latent features from which the latent graphs were generated.
no code implementations • 24 Sep 2022 • Haitz Sáez de Ocáriz Borde, Federico Barbero
We demonstrate the applicability of model-agnostic algorithms for meta-learning, specifically Reptile, to GNN models in molecular regression tasks.
1 code implementation • 17 Jun 2022 • Federico Barbero, Cristian Bodnar, Haitz Sáez de Ocáriz Borde, Michael Bronstein, Petar Veličković, Pietro Liò
A Sheaf Neural Network (SNN) is a type of Graph Neural Network (GNN) that operates on a sheaf, an object that equips a graph with vector spaces over its nodes and edges and linear maps between these spaces.
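The sheaf structure described above can be made concrete with the standard sheaf Laplacian, which generalizes the graph Laplacian: each node and edge carries a vector-space stalk, and a restriction map $F_{v \trianglelefteq e}$ sends node stalks into edge stalks. This is a generic sketch of that construction, not the paper's code; the function name and data layout are assumptions.

```python
import numpy as np

def sheaf_laplacian(n_nodes, d, edges, restriction):
    """Assemble the sheaf Laplacian for d-dimensional vertex stalks.

    restriction[(v, e)] is the d x d linear map F_{v <| e} from node v's stalk
    into edge e's stalk. Diagonal blocks accumulate F^T F over incident edges;
    the off-diagonal block for edge e = (u, v) is -F_{u <| e}^T F_{v <| e}.
    """
    L = np.zeros((n_nodes * d, n_nodes * d))
    for e, (u, v) in enumerate(edges):
        Fu, Fv = restriction[(u, e)], restriction[(v, e)]
        L[u*d:(u+1)*d, u*d:(u+1)*d] += Fu.T @ Fu
        L[v*d:(v+1)*d, v*d:(v+1)*d] += Fv.T @ Fv
        L[u*d:(u+1)*d, v*d:(v+1)*d] -= Fu.T @ Fv
        L[v*d:(v+1)*d, u*d:(u+1)*d] -= Fv.T @ Fu
    return L

# Toy example: a path graph 0-1-2 with 2-dimensional stalks.
edges = [(0, 1), (1, 2)]
restriction = {}
for e, (u, v) in enumerate(edges):
    restriction[(u, e)] = np.eye(2)  # identity restriction maps recover the
    restriction[(v, e)] = np.eye(2)  # ordinary graph Laplacian (per coordinate)
L = sheaf_laplacian(3, 2, edges, restriction)
```

With identity restriction maps the result equals the ordinary graph Laplacian tensored with the identity; learning nontrivial restriction maps is what gives sheaf diffusion its extra expressivity.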
no code implementations • 26 Nov 2021 • Haitz Sáez de Ocáriz Borde
Memory replay may be key to learning in biological brains, which manage to learn new tasks continually without catastrophically interfering with previous knowledge.
no code implementations • 30 Oct 2021 • Haitz Sáez de Ocáriz Borde, David Sondak, Pavlos Protopapas
The Reynolds-averaged Navier-Stokes (RANS) equations require accurate modeling of the anisotropic Reynolds stress tensor.