Search Results for author: Giulia Luise

Found 14 papers, 5 papers with code

Bag of Policies for Distributional Deep Exploration

no code implementations · 3 Aug 2023 Asen Nachkov, Luchen Li, Giulia Luise, Filippo Valdettaro, Aldo Faisal

To test whether an optimistic ensemble method can improve distributional RL, as it did for scalar RL (e.g., via Bootstrapped DQN), we implement the BoP approach with a population of distributional actor-critics using Bayesian Distributional Policy Gradients (BDPG).

Atari Games · Efficient Exploration +2

On Over-Squashing in Message Passing Neural Networks: The Impact of Width, Depth, and Topology

1 code implementation · 6 Feb 2023 Francesco Di Giovanni, Lorenzo Giusti, Federico Barbero, Giulia Luise, Pietro Liò, Michael Bronstein

Our analysis provides a unified framework to study different recent methods introduced to cope with over-squashing and serves as a justification for a class of methods that fall under graph rewiring.

Inductive Bias

Meta Optimal Transport

1 code implementation · 10 Jun 2022 Brandon Amos, Samuel Cohen, Giulia Luise, Ievgen Redko

We study the use of amortized optimization to predict optimal transport (OT) maps from the input measures, which we call Meta OT.

Heterogeneous manifolds for curvature-aware graph embedding

no code implementations · 2 Feb 2022 Francesco Di Giovanni, Giulia Luise, Michael Bronstein

Graph embeddings, wherein the nodes of the graph are represented by points in a continuous space, are used in a broad range of Graph ML applications.

Graph Embedding

Generalization Properties of Optimal Transport GANs with Latent Distribution Learning

no code implementations · 29 Jul 2020 Giulia Luise, Massimiliano Pontil, Carlo Ciliberto

The Generative Adversarial Networks (GAN) framework is a well-established paradigm for probability matching and realistic sample generation.

Aligning Time Series on Incomparable Spaces

1 code implementation · 22 Jun 2020 Samuel Cohen, Giulia Luise, Alexander Terenin, Brandon Amos, Marc Peter Deisenroth

Dynamic time warping (DTW) is a useful method for aligning, comparing and combining time series, but it requires them to live in comparable spaces.

Dynamic Time Warping · Imitation Learning +2
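The paper above generalizes alignment to incomparable spaces; for reference, the classic DTW it builds on is a simple dynamic program. The sketch below is not the authors' code, just a minimal NumPy implementation of standard DTW under a squared-Euclidean ground cost, which (unlike the paper's method) requires both series to live in the same space.

```python
import numpy as np

def dtw(x, y):
    """Minimal dynamic-time-warping cost between 1-D series x and y.

    Fills the cumulative-cost table D, where D[i, j] is the cheapest
    alignment of x[:i] with y[:j]; each cell adds the local squared
    distance to the best of the three predecessor alignments.
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

Because warping may repeat points, a series aligned with a time-stretched copy of itself has zero cost, which is exactly what makes DTW useful for comparing series of different lengths.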

A Non-Asymptotic Analysis for Stein Variational Gradient Descent

no code implementations NeurIPS 2020 Anna Korba, Adil Salim, Michael Arbel, Giulia Luise, Arthur Gretton

We study the Stein Variational Gradient Descent (SVGD) algorithm, which optimises a set of particles to approximate a target probability distribution $\pi\propto e^{-V}$ on $\mathbb{R}^d$.

LEMMA
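The SVGD algorithm analyzed in the entry above has a compact standard form; the following is a minimal NumPy sketch of one SVGD update with an RBF kernel (a textbook implementation, not the paper's code, with the bandwidth and step size as illustrative fixed parameters). Each particle is pushed by a kernel-weighted average of the scores of all particles plus a repulsive kernel-gradient term that keeps the set spread out.

```python
import numpy as np

def svgd_step(x, grad_log_p, bandwidth=1.0, step=0.1):
    """One SVGD update on particles x of shape [n, d].

    grad_log_p(x) returns the score grad log pi at each particle.
    The update is phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log pi(x_j)
                                           + grad_{x_j} k(x_j, x_i) ].
    """
    n = x.shape[0]
    diffs = x[:, None, :] - x[None, :, :]        # [n, n, d]
    sq = np.sum(diffs ** 2, axis=-1)             # pairwise squared dists
    k = np.exp(-sq / (2 * bandwidth ** 2))       # RBF kernel matrix
    scores = grad_log_p(x)                       # [n, d]
    attract = k @ scores                         # driven toward high density
    repulse = np.sum(diffs * k[:, :, None], axis=1) / bandwidth ** 2
    return x + step * (attract + repulse) / n

# Usage: approximate a standard Gaussian, pi ~ exp(-x^2/2), so the
# score is simply -x; particles start far from the target.
rng = np.random.default_rng(0)
particles = rng.normal(3.0, 1.0, size=(50, 1))
for _ in range(500):
    particles = svgd_step(particles, lambda z: -z)
```

After the loop the particle cloud sits near the target's mode with nonzero spread, the qualitative behavior whose convergence rate the paper quantifies.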

The Wasserstein Proximal Gradient Algorithm

no code implementations · NeurIPS 2020 Adil Salim, Anna Korba, Giulia Luise

Using techniques from convex optimization and optimal transport, we analyze the FB scheme as a minimization algorithm on the Wasserstein space.

Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm

1 code implementation NeurIPS 2019 Giulia Luise, Saverio Salzo, Massimiliano Pontil, Carlo Ciliberto

We present a novel algorithm to estimate the barycenter of arbitrary probability distributions with respect to the Sinkhorn divergence.

Leveraging Low-Rank Relations Between Surrogate Tasks in Structured Prediction

no code implementations · 2 Mar 2019 Giulia Luise, Dimitris Stamos, Massimiliano Pontil, Carlo Ciliberto

We study the interplay between surrogate methods for structured prediction and techniques from multitask learning designed to leverage relationships between surrogate outputs.

Structured Prediction

Differential Properties of Sinkhorn Approximation for Learning with Wasserstein Distance

2 code implementations NeurIPS 2018 Giulia Luise, Alessandro Rudi, Massimiliano Pontil, Carlo Ciliberto

Applications of optimal transport have recently gained remarkable attention thanks to the computational advantages of entropic regularization.
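The computational advantage mentioned above comes from the Sinkhorn algorithm: entropic regularization turns the OT linear program into a fixed-point problem solved by alternating matrix scalings. The sketch below is a minimal NumPy version of the standard algorithm (not the paper's 2 linked implementations), with the regularization strength `eps` and iteration count as illustrative choices.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=500):
    """Entropic-regularized OT plan between histograms a and b.

    Minimizes <P, C> - eps * H(P) subject to the marginal constraints
    P 1 = a and P^T 1 = b, by alternately rescaling the rows and
    columns of the Gibbs kernel K = exp(-C / eps).
    """
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)   # match column marginals
        u = a / (K @ v)     # match row marginals
    return u[:, None] * K * v[None, :]

# Usage: uniform histograms over three points on a line, with a
# squared-distance cost; mass should mostly stay on the diagonal.
pts = np.arange(3.0)
C = (pts[:, None] - pts[None, :]) ** 2
a = np.ones(3) / 3
b = np.ones(3) / 3
P = sinkhorn(a, b, C)
```

Each Sinkhorn iteration costs only a couple of matrix-vector products, and (as the paper exploits) the resulting Sinkhorn divergence is differentiable in the input histograms, unlike the unregularized Wasserstein distance.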
