no code implementations • 3 Aug 2023 • Asen Nachkov, Luchen Li, Giulia Luise, Filippo Valdettaro, Aldo Faisal
To test whether optimistic ensemble methods can improve distributional RL as they did scalar RL (e.g., via Bootstrapped DQN), we implement the BoP approach with a population of distributional actor-critics using Bayesian Distributional Policy Gradients (BDPG).
1 code implementation • 6 Feb 2023 • Francesco Di Giovanni, Lorenzo Giusti, Federico Barbero, Giulia Luise, Pietro Lio', Michael Bronstein
Our analysis provides a unified framework to study different recent methods introduced to cope with over-squashing and serves as a justification for a class of methods that fall under graph rewiring.
no code implementations • 11 Oct 2022 • Ruohan Wang, Marco Ciccone, Giulia Luise, Andrew Yapp, Massimiliano Pontil, Carlo Ciliberto
A continual learning (CL) algorithm learns from a non-stationary data stream.
1 code implementation • 10 Jun 2022 • Brandon Amos, Samuel Cohen, Giulia Luise, Ievgen Redko
We study the use of amortized optimization to predict optimal transport (OT) maps from the input measures, which we call Meta OT.
no code implementations • 2 Feb 2022 • Francesco Di Giovanni, Giulia Luise, Michael Bronstein
Graph embeddings, wherein the nodes of the graph are represented by points in a continuous space, are used in a broad range of Graph ML applications.
no code implementations • 16 Sep 2021 • Paul Festor, Giulia Luise, Matthieu Komorowski, A. Aldo Faisal
Reinforcement Learning (RL) is emerging as a tool for tackling complex control and decision-making problems.
no code implementations • NeurIPS 2020 • Luca Oneto, Michele Donini, Giulia Luise, Carlo Ciliberto, Andreas Maurer, Massimiliano Pontil
One way to reach this goal is by modifying the data representation in order to meet certain fairness constraints.
no code implementations • 29 Jul 2020 • Giulia Luise, Massimiliano Pontil, Carlo Ciliberto
The Generative Adversarial Networks (GAN) framework is a well-established paradigm for probability matching and realistic sample generation.
1 code implementation • 22 Jun 2020 • Samuel Cohen, Giulia Luise, Alexander Terenin, Brandon Amos, Marc Peter Deisenroth
Dynamic time warping (DTW) is a useful method for aligning, comparing and combining time series, but it requires them to live in comparable spaces.
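The classic DTW recurrence that this paper builds on can be sketched as follows; this is the standard dynamic-programming formulation, not the paper's OT-based variant, and the `dist` default is an illustrative choice:

```python
import numpy as np

def dtw(a, b, dist=lambda u, v: abs(u - v)):
    """Classic dynamic-programming DTW between sequences a and b.

    Returns the minimal cumulative alignment cost. The sequences may
    differ in length, but their elements must live in a common space
    where `dist` is defined -- the comparability requirement noted above.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(a[i - 1], b[j - 1])
            # Extend the cheapest of: match, step in a, step in b.
            D[i, j] = c + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]
```

For example, `dtw([1, 2, 3], [1, 2, 2, 3])` is 0, since the repeated 2 can be absorbed by warping.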
no code implementations • NeurIPS 2020 • Anna Korba, Adil Salim, Michael Arbel, Giulia Luise, Arthur Gretton
We study the Stein Variational Gradient Descent (SVGD) algorithm, which optimises a set of particles to approximate a target probability distribution $\pi\propto e^{-V}$ on $\mathbb{R}^d$.
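The SVGD update the paper analyses can be sketched as below, assuming an RBF kernel with bandwidth `h` and a user-supplied score function $\nabla \log \pi = -\nabla V$; the step size `eps` and kernel choice are illustrative, not the paper's settings:

```python
import numpy as np

def svgd_step(x, grad_log_pi, h=1.0, eps=0.1):
    """One SVGD update on particles x of shape (n, d), targeting pi ∝ e^{-V}.

    grad_log_pi(x) returns ∇ log π evaluated at each particle (i.e. -∇V).
    Uses the RBF kernel k(a, b) = exp(-||a - b||² / (2 h²)).
    """
    n = x.shape[0]
    diffs = x[:, None, :] - x[None, :, :]        # (n, n, d): x_i - x_j
    K = np.exp(-(diffs ** 2).sum(-1) / (2 * h ** 2))   # kernel matrix
    # Driving term: kernel-weighted scores pull particles toward mass of π.
    drive = K @ grad_log_pi(x)
    # Repulsive term: Σ_j ∇_{x_j} k(x_j, x_i) keeps particles spread out.
    repulse = (K[:, :, None] * diffs).sum(1) / h ** 2
    return x + eps * (drive + repulse) / n
```

Iterating this step on, say, a standard Gaussian target (`grad_log_pi = lambda z: -z`) moves an initially offset particle cloud toward the target while the repulsive term prevents collapse to the mode.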
no code implementations • NeurIPS 2020 • Adil Salim, Anna Korba, Giulia Luise
Using techniques from convex optimization and optimal transport, we analyze the FB scheme as a minimization algorithm on the Wasserstein space.
1 code implementation • NeurIPS 2019 • Giulia Luise, Saverio Salzo, Massimiliano Pontil, Carlo Ciliberto
We present a novel algorithm to estimate the barycenter of arbitrary probability distributions with respect to the Sinkhorn divergence.
no code implementations • 2 Mar 2019 • Giulia Luise, Dimitris Stamos, Massimiliano Pontil, Carlo Ciliberto
We study the interplay between surrogate methods for structured prediction and techniques from multitask learning designed to leverage relationships between surrogate outputs.
2 code implementations • NeurIPS 2018 • Giulia Luise, Alessandro Rudi, Massimiliano Pontil, Carlo Ciliberto
Applications of optimal transport have recently gained remarkable attention thanks to the computational advantages of entropic regularization.
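The computational advantage of entropic regularization comes from the Sinkhorn fixed-point iterations, which a minimal sketch (fixed regularization `reg` and iteration count, both illustrative) makes concrete:

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iters=500):
    """Entropic-regularized OT plan between histograms a and b.

    Alternately rescales the rows and columns of the Gibbs kernel
    K = exp(-C / reg) until the plan matches both marginals.
    """
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)   # match column marginal b
        u = a / (K @ v)     # match row marginal a
    return u[:, None] * K * v[None, :]   # transport plan
```

Each iteration costs only two matrix-vector products, which is what makes the regularized problem so much cheaper than exact linear-programming OT.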