no code implementations • INLG (ACL) 2020 • Federico Betti, Giorgia Ramponi, Massimo Piccardi
In recent years, generative adversarial networks (GANs) have begun to attain promising results in natural language generation as well.
no code implementations • 24 Feb 2024 • Adrian Müller, Pragnya Alatur, Volkan Cevher, Giorgia Ramponi, Niao He
As Efroni et al. (2020) pointed out, it is an open question whether primal-dual algorithms can provably achieve sublinear regret if we do not allow error cancellations.
no code implementations • 11 Oct 2023 • Mirco Mutti, Riccardo De Santi, Marcello Restelli, Alexander Marx, Giorgia Ramponi
The prior is typically specified as a class of parametric distributions, the design of which can be cumbersome in practice, often resulting in the choice of uninformative priors.
no code implementations • 13 Jun 2023 • Pragnya Alatur, Giorgia Ramponi, Niao He, Andreas Krause
Multi-agent reinforcement learning (MARL) addresses sequential decision-making problems with multiple agents, where each agent optimizes its own objective.
no code implementations • 12 Jun 2023 • Adrian Müller, Pragnya Alatur, Giorgia Ramponi, Niao He
Unlike existing Lagrangian approaches, our algorithm achieves this regret without the need for the cancellation of errors.
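The standard Lagrangian primal-dual template that this result contrasts with can be sketched on a toy constrained problem (illustrative only; the rewards, costs, and step size below are made up and are not from the paper). The time-averaged play satisfies the budget only because over-budget rounds cancel against under-budget ones — exactly the error cancellation at issue:

```python
import numpy as np

# Toy constrained problem: max_x r(x) s.t. c(x) <= budget over 3 arms.
rewards = np.array([1.0, 0.6, 0.2])
costs = np.array([0.9, 0.5, 0.1])
budget = 0.4

lam, eta = 0.0, 0.05   # Lagrange multiplier and dual step size
avg_cost, T = 0.0, 4000
for t in range(T):
    x = np.zeros(3)
    x[np.argmax(rewards - lam * costs)] = 1.0          # primal best response
    lam = max(0.0, lam + eta * (costs @ x - budget))   # dual gradient ascent
    avg_cost += (costs @ x) / T

# The average is (near-)feasible even though single rounds over- and
# under-shoot the budget and cancel out.
print(round(avg_cost, 3))
```

The individual iterates here oscillate between an over-budget arm and an under-budget arm; only their average respects the constraint.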
no code implementations • 20 Oct 2022 • Antonio Terpin, Nicolas Lanzetti, Batuhan Yardim, Florian Dörfler, Giorgia Ramponi
In this paper, we explore optimal transport discrepancies (which include the Wasserstein distance) to define trust regions, and we propose a novel algorithm, Optimal Transport Trust Region Policy Optimization (OT-TRPO), for continuous state-action spaces.
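The trust-region test at the core of this idea can be sketched with a 1-D Wasserstein-1 distance (a hand-rolled illustration, not the paper's solver; `w1_discrete`, `within_trust_region`, and the toy distributions are ours):

```python
import numpy as np

# For distributions on a common sorted 1-D support, the Wasserstein-1
# distance is the integral of |F_p - F_q| between the two CDFs.
def w1_discrete(points, p, q):
    cdf_gap = np.abs(np.cumsum(p) - np.cumsum(q))[:-1]
    return float(np.sum(cdf_gap * np.diff(points)))

# Trust-region check: accept a candidate policy update only if its
# OT discrepancy from the old policy stays within radius delta.
def within_trust_region(points, old, new, delta):
    return w1_discrete(points, old, new) <= delta

actions = np.linspace(-1.0, 1.0, 5)          # support in a 1-D action space
old = np.array([0.2, 0.2, 0.2, 0.2, 0.2])    # old policy's action distribution
new = np.array([0.1, 0.2, 0.4, 0.2, 0.1])    # candidate update

print(w1_discrete(actions, old, new))
```

The radius `delta` plays the role of the trust-region constraint: a small radius rejects this update, a larger one accepts it.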
no code implementations • 10 Oct 2022 • Amartya Sanyal, Giorgia Ramponi
Online learning in the mistake-bound model is one of the most fundamental settings in learning theory.
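The textbook mistake-bound example is the halving algorithm: with a finite hypothesis class containing the target, majority-vote prediction plus discarding inconsistent hypotheses makes at most log2(|H|) mistakes. A minimal sketch (the threshold class and input stream are made up for illustration):

```python
import math

def halving(hypotheses, stream, target):
    """Predict by majority vote over the current version space, then
    discard every hypothesis inconsistent with the revealed label."""
    H = list(hypotheses)
    mistakes = 0
    for x in stream:
        votes = sum(h(x) for h in H)
        pred = 1 if 2 * votes > len(H) else 0   # majority vote
        y = target(x)
        if pred != y:
            mistakes += 1
        H = [h for h in H if h(x) == y]          # shrink the version space
    return mistakes

# Toy class: threshold functions on {0,...,7}; the target is threshold 5.
hyps = [lambda x, t=t: int(x >= t) for t in range(9)]
m = halving(hyps, stream=range(8), target=lambda x: int(x >= 5))
print(m, "<=", math.log2(len(hyps)))
```

Every mistake at least halves the version space, which is where the logarithmic bound comes from.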
1 code implementation • 18 Jul 2022 • David Lindner, Andreas Krause, Giorgia Ramponi
We propose a novel IRL algorithm: Active exploration for Inverse Reinforcement Learning (AceIRL), which actively explores an unknown environment and expert policy to quickly learn the expert's reward function and identify a good policy.
no code implementations • NeurIPS 2021 • Giorgia Ramponi, Alberto Maria Metelli, Alessandro Concetti, Marcello Restelli
This presupposes that the two actors have the same reward function.
no code implementations • NeurIPS 2020 • Giorgia Ramponi, Gianluca Drappo, Marcello Restelli
Inverse Reinforcement Learning addresses the problem of inferring an expert's reward function from demonstrations.
no code implementations • 15 Jul 2020 • Giorgia Ramponi, Marcello Restelli
In this paper, we propose NOHD (Newton Optimization on Helmholtz Decomposition), a Newton-like algorithm for multi-agent learning problems based on decomposing the system's dynamics into its irrotational (potential) and solenoidal (Hamiltonian) components.
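The Helmholtz-style split that such methods operate on can be sketched at the level of the game Jacobian: the symmetric part corresponds to the potential (irrotational) component and the antisymmetric part to the Hamiltonian (solenoidal) component. This is a generic illustration of the decomposition, not the NOHD algorithm itself, and the toy bilinear game is ours:

```python
import numpy as np

def decompose(J):
    """Split the Jacobian of the joint-gradient field as J = S + A:
    S symmetric (potential component), A antisymmetric (Hamiltonian)."""
    S = (J + J.T) / 2
    A = (J - J.T) / 2
    return S, A

# Two-player bilinear game with joint gradient xi(x, y) = (y, -x):
# its Jacobian is a pure rotation, so the potential part vanishes.
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
S, A = decompose(J)
print(S)   # zero matrix: this toy game is purely Hamiltonian
```

Games with a nonzero symmetric part behave like ordinary minimization problems along that component, while the antisymmetric part drives the rotational dynamics that make multi-agent optimization hard.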
2 code implementations • 20 Nov 2018 • Giorgia Ramponi, Pavlos Protopapas, Marco Brambilla, Ryan Janssen
Results show that classifiers trained on T-CGAN-generated data perform on par with classifiers trained on real data, even with very short time series and small training sets.
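The comparison behind this kind of result is the train-on-synthetic, test-on-real protocol, which can be sketched as follows (stand-in random data and a hand-rolled nearest-centroid classifier replace the T-CGAN samples and the paper's classifiers):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_series(n, shift):
    # Toy labelled time series: class 0 is flat noise, class 1 is shifted up.
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 20)) + shift * y[:, None]
    return X, y

def nearest_centroid_acc(X_train, y_train, X_test, y_test):
    # Classify each test series by its nearest class centroid.
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    pred = (np.linalg.norm(X_test - c1, axis=1)
            < np.linalg.norm(X_test - c0, axis=1)).astype(int)
    return float((pred == y_test).mean())

X_real, y_real = make_series(200, shift=2.0)
X_synth, y_synth = make_series(200, shift=2.0)   # stand-in for GAN samples
X_test, y_test = make_series(200, shift=2.0)     # held-out real data

acc_real = nearest_centroid_acc(X_real, y_real, X_test, y_test)
acc_synth = nearest_centroid_acc(X_synth, y_synth, X_test, y_test)
print(acc_real, acc_synth)
```

If the two accuracies match, the generator's samples are as useful for downstream training as real data — the criterion the quoted result reports.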