Search Results for author: Carlo Alfano

Found 4 papers, 1 paper with code

Meta-learning the mirror map in policy mirror descent

no code implementations · 7 Feb 2024 · Carlo Alfano, Sebastian Towers, Silvia Sapora, Chris Lu, Patrick Rebeschini

Policy Mirror Descent (PMD) is a popular framework in reinforcement learning, serving as a unifying perspective that encompasses numerous algorithms.

Meta-Learning
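
The snippet above does not spell out the PMD update itself. As a rough illustration (a sketch of the generic framework, not the meta-learned mirror map proposed in this paper), the tabular PMD step with the negative-entropy mirror map reduces to a softmax-style update; the helper name, dimensions, and step size below are arbitrary placeholders.

```python
import numpy as np

def pmd_step_negative_entropy(policy, Q, eta):
    """One tabular PMD step with the negative-entropy mirror map.

    With this mirror map the proximal update has the closed form
    pi_{t+1}(a|s) ∝ pi_t(a|s) * exp(eta * Q_t(s, a)),
    i.e. a softmax / natural-policy-gradient style update.
    """
    logits = np.log(policy) + eta * Q            # mirror-descent step in the dual space
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    new_policy = np.exp(logits)
    return new_policy / new_policy.sum(axis=1, keepdims=True)

# toy example: 3 states, 2 actions, arbitrary Q-values
rng = np.random.default_rng(0)
policy = np.full((3, 2), 0.5)
Q = rng.normal(size=(3, 2))
policy = pmd_step_negative_entropy(policy, Q, eta=0.1)
print(policy)  # each row remains a valid probability distribution
```

Choosing a different mirror map changes the closed form of this proximal step; learning that choice is the subject of the paper above.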

A Novel Framework for Policy Mirror Descent with General Parameterization and Linear Convergence

1 code implementation · NeurIPS 2023 · Carlo Alfano, Rui Yuan, Patrick Rebeschini

Modern policy optimization methods in reinforcement learning, such as TRPO and PPO, owe their success to the use of parameterized policies.
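
PPO, named in the snippet above, is the best-known example of a proximal update with a parameterized policy. A minimal sketch of its clipped surrogate objective is given below; this is generic PPO for context, not the framework proposed in the paper.

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """PPO's clipped surrogate objective (to be maximized).

    ratio = pi_new(a|s) / pi_old(a|s); the clip keeps the parameterized
    policy from moving too far from the old one in a single update --
    the proximal idea that policy-mirror-descent analyses study in general.
    """
    ratio = np.exp(logp_new - logp_old)
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return np.mean(np.minimum(ratio * advantages, clipped * advantages))
```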

Linear Convergence for Natural Policy Gradient with Log-linear Policy Parametrization

no code implementations · 30 Sep 2022 · Carlo Alfano, Patrick Rebeschini

We analyze the convergence rate of the unregularized natural policy gradient algorithm with log-linear policy parametrizations in infinite-horizon discounted Markov decision processes.
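
A log-linear policy is a softmax over linear features, pi_theta(a|s) ∝ exp(theta · phi(s, a)). The sketch below shows one natural-policy-gradient step for such a policy via compatible function approximation; it is an illustrative simplification (uniform state-action weighting, made-up shapes), not the exact setting analyzed in the paper.

```python
import numpy as np

def npg_step_log_linear(theta, phi, Q, eta):
    """One natural-policy-gradient step for a log-linear policy.

    pi_theta(a|s) ∝ exp(theta · phi(s, a)).  With compatible function
    approximation, the natural gradient direction is the least-squares
    fit w of Q onto the centered features, and the update is
    theta <- theta + eta * w.  phi has shape (S, A, d), Q has shape
    (S, A); state-action weighting is taken uniform for simplicity.
    """
    S, A, d = phi.shape
    logits = phi @ theta                               # (S, A)
    pi = np.exp(logits - logits.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)
    phi_bar = (pi[..., None] * phi).sum(axis=1, keepdims=True)
    X = (phi - phi_bar).reshape(S * A, d)              # centered features
    w, *_ = np.linalg.lstsq(X, Q.reshape(S * A), rcond=None)
    return theta + eta * w

# toy example: 4 states, 3 actions, 5 features
rng = np.random.default_rng(1)
phi = rng.normal(size=(4, 3, 5))
theta = np.zeros(5)
Q = rng.normal(size=(4, 3))
theta = npg_step_log_linear(theta, phi, Q, eta=0.5)
print(theta)
```

The paper above studies how fast iterates of this kind converge, without regularization, in infinite-horizon discounted MDPs.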

Dimension-Free Rates for Natural Policy Gradient in Multi-Agent Reinforcement Learning

no code implementations · 23 Sep 2021 · Carlo Alfano, Patrick Rebeschini

Cooperative multi-agent reinforcement learning is a decentralized paradigm in sequential decision making where agents distributed over a network iteratively collaborate with neighbors to maximize global (network-wide) notions of rewards.

Decision Making · Multi-agent Reinforcement Learning · +2
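
In the decentralized setting described above, agents typically interleave local updates with communication restricted to their network neighbors. The snippet below sketches the basic consensus (gossip) mixing primitive used by many decentralized schemes; it is a generic illustration, not the algorithm or rates established in the paper, and the mixing matrix W is an arbitrary example.

```python
import numpy as np

def consensus_mix(local_params, W):
    """One gossip/consensus step over a communication network.

    local_params has shape (n_agents, d); W is a row-stochastic mixing
    matrix whose sparsity matches the network (W[i, j] > 0 only if j is
    a neighbor of i).  Each agent replaces its estimate with a weighted
    average of its neighbors' estimates.
    """
    return W @ local_params

# toy example: 3 agents on a line graph, 2-dimensional local estimates
W = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])
params = np.arange(6.0).reshape(3, 2)
print(consensus_mix(local_params=params, W=W))
```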
