Search Results for author: Hadrien Hendrikx

Found 10 papers, 3 papers with code

The Relative Gaussian Mechanism and its Application to Private Gradient Descent

no code implementations · 29 Aug 2023 · Hadrien Hendrikx, Paul Mangold, Aurélien Bellet

Leveraging this assumption, we introduce the Relative Gaussian Mechanism (RGM), in which the variance of the noise depends on the norm of the output.
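The core idea — Gaussian noise whose variance scales with the norm of the released output — can be sketched as follows. This is a minimal illustration, not the paper's calibrated mechanism: `rel_scale` is an illustrative noise-to-norm ratio, whereas the paper derives the correct scaling from the privacy parameters and a relative-sensitivity assumption.

```python
import numpy as np

def relative_gaussian_mechanism(output, rel_scale, rng=None):
    """Add Gaussian noise whose standard deviation is proportional
    to the norm of the output being released.

    `rel_scale` is an illustrative parameter; the actual calibration
    in the paper depends on the privacy budget.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = rel_scale * np.linalg.norm(output)
    return output + rng.normal(0.0, sigma, size=output.shape)

noisy = relative_gaussian_mechanism(np.array([3.0, 4.0]), rel_scale=0.1)
```

Note that, unlike the classical Gaussian mechanism, the noise level here adapts to the magnitude of each individual output rather than to a worst-case global sensitivity.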

Revisiting Gradient Clipping: Stochastic bias and tight convergence guarantees

no code implementations · 2 May 2023 · Anastasia Koloskova, Hadrien Hendrikx, Sebastian U. Stich

In particular, we show that (i) for deterministic gradient descent, the clipping threshold only affects the higher-order terms of convergence, and (ii) in the stochastic setting, convergence to the true optimum cannot be guaranteed under the standard noise assumption, even for arbitrarily small step-sizes.
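The clipping operation studied here can be sketched as follows; the threshold, step size, and quadratic objective are illustrative choices, not taken from the paper.

```python
import numpy as np

def clip(g, c):
    """Scale gradient g down so that its norm is at most c."""
    norm = np.linalg.norm(g)
    return g if norm <= c else g * (c / norm)

def clipped_gd(grad, x0, step, c, iters):
    """Deterministic gradient descent with per-step norm clipping."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * clip(grad(x), c)
    return x

# deterministic quadratic example: f(x) = x^2, minimum at the origin
x_star = clipped_gd(lambda x: 2 * x, np.array([10.0]), step=0.1, c=1.0, iters=200)
```

In this deterministic case the iterate reaches the optimum: far from it the clipped steps have constant length, and once the gradient norm falls below the threshold the dynamics coincide with plain gradient descent — consistent with finding (i) above.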

Beyond spectral gap: The role of the topology in decentralized learning

1 code implementation · 7 Jun 2022 · Thijs Vogels, Hadrien Hendrikx, Martin Jaggi

In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model: more accurate gradients allow them to use larger learning rates and optimize faster.

Distributed Optimization
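One round of the decentralized learning setting discussed above can be sketched as a gossip-averaging step followed by a local gradient step. This is a generic illustration, not the paper's algorithm: the ring topology, its mixing matrix, and the toy quadratic objectives are all assumptions made for the example.

```python
import numpy as np

def ring_mixing_matrix(n):
    """Doubly stochastic mixing matrix for a ring of n workers:
    each worker averages itself with its two neighbours."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    return W

def decentralized_gd_step(X, grads, W, lr):
    """One round: gossip-average neighbour models, then take a local
    gradient step. X holds one model per row."""
    return W @ X - lr * grads(X)

# toy problem: worker i minimizes ||x - target_i||^2 for distinct targets,
# so the global minimizer is the mean of the targets
n, d = 6, 2
targets = np.arange(n * d, dtype=float).reshape(n, d)
W = ring_mixing_matrix(n)
X = np.zeros((n, d))
for _ in range(500):
    X = decentralized_gd_step(X, lambda X: 2 * (X - targets), W, lr=0.05)
```

The choice of mixing matrix encodes the communication topology, which is exactly the object whose role (beyond its spectral gap) the paper analyzes.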

A Continuized View on Nesterov Acceleration for Stochastic Gradient Descent and Randomized Gossip

1 code implementation · 10 Jun 2021 · Mathieu Even, Raphaël Berthier, Francis Bach, Nicolas Flammarion, Pierre Gaillard, Hadrien Hendrikx, Laurent Massoulié, Adrien Taylor

We introduce the continuized Nesterov acceleration, a close variant of Nesterov acceleration whose variables are indexed by a continuous time parameter.

Asynchronous speedup in decentralized optimization

no code implementations · 7 Jun 2021 · Mathieu Even, Hadrien Hendrikx, Laurent Massoulié

Our approach yields a precise characterization of convergence time and of its dependency on heterogeneous delays in the network.

Asynchronous Accelerated Proximal Stochastic Gradient for Strongly Convex Distributed Finite Sums

no code implementations · 28 Jan 2019 · Hadrien Hendrikx, Francis Bach, Laurent Massoulié

In this work, we study the problem of minimizing the sum of strongly convex functions split over a network of $n$ nodes.

Optimization and Control · Distributed, Parallel, and Cluster Computing

Accelerated Decentralized Optimization with Local Updates for Smooth and Strongly Convex Objectives

no code implementations · 5 Oct 2018 · Hadrien Hendrikx, Francis Bach, Laurent Massoulié

Applying ESDACD to quadratic local functions leads to an accelerated randomized gossip algorithm with rate $O(\sqrt{\theta_{\rm gossip}/n})$, where $\theta_{\rm gossip}$ is the rate of standard randomized gossip.
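For context, the standard randomized gossip procedure whose rate is referenced above can be sketched as: at each step a random edge of the communication graph is activated, and its two endpoints average their values. This is the classical baseline, not the paper's ESDACD algorithm; the complete graph and step count below are illustrative.

```python
import numpy as np

def randomized_gossip(values, edges, steps, rng=None):
    """Standard randomized gossip: repeatedly pick a random edge and
    replace both endpoint values by their average."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(values, dtype=float).copy()
    for _ in range(steps):
        i, j = edges[rng.integers(len(edges))]
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

# complete graph on 4 nodes; values converge toward their mean (1.5)
edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
x = randomized_gossip([0.0, 1.0, 2.0, 3.0], edges, steps=200,
                      rng=np.random.default_rng(0))
```

Each pairwise average preserves the global sum while shrinking the spread between nodes, so the values contract geometrically toward consensus on the mean.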

Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning

no code implementations · NeurIPS 2017 · El Mahdi El Mhamdi, Rachid Guerraoui, Hadrien Hendrikx, Alexandre Maurer

We give realistic sufficient conditions on the learning algorithm to enable dynamic safe interruptibility in the case of joint action learners, yet show that these conditions are not sufficient for independent learners.

Multi-agent Reinforcement Learning · reinforcement-learning · +1
