no code implementations • 20 Mar 2022 • Alessandro Barp, Lancelot Da Costa, Guilherme França, Karl Friston, Mark Girolami, Michael I. Jordan, Grigorios A. Pavliotis
In this chapter, we identify fundamental geometric structures that underlie the problems of sampling, optimisation, inference and adaptive decision-making.
no code implementations • 23 Jul 2021 • Guilherme França, Alessandro Barp, Mark Girolami, Michael I. Jordan
Optimization tasks are crucial in statistical machine learning.
no code implementations • 1 Jan 2021 • Salma Tarmoun, Guilherme França, Benjamin David Haeffele, Rene Vidal
More precisely, gradient flow preserves the difference of the Gramian matrices of the input and output weights, and we show that the amount of acceleration depends on both the magnitude of that difference (which is fixed at initialization) and the spectrum of the data.
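As a rough illustration of that conserved quantity, the sketch below trains a two-layer linear network with very small gradient steps (a stand-in for gradient flow) and checks that the difference of the hidden-layer Gramians W1 W1^T - W2^T W2 barely moves; the dimensions, data, and step size are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out, n = 5, 4, 3, 50
X = rng.standard_normal((d_in, n))
Y = rng.standard_normal((d_out, n))
W1 = rng.standard_normal((d_hid, d_in))   # input weights
W2 = rng.standard_normal((d_out, d_hid))  # output weights

def invariant(W1, W2):
    # difference of the (hidden-layer) Gramians of input and output weights
    return W1 @ W1.T - W2.T @ W2

I0 = invariant(W1, W2)
lr = 1e-4  # small step so gradient descent approximates gradient flow
for _ in range(5000):
    R = Y - W2 @ W1 @ X          # residual of the squared loss 0.5*||Y - W2 W1 X||^2
    W1 -= lr * (-(W2.T @ R @ X.T))
    W2 -= lr * (-(R @ X.T @ W1.T))

drift = np.linalg.norm(invariant(W1, W2) - I0) / np.linalg.norm(I0)
print(f"relative drift of the invariant: {drift:.2e}")
# small, and shrinking with the step size; exactly conserved in the continuous-time limit
```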
1 code implementation • 5 Sep 2020 • Guilherme França, José Bento
For simple algorithms such as gradient descent, the dependence of the convergence time on the topology of this network is well known.
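For intuition only, here is a toy comparison (not the paper's setting) of how the graph Laplacian's spectral gap, which governs the convergence time of gradient descent on a quadratic consensus objective, changes with topology.

```python
import numpy as np

def laplacian_path(n):
    """Graph Laplacian of a path on n nodes."""
    L = np.zeros((n, n))
    for i in range(n - 1):
        L[i, i] += 1; L[i + 1, i + 1] += 1
        L[i, i + 1] -= 1; L[i + 1, i] -= 1
    return L

def laplacian_complete(n):
    """Graph Laplacian of the complete graph on n nodes."""
    return n * np.eye(n) - np.ones((n, n))

for name, L in [("path", laplacian_path(20)), ("complete", laplacian_complete(20))]:
    gap = np.sort(np.linalg.eigvalsh(L))[1]   # algebraic connectivity
    print(f"{name:8s} spectral gap = {gap:.4f}")

# Gradient descent on the consensus objective f(x) = 0.5 * x^T L x contracts the
# disagreement at a rate set by this gap: the larger the gap, the faster the
# convergence, so the complete graph reaches consensus far faster than the path.
```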
no code implementations • 15 Apr 2020 • Guilherme França, Michael I. Jordan, René Vidal
More specifically, we show that a generalization of symplectic integrators to nonconservative, and in particular dissipative, Hamiltonian systems preserves rates of convergence up to a controlled error.
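A minimal sketch of one such scheme: a conformal-symplectic-style splitting of a damped Hamiltonian system in which the dissipative part is integrated exactly and the conservative part with symplectic Euler. The test problem and parameters are assumptions for illustration, not the integrators analysed in the paper.

```python
import numpy as np

def conformal_symplectic_euler(grad_f, q0, h=0.1, gamma=1.0, steps=200):
    """One common 'conformal symplectic' discretisation of the damped system
    dq = p dt,  dp = (-grad f(q) - gamma * p) dt."""
    q, p = q0.copy(), np.zeros_like(q0)
    for _ in range(steps):
        p = np.exp(-gamma * h) * p - h * grad_f(q)  # exact friction decay + symplectic Euler kick
        q = q + h * p
    return q

# toy strongly convex objective f(q) = 0.5 * q^T A q
A = np.diag([1.0, 10.0])
grad_f = lambda q: A @ q
print(conformal_symplectic_euler(grad_f, np.array([5.0, -3.0])))  # approaches the minimiser at 0
```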
no code implementations • 2 Aug 2019 • Guilherme França, Daniel P. Robinson, René Vidal
We show that similar discretization schemes applied to Newton's equation with an additional dissipative force, which we refer to as accelerated gradient flow, allow us to obtain accelerated variants of all these proximal algorithms -- the majority of which are new, although some recover known cases in the literature.
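To make the idea concrete, here is a hedged sketch of an inertial (heavy-ball-style) proximal gradient step for l1-regularised least squares; it conveys the flavour of "discretised accelerated gradient flow plus a proximal step" but is not claimed to be one of the paper's exact schemes, and the friction and step-size choices are placeholders.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def accelerated_prox_grad(A, b, lam, h=0.05, gamma=3.0, steps=500):
    """Inertial proximal gradient for 0.5*||Ax - b||^2 + lam*||x||_1,
    with a momentum coefficient read off from a friction term."""
    x = np.zeros(A.shape[1]); x_prev = x.copy()
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the smooth part
    beta = np.exp(-gamma * h)               # friction -> momentum coefficient
    for _ in range(steps):
        y = x + beta * (x - x_prev)          # momentum extrapolation
        grad = A.T @ (A @ y - b)
        x_prev, x = x, soft_threshold(y - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)); b = rng.standard_normal(50)
x = accelerated_prox_grad(A, b, lam=0.1)
print(np.count_nonzero(np.abs(x) > 1e-8), "nonzeros")  # sparse solution
```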
1 code implementation • NeurIPS 2020 • Guilherme França, Jeremias Sulam, Daniel P. Robinson, René Vidal
Arguably, the two most popular accelerated or momentum-based optimization methods in machine learning are Nesterov's accelerated gradient and Polyak's heavy ball, both corresponding to different discretizations of a particular second-order differential equation with friction.
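For reference, here are the two update rules side by side, written so the only difference is the point at which the gradient is evaluated; the step size and momentum values are arbitrary illustrations.

```python
import numpy as np

def heavy_ball(grad_f, x0, step, beta, iters):
    """Polyak's heavy ball: x_{k+1} = x_k - step*grad f(x_k) + beta*(x_k - x_{k-1})."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x_prev, x = x, x - step * grad_f(x) + beta * (x - x_prev)
    return x

def nesterov(grad_f, x0, step, beta, iters):
    """Nesterov's accelerated gradient: gradient taken at the extrapolated point."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)
        x_prev, x = x, y - step * grad_f(y)
    return x

# Both can be read as discretisations of  x'' + gamma x' + grad f(x) = 0.
A = np.diag([1.0, 25.0])
grad_f = lambda x: A @ x
x0 = np.array([3.0, 3.0])
print(heavy_ball(grad_f, x0, 0.04, 0.7, 300))  # both converge to the minimiser at 0
print(nesterov(grad_f, x0, 0.04, 0.7, 300))
```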
no code implementations • 13 Aug 2018 • Guilherme França, Daniel P. Robinson, René Vidal
Recently, there has been great interest in connections between continuous-time dynamical systems and optimization methods, notably in the context of accelerated methods for smooth and unconstrained problems.
no code implementations • 13 Jan 2018 • Sam Safavi, Bikash Joshi, Guilherme França, José Bento
The framework of Integral Quadratic Constraints (IQC) introduced by Lessard et al. (2014) reduces the computation of upper bounds on the convergence rate of several optimization algorithms to semi-definite programming (SDP).
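As a hedged illustration of what such a reduction looks like in the simplest case, the sketch below sets up a Lessard-style LMI for plain gradient descent on an m-strongly convex, L-smooth function and bisects on the rate rho. This is a scalar toy version, not the analysis carried out in this paper, and it assumes cvxpy with the SCS solver is available.

```python
import numpy as np
import cvxpy as cp

def lmi_feasible(rho, alpha, m, L):
    """Feasibility of the scalar IQC LMI certifying contraction rate rho for
    gradient descent x_{k+1} = x_k - alpha * grad f(x_k)."""
    P = cp.Variable(nonneg=True)     # Lyapunov weight
    lam = cp.Variable(nonneg=True)   # IQC multiplier
    # state-space blocks A = 1, B = -alpha give the Lyapunov part ...
    lyap = np.array([[1 - rho**2, -alpha],
                     [-alpha,      alpha**2]])
    # ... and the sector IQC for gradients of m-strongly convex, L-smooth f
    sector = np.array([[-2 * m * L, L + m],
                       [L + m,      -2.0]])
    prob = cp.Problem(cp.Minimize(0), [P * lyap + lam * sector << 0, P >= 1])
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate")

def certified_rate(alpha, m, L, tol=1e-3):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:  # bisect for the smallest certifiable rate
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if lmi_feasible(mid, alpha, m, L) else (mid, hi)
    return hi

m, L = 1.0, 10.0
print(certified_rate(2 / (L + m), m, L))  # should be close to (L - m)/(L + m) ~ 0.818
```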
no code implementations • 26 Oct 2017 • Guilherme França, Maria L. Rizzo, Joshua T. Vogelstein
In this paper, we consider a formulation for the clustering problem using a weighted version of energy statistics in spaces of negative type.
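To ground the terminology, here is the plain (unweighted) sample energy distance between two point clouds; the weighted version and the negative-type-space setting of the paper are beyond this sketch.

```python
import numpy as np

def energy_distance(X, Y):
    """Sample energy distance between point clouds X (n,d) and Y (m,d):
    2*E||X - Y|| - E||X - X'|| - E||Y - Y'||, with Euclidean norms."""
    def mean_pairwise(A, B):
        diffs = A[:, None, :] - B[None, :, :]
        return np.linalg.norm(diffs, axis=-1).mean()
    return 2 * mean_pairwise(X, Y) - mean_pairwise(X, X) - mean_pairwise(Y, Y)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(2.0, 1.0, size=(200, 2))
print(energy_distance(X, X[::-1]))  # near zero for identically distributed samples
print(energy_distance(X, Y))        # clearly positive for well-separated clusters
```

Energy-statistics clustering can then be phrased as choosing a partition that makes the between-cluster statistic large (equivalently, the within-cluster dispersion small).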
no code implementations • 2 Oct 2017 • Guilherme França, José Bento
Here we provide a full characterization of the convergence of distributed over-relaxed ADMM for the same type of consensus problem in terms of the topology of the underlying graph.
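For context, here is a minimal over-relaxed ADMM for the simplest global-consensus averaging problem (not the graph-structured consensus problem whose topology dependence the paper characterises); gamma is the over-relaxation parameter and the data are made up.

```python
import numpy as np

def consensus_admm(a, rho=1.0, gamma=1.5, iters=100):
    """Over-relaxed ADMM for  min sum_i 0.5*(x_i - a_i)^2  s.t.  x_i = z for all i,
    with over-relaxation parameter gamma in (0, 2)."""
    z, u = 0.0, np.zeros(len(a))
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # local x-updates (closed form)
        x_hat = gamma * x + (1.0 - gamma) * z   # over-relaxation step
        z = np.mean(x_hat + u)                  # consensus (z) update
        u = u + x_hat - z                       # dual update
    return z

a = np.array([1.0, 2.0, 6.0, 9.0])
print(consensus_admm(a))  # converges to the average of a, here 4.5
```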
no code implementations • 10 Mar 2017 • Guilherme França, José Bento
The time to converge to the steady state of a finite Markov chain can be greatly reduced by a lifting operation, which creates a new Markov chain on an expanded state space.
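A classic concrete example of such a lifting (in the spirit of Diaconis, Holmes and Neal, purely illustrative and not taken from the paper): the reversible random walk on an n-cycle mixes in roughly n^2 steps, while a nonreversible lifted walk that carries a direction variable mixes in roughly n.

```python
import numpy as np

def walk_cycle(n):
    """Lazy symmetric random walk on the n-cycle (reversible)."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 0.5
        P[i, (i + 1) % n] = 0.25
        P[i, (i - 1) % n] = 0.25
    return P

def lifted_walk_cycle(n):
    """Lifted chain on 2n states (position, direction): keep moving in the current
    direction with prob 1 - 1/n, flip direction (staying put) with prob 1/n."""
    P = np.zeros((2 * n, 2 * n))
    for i in range(n):
        P[i, (i + 1) % n] = 1 - 1 / n          # clockwise copy keeps going
        P[i, n + i] = 1 / n                    # ...or flips to the ccw copy
        P[n + i, n + (i - 1) % n] = 1 - 1 / n  # counter-clockwise copy keeps going
        P[n + i, i] = 1 / n                    # ...or flips back
    return P

def mixing_steps(P, tol=0.25):
    """Steps until total-variation distance to the uniform stationary law drops below tol."""
    m = P.shape[0]
    mu = np.zeros(m); mu[0] = 1.0
    pi = np.full(m, 1 / m)
    for t in range(1, 200000):
        mu = mu @ P
        if 0.5 * np.abs(mu - pi).sum() < tol:
            return t
    return None

n = 41  # odd n keeps the lifted chain aperiodic
print("reversible walk:", mixing_steps(walk_cycle(n)))        # on the order of n^2
print("lifted walk:    ", mixing_steps(lifted_walk_cycle(n)))  # on the order of n
```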
no code implementations • 10 Mar 2017 • Guilherme França, José Bento
The framework of Integral Quadratic Constraints (IQC) reduces the computation of upper bounds on the convergence rate of several optimization algorithms to a semi-definite program (SDP).
no code implementations • 7 Dec 2015 • Guilherme França, José Bento
In this paper we provide an exact analytical solution to this SDP and obtain a general and explicit upper bound on the convergence rate of the entire family of over-relaxed ADMM.