Search Results for author: Matilde Gargiani

Found 8 papers, 2 papers with code

Policy Iteration for Multiplicative Noise Output Feedback Control

no code implementations • 31 Mar 2022 • Benjamin Gravell, Matilde Gargiani, John Lygeros, Tyler H. Summers

We propose a policy iteration algorithm for solving the multiplicative noise linear quadratic output feedback design problem.
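
For orientation, the sketch below shows textbook policy iteration for a standard, noise-free state-feedback LQR problem: evaluate the current gain through a discrete Lyapunov equation, then improve it greedily. The paper's multiplicative-noise, output-feedback setting is substantially harder; the names and setup here are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def lqr_policy_iteration(A, B, Q, R, K0, iters=50):
    # Assumes K0 is a stabilizing gain; plain LQR policy iteration for
    # illustration, not the paper's multiplicative-noise output-feedback method.
    K = K0
    for _ in range(iters):
        # Policy evaluation: cost-to-go P of the closed loop x+ = (A - B K) x
        Acl = A - B @ K
        P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
        # Policy improvement: greedy gain with respect to the evaluated cost-to-go
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P
```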

Data-Driven Optimal Control of Affine Systems: A Linear Programming Perspective

no code implementations • 22 Mar 2022 • Andrea Martinelli, Matilde Gargiani, Marina Draskovic, John Lygeros

In this letter, we discuss the problem of optimal control for affine systems in the context of data-driven linear programming.

LEMMA
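
As background on the linear-programming view of optimal control, a minimal sketch follows: represent the value function with a few basis functions and maximize it at some states of interest, subject to Bellman inequalities enforced only at sampled transitions. The function names, sampling scheme, and discount factor are assumptions for illustration, not the letter's formulation.

```python
import numpy as np
from scipy.optimize import linprog

def lp_value_approx(phi, samples, relevance_states, gamma=0.95):
    # phi(x): feature vector of basis functions; samples: (x, u, stage_cost, x_next)
    # tuples from data. Illustrative approximate-DP linear program.
    n_feat = phi(relevance_states[0]).size
    # Objective: linprog minimizes, so negate to maximize sum over relevance states
    c = -sum(phi(x) for x in relevance_states)
    # Bellman inequalities: V(x) - gamma * V(x_next) <= stage_cost for each sample
    A_ub = np.array([phi(x) - gamma * phi(xn) for (x, _u, _cost, xn) in samples])
    b_ub = np.array([cost for (_x, _u, cost, _xn) in samples])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n_feat)
    return res.x  # weights of the approximate value function
```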

Convergence Analysis of Homotopy-SGD for non-convex optimization

no code implementations • 20 Nov 2020 • Matilde Gargiani, Andrea Zanelli, Quoc Tran-Dinh, Moritz Diehl, Frank Hutter

In this work, we present a first-order stochastic algorithm based on a combination of homotopy methods and SGD, called Homotopy-Stochastic Gradient Descent (H-SGD), which has interesting connections with several heuristics proposed in the literature, e.g., optimization by Gaussian continuation, training by diffusion, and mollifying networks.
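
A rough PyTorch sketch of the continuation idea behind H-SGD is given below, assuming the homotopy is a simple convex combination of an easy surrogate objective and the target objective; the paper's actual construction and analysis differ, and all names here are illustrative.

```python
import torch

def homotopy_sgd(model, easy_loss, target_loss, data_loader,
                 n_stages=10, epochs_per_stage=1, lr=0.01):
    # Sketch only: run SGD on a blended objective while the homotopy
    # parameter lam moves from the easy problem (lam=0) to the target (lam=1).
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for stage in range(n_stages + 1):
        lam = stage / n_stages
        for _ in range(epochs_per_stage):
            for x, y in data_loader:
                opt.zero_grad()
                loss = (1 - lam) * easy_loss(model, x, y) + lam * target_loss(model, x, y)
                loss.backward()
                opt.step()
    return model
```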

On the Promise of the Stochastic Generalized Gauss-Newton Method for Training DNNs

1 code implementation • 3 Jun 2020 • Matilde Gargiani, Andrea Zanelli, Moritz Diehl, Frank Hutter

This enables researchers to further study and improve this promising optimization technique, and hopefully to reconsider stochastic second-order methods as competitive optimization techniques for training DNNs; we also hope that the promise of SGN may lead to forward automatic differentiation being added to TensorFlow or PyTorch.

Second-order methods
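
To make the curvature object concrete, the snippet below sketches a generalized Gauss-Newton matrix-vector product G v = Jᵀ H_L J v with PyTorch's functional autograd utilities (whose jvp emulates the forward-mode product via a double-backward trick, which is why native forward-mode AD would help). The flattened-parameter interface and function names are assumptions for illustration, not the paper's implementation.

```python
import torch
from torch.autograd.functional import jvp, vjp, hvp

def ggn_vector_product(net_fn, loss_fn, params, v):
    # net_fn: flat parameter vector -> network outputs on a fixed batch
    # loss_fn: network outputs -> scalar loss
    # (illustrative names, not the paper's code)
    out, Jv = jvp(net_fn, params, v)      # J v: directional derivative of the outputs
    _, HJv = hvp(loss_fn, out, Jv)        # H_L (J v): loss curvature in output space
    _, JtHJv = vjp(net_fn, params, HJv)   # J^T H_L J v: pull back to parameter space
    return JtHJv
```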

Transferring Optimality Across Data Distributions via Homotopy Methods

no code implementations • ICLR 2020 • Matilde Gargiani, Andrea Zanelli, Quoc Tran Dinh, Moritz Diehl, Frank Hutter

Homotopy methods, also known as continuation methods, are a powerful mathematical tool for efficiently solving various problems in numerical analysis, including complex non-convex optimization problems where little or no prior knowledge about the location of the solutions is available.
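
The continuation idea in its classical numerical-analysis form can be illustrated with a tiny root-finding example: deform an easy problem into the hard one and warm-start each solve with the previous solution. This generic sketch is only meant to convey the mechanism and is unrelated to the paper's specific data-distribution homotopy.

```python
import numpy as np
from scipy.optimize import fsolve

def continuation_solve(F, x0, n_steps=20):
    # To solve the hard problem F(x, 1) = 0, trace solutions of F(x, lam) = 0
    # from an easy problem at lam = 0, warm-starting each Newton-type solve.
    x = x0
    for lam in np.linspace(0.0, 1.0, n_steps + 1):
        x = fsolve(lambda z: F(z, lam), x)
    return x

# Example homotopy: deform an easy linear equation into a harder nonlinear one
F = lambda x, lam: (1 - lam) * (x - 1.0) + lam * (np.tanh(3 * x) - 0.5)
print(continuation_solve(F, x0=np.array([0.0])))
```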

Probabilistic Rollouts for Learning Curve Extrapolation Across Hyperparameter Settings

1 code implementation • 10 Oct 2019 • Matilde Gargiani, Aaron Klein, Stefan Falkner, Frank Hutter

We propose probabilistic models that can extrapolate learning curves of iterative machine learning algorithms, such as stochastic gradient descent for training deep networks, based on training data with variable-length learning curves.

BIG-bench Machine Learning, Hyperparameter Optimization
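
As a point of contrast with the probabilistic models the paper proposes, the sketch below fits a single parametric power-law curve to the observed prefix of a learning curve and extrapolates it. The functional form and initialization are illustrative assumptions; the paper's models additionally quantify uncertainty and share information across hyperparameter settings, which this point estimate does not.

```python
import numpy as np
from scipy.optimize import curve_fit

def pow_law(t, a, b, c):
    # Simple power-law family often used to model decaying training/validation loss
    return c + a * t ** (-b)

def extrapolate_curve(observed_losses, horizon):
    # Fit the parametric curve to the observed prefix and evaluate it up to `horizon`.
    t = np.arange(1, len(observed_losses) + 1, dtype=float)
    params, _ = curve_fit(pow_law, t, observed_losses,
                          p0=[1.0, 0.5, observed_losses[-1]], maxfev=10000)
    t_future = np.arange(1, horizon + 1, dtype=float)
    return pow_law(t_future, *params)
```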

A Distributed Second-Order Algorithm You Can Trust

no code implementations • ICML 2018 • Celestine Dünner, Aurelien Lucchi, Matilde Gargiani, An Bian, Thomas Hofmann, Martin Jaggi

Due to the rapid growth of data and computational resources, distributed optimization has become an active research area in recent years.

Distributed Optimization, Second-order methods
