Search Results for author: Raghu Bollapragada

Found 11 papers, 0 papers with code

Adaptive Consensus: A network pruning approach for decentralized optimization

no code implementations · 6 Sep 2023 Suhail M. Shah, Albert S. Berahas, Raghu Bollapragada

We consider network-based decentralized optimization problems, where each node in the network possesses a local function and the objective is to collectively attain a consensus solution that minimizes the sum of all the local functions.

Network Pruning
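As a hedged illustration of the decentralized consensus setup described above (not the paper's adaptive pruning scheme), the sketch below runs plain decentralized gradient descent on a hypothetical three-node network: each node holds a simple quadratic local function, mixes its iterate with its neighbors' through a doubly stochastic matrix, and takes a local gradient step. The consensus minimizer of the sum is the mean of the local data.

```python
# Sketch only: decentralized gradient descent, x_i <- sum_j W_ij x_j - alpha * grad f_i(x_i),
# where node i holds f_i(x) = 0.5 * (x - a[i])**2. All values are illustrative.
a = [1.0, 2.0, 6.0]                 # local data at each node (hypothetical)
W = [[0.50, 0.25, 0.25],            # doubly stochastic mixing matrix
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
x = [0.0, 0.0, 0.0]                 # local iterates
alpha = 0.1                         # step size

for _ in range(500):
    grads = [xi - ai for xi, ai in zip(x, a)]                      # local gradients
    mixed = [sum(W[i][j] * x[j] for j in range(3)) for i in range(3)]  # mixing step
    x = [mi - alpha * g for mi, g in zip(mixed, grads)]

consensus = sum(a) / len(a)         # minimizer of the sum of local functions
```

With a constant step size the local iterates settle near (not exactly at) the consensus solution, which is the usual behavior of decentralized gradient descent.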

On the fast convergence of minibatch heavy ball momentum

no code implementations · 15 Jun 2022 Raghu Bollapragada, Tyler Chen, Rachel Ward

Simple stochastic momentum methods are widely used in machine learning optimization, but their strong practical performance is at odds with the absence of theoretical guarantees of acceleration in the literature.
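For reference, the heavy ball update the abstract concerns is the classic two-term recursion below, shown on a deterministic 1-D quadratic so the behavior is easy to verify; in the minibatch setting studied in the paper, the gradient would be averaged over a random sample. Step size and momentum values are illustrative.

```python
# Heavy ball momentum: x_{k+1} = x_k - alpha * grad + beta * (x_k - x_{k-1}),
# demonstrated on f(x) = 0.5 * x**2, whose minimizer is 0.
alpha, beta = 0.1, 0.9      # step size and momentum parameter (hypothetical)
x_prev, x = 5.0, 5.0        # start both iterates at the same point

for _ in range(1000):
    grad = x                                                  # f'(x) = x
    x, x_prev = x - alpha * grad + beta * (x - x_prev), x     # momentum step
```

The momentum term reuses the previous displacement, which is what yields the accelerated rates the paper analyzes for minibatch gradients.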

A Retrospective Approximation Approach for Smooth Stochastic Optimization

no code implementations · 7 Mar 2021 David Newton, Raghu Bollapragada, Raghu Pasupathy, Nung Kwan Yip

Our investigation leads naturally to generalizing SG into Retrospective Approximation (RA) where, during each iteration, a "deterministic solver" executes possibly multiple steps on a subsampled deterministic problem and stops when further solving is deemed unnecessary from the standpoint of statistical efficiency.

Image Classification · Stochastic Optimization
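The Retrospective Approximation loop described in the abstract can be sketched as follows, with plain gradient descent standing in for the "deterministic solver" and a simple geometric sample-growth rule in place of the paper's statistical stopping criteria. The problem (estimating the mean of synthetic Gaussian data) and all sizes are assumptions for illustration.

```python
import random

# RA sketch: each outer iteration draws a larger subsample, then a deterministic
# solver takes several steps on the resulting sample-average problem.
random.seed(0)
data = [random.gauss(3.0, 1.0) for _ in range(4096)]  # true minimizer near 3

x = 0.0
sample_size = 8
while sample_size <= len(data):
    sample = data[:sample_size]                  # subsampled deterministic problem
    for _ in range(50):                          # "deterministic solver" steps
        grad = sum(x - s for s in sample) / len(sample)
        x -= 0.5 * grad
    sample_size *= 2                             # grow the sample retrospectively
```

Early outer iterations are cheap and inexact; later ones refine the solution on larger samples, which is the efficiency trade-off RA formalizes.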

Constrained and Composite Optimization via Adaptive Sampling Methods

no code implementations · 31 Dec 2020 Yuchen Xie, Raghu Bollapragada, Richard Byrd, Jorge Nocedal

The motivation for this paper stems from the desire to develop an adaptive sampling method for solving constrained optimization problems in which the objective function is stochastic and the constraints are deterministic.

Adaptive Sampling Quasi-Newton Methods for Derivative-Free Stochastic Optimization

no code implementations · 29 Oct 2019 Raghu Bollapragada, Stefan M. Wild

We consider stochastic zero-order optimization problems, which arise in settings from simulation optimization to reinforcement learning.

Reinforcement Learning (RL) +1
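A minimal picture of the zero-order setting the abstract describes: only function values are available, so the gradient is estimated by finite differences before taking an ordinary descent step. The smoothing parameter, objective, and step size below are illustrative, not the paper's quasi-Newton scheme.

```python
# Derivative-free optimization via central finite differences.
def f(x):
    """Black-box objective; minimizer at x = 2 (only f-values may be queried)."""
    return (x - 2.0) ** 2

def fd_grad(func, x, h=1e-5):
    """Central-difference estimate of func'(x)."""
    return (func(x + h) - func(x - h)) / (2 * h)

x = 10.0
for _ in range(200):
    x -= 0.25 * fd_grad(f, x)   # descent step using the estimated gradient
```

In the stochastic zero-order regime of the paper, each call to `f` would additionally be noisy, which is what motivates their adaptive sampling of function evaluations.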

A Progressive Batching L-BFGS Method for Machine Learning

no code implementations · ICML 2018 Raghu Bollapragada, Dheevatsa Mudigere, Jorge Nocedal, Hao-Jun Michael Shi, Ping Tak Peter Tang

The standard L-BFGS method relies on gradient approximations that are not dominated by noise, so that search directions are descent directions, the line search is reliable, and quasi-Newton updating yields useful quadratic models of the objective function.

BIG-bench Machine Learning
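To make the L-BFGS machinery the abstract refers to concrete, here is a hedged sketch of the standard two-loop recursion that builds a quasi-Newton direction from stored (s, y) pairs, applied to a deterministic 2-D quadratic with an exact line search. The paper's progressive batching (growing the sample behind each gradient so that noise does not dominate) is deliberately not reproduced.

```python
# L-BFGS two-loop recursion on f(x) = 0.5 * (x0**2 + 10 * x1**2).
def grad(x):
    return [x[0], 10.0 * x[1]]

def lbfgs_direction(g, history):
    """Two-loop recursion with identity initial Hessian approximation."""
    q = list(g)
    alphas = []
    for s, y in reversed(history):                 # newest pair first
        rho = 1.0 / sum(si * yi for si, yi in zip(s, y))
        a = rho * sum(si * qi for si, qi in zip(s, q))
        alphas.append((rho, a))
        q = [qi - a * yi for qi, yi in zip(q, y)]
    for (s, y), (rho, a) in zip(history, reversed(alphas)):  # oldest first
        b = rho * sum(yi * qi for yi, qi in zip(y, q))
        q = [qi + (a - b) * si for qi, si in zip(q, s)]
    return [-qi for qi in q]

x = [5.0, 1.0]
g = grad(x)
history = []                                       # stored (s, y) pairs
for _ in range(20):
    if sum(gi * gi for gi in g) < 1e-18:
        break                                      # converged
    d = lbfgs_direction(g, history)
    dAd = d[0] * d[0] + 10.0 * d[1] * d[1]         # exact line search for this f
    t = -sum(gi * di for gi, di in zip(g, d)) / dAd
    x_new = [xi + t * di for xi, di in zip(x, d)]
    g_new = grad(x_new)
    history.append(([xn - xo for xn, xo in zip(x_new, x)],
                    [gn - go for gn, go in zip(g_new, g)]))
    history = history[-5:]                         # keep a memory of 5 pairs
    x, g = x_new, g_new
```

The curvature pairs (s, y) are exactly the quantities that become unreliable under gradient noise, which is why the paper controls batch sizes before updating.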

Adaptive Sampling Strategies for Stochastic Optimization

no code implementations · 30 Oct 2017 Raghu Bollapragada, Richard Byrd, Jorge Nocedal

In this paper, we propose a stochastic optimization method that adaptively controls the sample size used in the computation of gradient approximations.

Regression · Stochastic Optimization
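A hedged sketch of adaptive sample-size control in the spirit of the abstract: the sample variance of per-example gradients is compared against the squared norm of the averaged gradient, and the sample grows when the variance dominates. The specific test, constants, and problem below are illustrative stand-ins, not the paper's exact conditions.

```python
import random

# Adaptive sampling SGD on a 1-D least-squares-style problem with minimizer near 4.
random.seed(1)
data = [random.gauss(4.0, 2.0) for _ in range(10000)]

x, n, theta = 0.0, 16, 2.0          # iterate, current sample size, test constant
for _ in range(300):
    sample = random.sample(data, n)
    grads = [x - s for s in sample]                     # per-example gradients
    g = sum(grads) / n                                  # averaged gradient
    var = sum((gi - g) ** 2 for gi in grads) / max(n - 1, 1)
    if var / n > theta * g * g and n < len(data):       # variance dominates signal
        n = min(2 * n, len(data))                       # increase sample size
    x -= 0.3 * g
```

Early on, a small sample suffices; near the solution the true gradient shrinks, the test fails, and the sample grows, which mirrors the adaptive control the paper proposes.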

An Investigation of Newton-Sketch and Subsampled Newton Methods

no code implementations · 17 May 2017 Albert S. Berahas, Raghu Bollapragada, Jorge Nocedal

Sketching, a dimensionality reduction technique, has received much attention in the statistics community.

Dimensionality Reduction
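The sketching idea behind Newton-Sketch can be illustrated in a few lines: a tall n x d matrix A is compressed to m x d (m << n) by a random sign sketch S, and the small Gram matrix (SA)^T (SA) approximates A^T A, which is the quantity a sketched Newton method needs. Sizes and the sketch type here are assumptions for illustration.

```python
import random

# Random sign sketch: rows of S are +/- 1/sqrt(m), so E[S^T S] = I.
random.seed(3)
n, d, m = 400, 2, 100
A = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]
S = [[random.choice((-1.0, 1.0)) / m ** 0.5 for _ in range(n)] for _ in range(m)]
SA = [[sum(S[i][k] * A[k][j] for k in range(n)) for j in range(d)]
      for i in range(m)]                              # compressed m x d matrix

def gram(M):
    cols = len(M[0])
    return [[sum(row[i] * row[j] for row in M) for j in range(cols)]
            for i in range(cols)]

G, Gs = gram(A), gram(SA)   # Gs approximates G using a quarter of the rows
```

The approximation is only up to random fluctuation, which is exactly the trade-off the paper weighs against subsampled Newton methods.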

Exact and Inexact Subsampled Newton Methods for Optimization

no code implementations · 27 Sep 2016 Raghu Bollapragada, Richard Byrd, Jorge Nocedal

The paper studies the solution of stochastic optimization problems in which approximations to the gradient and Hessian are obtained through subsampling.

Stochastic Optimization
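An illustrative one-dimensional subsampled Newton iteration in the spirit of the abstract: the gradient is computed on the full dataset while the Hessian is estimated from a small random subsample. The component functions f_i(x) = 0.5 * curv[i] * (x - data[i])**2 and all sizes are synthetic assumptions, not the paper's experiments.

```python
import random

# Subsampled Newton: exact gradient, Hessian averaged over a small subsample.
random.seed(2)
curv = [random.uniform(0.5, 1.5) for _ in range(2000)]   # per-component curvatures
data = [random.gauss(1.0, 1.0) for _ in range(2000)]

x = 10.0
for _ in range(50):
    g = sum(c * (x - b) for c, b in zip(curv, data)) / len(data)  # full gradient
    idx = random.sample(range(len(data)), 100)                    # Hessian subsample
    h = sum(curv[i] for i in idx) / len(idx)                      # subsampled Hessian
    x -= g / h                                                    # Newton-type step
```

Because the subsampled Hessian concentrates around the true average curvature, each step contracts the error sharply, which is the fast local behavior such methods are studied for.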
