Search Results for author: Jorge Nocedal

Found 10 papers, 3 papers with code

Constrained and Composite Optimization via Adaptive Sampling Methods

no code implementations • 31 Dec 2020 • Yuchen Xie, Raghu Bollapragada, Richard Byrd, Jorge Nocedal

This paper develops an adaptive sampling method for solving constrained optimization problems in which the objective function is stochastic and the constraints are deterministic.

A Noise-Tolerant Quasi-Newton Algorithm for Unconstrained Optimization

1 code implementation • 9 Oct 2020 • Hao-Jun Michael Shi, Yuchen Xie, Richard Byrd, Jorge Nocedal

This paper describes an extension of the BFGS and L-BFGS methods for the minimization of a nonlinear function subject to errors.

Optimization and Control
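
The following is a minimal, illustrative sketch of the kind of safeguard such an extension relies on: a BFGS update that skips curvature pairs whose curvature s^T y is swamped by gradient noise, exercised on a toy noisy quadratic. The function names, fixed step length, and tolerance are illustrative assumptions; this is not the paper's actual noise-tolerant line search or curvature-pair construction.

```python
import numpy as np

def bfgs_update(H, s, y, curvature_tol=1e-10):
    """Standard BFGS update of the inverse Hessian approximation:
    H <- (I - rho s y^T) H (I - rho y s^T) + rho s s^T, rho = 1 / (s^T y).
    The pair (s, y) is skipped when its curvature s^T y is negligible, a
    common safeguard when gradient differences are dominated by noise."""
    sy = float(s @ y)
    if sy <= curvature_tol * np.linalg.norm(s) * np.linalg.norm(y):
        return H                     # skip rather than corrupt H with a noisy pair
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def bfgs_with_noisy_gradients(grad, x0, lr=0.2, iters=100):
    """Toy BFGS loop driven by noisy gradients; a fixed step length stands in
    for a line search."""
    x, H = x0.copy(), np.eye(len(x0))
    g = grad(x)
    for _ in range(iters):
        x_new = x - lr * (H @ g)
        g_new = grad(x_new)
        H = bfgs_update(H, x_new - x, g_new - g)
        x, g = x_new, g_new
    return x

# Example: a quadratic whose gradient carries additive noise of size ~1e-3.
rng = np.random.default_rng(0)
A = np.diag([1.0, 2.0, 4.0])
noisy_grad = lambda x: A @ x + 1e-3 * rng.standard_normal(3)
print(bfgs_with_noisy_gradients(noisy_grad, np.ones(3)))
```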

A Progressive Batching L-BFGS Method for Machine Learning

no code implementations • ICML 2018 • Raghu Bollapragada, Dheevatsa Mudigere, Jorge Nocedal, Hao-Jun Michael Shi, Ping Tak Peter Tang

The standard L-BFGS method relies on gradient approximations that are not dominated by noise, so that search directions are descent directions, the line search is reliable, and quasi-Newton updating yields useful quadratic models of the objective function.

BIG-bench Machine Learning
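
To make the quantities in that statement concrete, here is a hedged sketch of L-BFGS with a progressively growing batch: the two-loop recursion is standard, while the geometric batch-growth schedule, fixed step length, and helper names (two_loop_direction, progressive_batching_lbfgs, grad_batch) are illustrative stand-ins for the paper's statistical batch-size tests and line search.

```python
import numpy as np

def two_loop_direction(g, pairs):
    """L-BFGS two-loop recursion: returns -H_k g built from curvature
    pairs (s, y), ordered oldest to newest."""
    q = g.copy()
    alphas = []
    rhos = [1.0 / (y @ s) for s, y in pairs]
    for (s, y), rho in zip(reversed(pairs), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if pairs:                         # initial scaling H_0 = (s^T y / y^T y) * I
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), rho, a in zip(pairs, rhos, reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return -q

def progressive_batching_lbfgs(grad_batch, n, x0, iters=50, memory=10,
                               batch0=32, growth=1.1, lr=0.5):
    """Toy progressive-batching loop: the batch grows geometrically so that
    gradient estimates sharpen as the iterates improve. (Illustrative schedule,
    not the paper's statistical batch-size test or line search.)"""
    x, pairs, batch = x0.copy(), [], float(batch0)
    g = grad_batch(x, min(int(batch), n))
    for _ in range(iters):
        d = two_loop_direction(g, pairs)
        x_new = x + lr * d                    # fixed step in place of a line search
        batch = min(batch * growth, n)
        g_new = grad_batch(x_new, int(batch))
        s, y = x_new - x, g_new - g
        if s @ y > 1e-10:                     # keep only positive-curvature pairs
            pairs = (pairs + [(s, y)])[-memory:]
        x, g = x_new, g_new
    return x

# Usage on a toy least-squares problem; grad_batch subsamples rows of (A, b).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((1000, 5)), rng.standard_normal(1000)
def grad_batch(x, size):
    idx = rng.choice(len(b), size, replace=False)
    return A[idx].T @ (A[idx] @ x - b[idx]) / size
print(progressive_batching_lbfgs(grad_batch, len(b), np.zeros(5)))
```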

Adaptive Sampling Strategies for Stochastic Optimization

no code implementations • 30 Oct 2017 • Raghu Bollapragada, Richard Byrd, Jorge Nocedal

In this paper, we propose a stochastic optimization method that adaptively controls the sample size used in the computation of gradient approximations.

Regression • Stochastic Optimization
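
A minimal sketch of one such sample-size control, assuming a simple variance ("norm") test rather than the refined inner product and orthogonality tests developed in the paper; the function name, theta, and the resizing rule are illustrative.

```python
import numpy as np

def adaptive_sample_size(per_example_grads, theta=0.9):
    """Variance ("norm") test for choosing the gradient sample size.

    per_example_grads: array of shape (S, d), one gradient per sampled example.
    Returns the averaged gradient and a suggested sample size: if the estimated
    variance of the averaged gradient is large relative to its squared norm,
    a bigger sample is requested."""
    S, _ = per_example_grads.shape
    g = per_example_grads.mean(axis=0)
    # Estimated variance of the averaged gradient (trace of its covariance).
    var_of_mean = np.var(per_example_grads, axis=0, ddof=1).sum() / S
    target = theta ** 2 * (g @ g) + 1e-16
    if var_of_mean <= target:
        return g, S                                   # current sample is accurate enough
    return g, int(np.ceil(S * var_of_mean / target))  # grow the sample to pass the test

# Usage: per-example least-squares gradients a_i (a_i^T x - b_i) for a batch of 64.
rng = np.random.default_rng(0)
A, b, x = rng.standard_normal((64, 5)), rng.standard_normal(64), np.zeros(5)
print(adaptive_sample_size(A * (A @ x - b)[:, None]))
```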

An Investigation of Newton-Sketch and Subsampled Newton Methods

no code implementations • 17 May 2017 • Albert S. Berahas, Raghu Bollapragada, Jorge Nocedal

Sketching, a dimensionality reduction technique, has received much attention in the statistics community.

Dimensionality Reduction
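
For orientation, a hypothetical sketch of a single sketched Newton step: a Gaussian sketch compresses a Hessian square root before the Newton system is solved. The Gaussian sketch, the function newton_sketch_step, and the one-step setting are assumptions made for illustration, not the specific sketches or iterative schemes compared in the paper.

```python
import numpy as np

def newton_sketch_step(hess_sqrt, grad, sketch_size, rng):
    """One sketched Newton step (illustrative): given an n x d matrix C with
    H = C^T C (a Hessian "square root"), compress C with a random Gaussian
    sketch S and solve the much smaller system (SC)^T (SC) d = -grad."""
    n, d = hess_sqrt.shape
    S = rng.standard_normal((sketch_size, n)) / np.sqrt(sketch_size)
    SC = S @ hess_sqrt                     # sketch_size x d, with sketch_size << n
    return np.linalg.solve(SC.T @ SC, -grad)

# Usage on least squares, where C = A and grad = A^T (A x - b).
rng = np.random.default_rng(0)
A, b, x = rng.standard_normal((2000, 5)), rng.standard_normal(2000), np.zeros(5)
d = newton_sketch_step(A, A.T @ (A @ x - b), sketch_size=100, rng=rng)
print(x + d)
```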

Exact and Inexact Subsampled Newton Methods for Optimization

no code implementations • 27 Sep 2016 • Raghu Bollapragada, Richard Byrd, Jorge Nocedal

The paper studies the solution of stochastic optimization problems in which approximations to the gradient and Hessian are obtained through subsampling.

Stochastic Optimization
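
A minimal sketch of one inexact subsampled Newton step, assuming the subsampled Hessian is accessed only through Hessian-vector products and the linear system is solved approximately by conjugate gradient; the tolerances, sample size, and absence of a line search are simplifications relative to the paper's conditions.

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-2, maxiter=20):
    """Plain conjugate gradient for A x = b, with A (assumed SPD) given via matvec."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def subsampled_newton_step(hess_vec_on_sample, grad_full, cg_tol=1e-2, cg_iters=20):
    """Inexact subsampled Newton step: approximately solve  H_S d = -g  by CG,
    where H_S is the Hessian averaged over a random subsample and is never
    formed explicitly."""
    return conjugate_gradient(hess_vec_on_sample, -grad_full, tol=cg_tol, maxiter=cg_iters)

# Usage on least squares: H_S v = A_S^T (A_S v) / |S|, g = A^T (A x - b) / n.
rng = np.random.default_rng(0)
A, b, x = rng.standard_normal((1000, 5)), rng.standard_normal(1000), np.zeros(5)
S = rng.choice(1000, 100, replace=False)
d = subsampled_newton_step(lambda v: A[S].T @ (A[S] @ v) / 100,
                           A.T @ (A @ x - b) / 1000)
print(x + d)
```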

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima

9 code implementations • 15 Sep 2016 • Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang

The stochastic gradient descent (SGD) method and its variants are algorithms of choice for many Deep Learning tasks.

Optimization Methods for Large-Scale Machine Learning

4 code implementations • 15 Jun 2016 • Léon Bottou, Frank E. Curtis, Jorge Nocedal

This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications.

BIG-bench Machine Learning • Text Classification

A Multi-Batch L-BFGS Method for Machine Learning

no code implementations • NeurIPS 2016 • Albert S. Berahas, Jorge Nocedal, Martin Takáč

The question of how to parallelize the stochastic gradient descent (SGD) method has received much attention in the literature.

BIG-bench Machine Learning • Distributed Computing

Newton-Like Methods for Sparse Inverse Covariance Estimation

no code implementations • NeurIPS 2012 • Figen Oztoprak, Jorge Nocedal, Steven Rennie, Peder A. Olsen

The second of the paper's approaches, called the Orthant-Based Newton method, is a two-phase algorithm that first identifies an orthant face and then minimizes a smooth quadratic approximation of the objective function using the conjugate gradient method.
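
Below is a hedged sketch of that two-phase idea for a generic L1-regularized smooth objective (not the inverse covariance objective itself): phase one picks an orthant face from the minimum-norm subgradient, phase two runs conjugate gradient on the smooth quadratic model restricted to that face, and the trial point is projected back onto the chosen orthant. The helper names, stopping rules, and projection are illustrative simplifications.

```python
import numpy as np

def pseudo_gradient(g, x, lam):
    """Minimum-norm subgradient of F(x) = f(x) + lam * ||x||_1, given g = grad f(x)."""
    pg = np.where(x > 0, g + lam, np.where(x < 0, g - lam, 0.0))
    at_zero = (x == 0)
    pg = np.where(at_zero & (g + lam < 0), g + lam, pg)
    pg = np.where(at_zero & (g - lam > 0), g - lam, pg)
    return pg

def orthant_based_step(g, hess_vec, x, lam, cg_iters=20):
    """One illustrative orthant-based step: pick an orthant face from the
    pseudo-gradient, minimize the smooth quadratic model on that face with CG,
    and project the trial point back onto the chosen orthant."""
    pg = pseudo_gradient(g, x, lam)
    orthant = np.where(x != 0, np.sign(x), -np.sign(pg))   # chosen sign pattern
    free = orthant != 0                                     # variables allowed to move
    # CG on the reduced model: minimize pg^T d + 0.5 d^T H d over free variables.
    d = np.zeros_like(x)
    r = -pg * free
    p = r.copy()
    rs = r @ r
    for _ in range(cg_iters):
        Hp = hess_vec(p) * free
        if p @ Hp <= 0:                                     # guard against non-convexity
            break
        alpha = rs / (p @ Hp)
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    x_trial = x + d
    # Variables that leave the chosen orthant are fixed at zero.
    return np.where(np.sign(x_trial) == orthant, x_trial, 0.0)

# Usage on a small L1-regularized quadratic: f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 0.5], [0.5, 2.0]]); b = np.array([1.0, -0.2]); lam = 0.3
x = np.zeros(2)
for _ in range(10):
    x = orthant_based_step(A @ x - b, lambda v: A @ v, x, lam)
print(x)
```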
