A modified limited memory Nesterov's accelerated quasi-Newton

The Nesterov's accelerated quasi-Newton (L)NAQ method has been shown to accelerate the conventional (L)BFGS quasi-Newton method using Nesterov's accelerated gradient in several neural network (NN) applications. However, the calculation of two gradients per iteration increases the computational cost. The Momentum accelerated Quasi-Newton (MoQ) method showed that Nesterov's accelerated gradient can be approximated as a linear combination of past gradients. This abstract extends the MoQ approximation to the limited memory NAQ method and evaluates its performance on a function approximation problem.
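The sketch below illustrates the general idea under stated assumptions; it is not the authors' implementation. It assumes an MoQ-style approximation of the gradient at the Nesterov look-ahead point, grad(w + mu*v) ≈ (1 + mu)*grad(w) - mu*grad(w_prev), combined with a standard L-BFGS two-loop recursion and a momentum update. Function names, the curvature-pair choice, and all constants are illustrative.

```python
import numpy as np

def two_loop_recursion(grad, s_hist, y_hist):
    """Standard L-BFGS two-loop recursion: approximates H @ grad from (s, y) pairs."""
    q = grad.copy()
    alphas = []
    for s, y in reversed(list(zip(s_hist, y_hist))):   # newest pair first
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        alphas.append((alpha, rho, s, y))
    if s_hist:
        s, y = s_hist[-1], y_hist[-1]
        q *= (s @ y) / (y @ y)                          # initial Hessian scaling
    for alpha, rho, s, y in reversed(alphas):           # oldest pair first
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return q

def lmoq(f_grad, w0, mu=0.8, lr=0.5, memory=8, iters=50):
    """Limited-memory quasi-Newton iteration driven by a momentum-approximated
    Nesterov gradient (MoQ-style). A sketch, not the paper's exact algorithm."""
    w = w0.copy()
    v = np.zeros_like(w0)
    g = f_grad(w)           # gradient at the current iterate
    g_prev = g.copy()       # gradient at the previous iterate
    s_hist, y_hist = [], []
    for _ in range(iters):
        # MoQ-style approximation of the gradient at the look-ahead point w + mu*v,
        # built from current and previous gradients (no extra gradient evaluation)
        g_nag = (1.0 + mu) * g - mu * g_prev
        d = -two_loop_recursion(g_nag, s_hist, y_hist)
        w_nest = w + mu * v              # look-ahead point (gradient never evaluated here)
        v = mu * v + lr * d
        w_new = w + v
        g_new = f_grad(w_new)            # the single new gradient per iteration
        s, y = w_new - w_nest, g_new - g_nag
        if s @ y > 1e-10:                # curvature condition keeps the update stable
            s_hist.append(s); y_hist.append(y)
            if len(s_hist) > memory:
                s_hist.pop(0); y_hist.pop(0)
        w, g_prev, g = w_new, g, g_new
    return w

# Quick sanity check on a convex quadratic f(w) = 0.5 w^T A w - b^T w
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
w_star = lmoq(lambda w: A @ w - b, np.zeros(2))
print(w_star, np.linalg.solve(A, b))     # the two should roughly agree
```

Because only the current and previous gradients enter the approximation, each iteration needs a single new gradient evaluation, which is the cost saving that motivates MoQ over (L)NAQ.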
