Search Results for author: Nikita Doikov

Found 11 papers, 1 paper with code

First and zeroth-order implementations of the regularized Newton method with lazy approximated Hessians

no code implementations 5 Sep 2023 Nikita Doikov, Geovani Nunes Grapiglia

In this work, we develop first-order (Hessian-free) and zeroth-order (derivative-free) implementations of the Cubically regularized Newton method for solving general non-convex optimization problems.
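
As a rough illustration of what a Hessian-free implementation of a cubically regularized Newton step can look like, here is a minimal Python sketch that approximates the Hessian by finite differences of gradients and solves the cubic subproblem by bisection; the function names, the difference step `h`, and the regularization constant `M` are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def approx_hessian_fd(grad, x, h=1e-5):
    """Forward finite-difference approximation of the Hessian from gradient calls only."""
    d = x.size
    g0 = grad(x)
    H = np.zeros((d, d))
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        H[:, i] = (grad(x + e) - g0) / h
    return 0.5 * (H + H.T)  # symmetrize

def cubic_newton_step(g, H, M):
    """Minimize g^T s + 0.5 s^T H s + (M/6)||s||^3 by bisection on r = ||s||
    (ignoring the so-called hard case)."""
    d = g.size
    lam_min = np.linalg.eigvalsh(H)[0]
    lo = max(0.0, -2.0 * lam_min / M) + 1e-12       # keep H + (M r / 2) I positive definite
    s_of = lambda r: -np.linalg.solve(H + 0.5 * M * r * np.eye(d), g)
    hi = lo + 1.0
    while np.linalg.norm(s_of(hi)) > hi:            # enlarge the bracket for the root
        hi *= 2.0
    for _ in range(60):                             # bisection on ||s(r)|| = r
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(s_of(mid)) > mid:
            lo = mid
        else:
            hi = mid
    return s_of(hi)

def cubic_newton_fd(grad, x0, M=10.0, iters=30):
    """Hessian-free cubic Newton loop: only gradient evaluations are used."""
    x = x0.copy()
    for _ in range(iters):
        H = approx_hessian_fd(grad, x)
        x = x + cubic_newton_step(grad(x), H, M)
    return x
```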

Minimizing Quasi-Self-Concordant Functions by Gradient Regularization of Newton Method

no code implementations 28 Aug 2023 Nikita Doikov

In this paper, we show that for minimizing Quasi-Self-Concordant functions we can instead use the basic Newton Method with Gradient Regularization.
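
For context, a minimal sketch of a Newton step with gradient regularization is given below; the coupling `lam = reg_const * ||g||` is an illustrative assumption, and the paper derives the precise scaling for Quasi-Self-Concordant objectives.

```python
import numpy as np

def gradient_regularized_newton(grad, hess, x0, reg_const=1.0, iters=50, tol=1e-8):
    """Newton's method with gradient regularization (sketch):
        x_{k+1} = x_k - (H_k + lam_k I)^{-1} g_k,   with lam_k tied to ||g_k||.
    The coupling lam_k = reg_const * ||g_k|| is illustrative."""
    x = x0.copy()
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        lam = reg_const * np.linalg.norm(g)
        x = x - np.linalg.solve(hess(x) + lam * np.eye(x.size), g)
    return x
```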

On Convergence of Incremental Gradient for Non-Convex Smooth Functions

no code implementations 30 May 2023 Anastasia Koloskova, Nikita Doikov, Sebastian U. Stich, Martin Jaggi

In machine learning and neural network optimization, algorithms such as Incremental Gradient and shuffle SGD are popular because they minimize the number of cache misses and exhibit good practical convergence behavior.
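
The difference between these data-permutation schemes and with-replacement SGD is easy to see in code; the generic sketch below (with a hypothetical per-sample gradient oracle `grad_i`) is illustrative and not tied to the paper's analysis.

```python
import numpy as np

def shuffle_sgd(grad_i, x0, n, lr=0.01, epochs=10, rng=None):
    """Random-reshuffling SGD (sketch): each epoch visits every sample exactly once
    in a fresh random order, instead of sampling with replacement."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    for _ in range(epochs):
        for i in rng.permutation(n):      # one full pass over the data, shuffled
            x = x - lr * grad_i(x, i)     # grad_i(x, i): gradient of the i-th component
    return x

def incremental_gradient(grad_i, x0, n, lr=0.01, epochs=10):
    """Incremental gradient (sketch): same pass structure but with a fixed cyclic
    order, which maximizes cache locality."""
    x = x0.copy()
    for _ in range(epochs):
        for i in range(n):
            x = x - lr * grad_i(x, i)
    return x
```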

Linearization Algorithms for Fully Composite Optimization

no code implementations 24 Feb 2023 Maria-Luiza Vladarean, Nikita Doikov, Martin Jaggi, Nicolas Flammarion

This paper studies first-order algorithms for solving fully composite optimization problems over convex and compact sets.
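
As a generic illustration of a linearization step over a compact convex set (not the paper's exact algorithm), here is a conditional-gradient-style sketch in which a hypothetical `subproblem_oracle` minimizes the linearized smooth part, together with whatever nonsmooth part is kept exact, over the feasible set.

```python
import numpy as np

def linearization_method(grad_f, subproblem_oracle, x0, iters=100):
    """Conditional-gradient-style sketch: linearize the smooth part f around x_k,
    let an oracle minimize the resulting model over the compact feasible set,
    then move by convex combination so feasibility is preserved."""
    x = x0.copy()
    for k in range(iters):
        v = subproblem_oracle(grad_f(x))   # hypothetical oracle for the linearized subproblem
        gamma = 2.0 / (k + 2.0)            # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * v
    return x

# Special case: plain Frank-Wolfe over the probability simplex (no nonsmooth part)
def simplex_oracle(g):
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0
    return v
```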

Unified Convergence Theory of Stochastic and Variance-Reduced Cubic Newton Methods

no code implementations 23 Feb 2023 El Mahdi Chayti, Nikita Doikov, Martin Jaggi

Our helper framework offers the algorithm designer high flexibility for constructing and analyzing stochastic Cubic Newton methods: it allows arbitrary-size batches and the use of noisy, possibly biased estimates of the gradients and Hessians, and it incorporates both variance reduction and lazy Hessian updates.
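
One ingredient of such a framework, a variance-reduced gradient estimate built around a snapshot ("helper") point, can be sketched as follows; the names and batching are illustrative, and the same control-variate idea extends to Hessian estimates.

```python
import numpy as np

def vr_gradient_estimate(grad_i, x, x_snap, full_grad_snap, batch):
    """SVRG-style control variate around a snapshot point (sketch):
        g_hat = (1/|B|) * sum_{i in B} [grad_i(x) - grad_i(x_snap)] + grad f(x_snap).
    grad_i(x, i) is a hypothetical oracle returning the gradient of the i-th component."""
    g = np.zeros_like(x)
    for i in batch:
        g += grad_i(x, i) - grad_i(x_snap, i)
    return g / len(batch) + full_grad_snap
```

Such an estimate (and its Hessian analogue) can then be plugged into a cubically regularized Newton step in place of the exact derivatives.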

Auxiliary Learning

Polynomial Preconditioning for Gradient Methods

no code implementations 30 Jan 2023 Nikita Doikov, Anton Rodomanov

We study first-order methods with preconditioning for solving structured nonlinear convex optimization problems.
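
A concrete example of a polynomial preconditioner, a truncated Neumann series applied using only matrix-vector products, is sketched below; the particular polynomial and the step size `alpha` are illustrative choices, not the paper's construction.

```python
import numpy as np

def poly_precondition(matvec, g, degree=5, alpha=0.1):
    """Apply P(A) g = alpha * sum_{j=0}^{degree} (I - alpha*A)^j g using only
    matrix-vector products: a truncated Neumann series that approximates A^{-1} g
    whenever A is positive definite and 0 < alpha < 2 / lambda_max(A)."""
    v = g.copy()
    acc = g.copy()
    for _ in range(degree):
        v = v - alpha * matvec(v)   # v <- (I - alpha*A) v
        acc = acc + v
    return alpha * acc

def preconditioned_gradient_step(grad, matvec, x, lr=1.0, degree=5, alpha=0.1):
    """One gradient step preconditioned by a polynomial of A: x+ = x - lr * P(A) grad f(x)."""
    return x - lr * poly_precondition(matvec, grad(x), degree, alpha)
```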

Second-order optimization with lazy Hessians

no code implementations 1 Dec 2022 Nikita Doikov, El Mahdi Chayti, Martin Jaggi

This provably improves the total arithmetical complexity of second-order algorithms by a factor $\sqrt{d}$.
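
The core idea, reusing one Hessian factorization across several Newton-type steps, can be sketched as follows; the fixed Tikhonov term `reg` stands in for the paper's cubic regularization to keep the example short.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def lazy_newton(grad, hess, x0, m=10, reg=1e-3, iters=100, tol=1e-8):
    """Newton-type iterations with lazy Hessians (sketch): the Hessian is recomputed
    and factorized only once every m steps and the factorization is reused in between."""
    x = x0.copy()
    factor = None
    for k in range(iters):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        if k % m == 0:                          # refresh the Hessian snapshot
            H = hess(x) + reg * np.eye(x.size)  # assumes the shifted matrix is positive definite
            factor = cho_factor(H)              # one O(d^3) factorization per m steps
        x = x - cho_solve(factor, g)            # each step costs only O(d^2)
    return x
```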

Super-Universal Regularized Newton Method

no code implementations 11 Aug 2022 Nikita Doikov, Konstantin Mishchenko, Yurii Nesterov

We analyze the performance of a variant of the Newton method with quadratic regularization for solving composite convex minimization problems.
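
A hedged sketch of such a step, with the regularizer tied to $\sqrt{H \|\nabla f(x)\|}$ and a simple doubling/halving rule for the constant $H$ (a crude stand-in for the paper's adaptive test), is shown below.

```python
import numpy as np

def regularized_newton_adaptive(f, grad, hess, x0, H=1.0, iters=50, tol=1e-8):
    """Quadratically regularized Newton step with lam = sqrt(H * ||g||) and a simple
    doubling/halving rule for H; the plain-decrease acceptance test is a crude
    simplification of the paper's adaptive rule."""
    x = x0.copy()
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        Hx = hess(x)
        for _ in range(60):                       # adjust H until the step is accepted
            lam = np.sqrt(H * np.linalg.norm(g))
            x_new = x - np.linalg.solve(Hx + lam * np.eye(x.size), g)
            if f(x_new) <= f(x):
                H = max(H / 2.0, 1e-12)           # be more optimistic next time
                x = x_new
                break
            H *= 2.0                              # step too aggressive: increase regularization
    return x
```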

Stochastic Subspace Cubic Newton Method

no code implementations ICML 2020 Filip Hanzely, Nikita Doikov, Peter Richtárik, Yurii Nesterov

In this paper, we propose a new randomized second-order optimization algorithm, Stochastic Subspace Cubic Newton (SSCN), for minimizing a high-dimensional convex function $f$.
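
As a rough sketch of one iteration restricted to a random coordinate block (a special case of the random subspaces considered in the paper), with illustrative names and a generic solver for the small cubic subproblem:

```python
import numpy as np
from scipy.optimize import minimize

def sscn_step(grad, hess, x, block_size, M, rng):
    """One subspace cubic Newton step (sketch): sample a random coordinate block S,
    restrict the gradient and Hessian to S, minimize the cubic model there, and
    update only the sampled coordinates."""
    S = rng.choice(x.size, size=block_size, replace=False)
    g = grad(x)[S]
    H = hess(x)[np.ix_(S, S)]
    model = lambda s: g @ s + 0.5 * s @ H @ s + (M / 6.0) * np.linalg.norm(s) ** 3
    s = minimize(model, np.zeros(block_size)).x   # the subproblem is low-dimensional
    x_new = x.copy()
    x_new[S] += s
    return x_new
```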

Second-order methods

Inexact Tensor Methods with Dynamic Accuracies

1 code implementation ICML 2020 Nikita Doikov, Yurii Nesterov

In this paper, we study inexact high-order Tensor Methods for solving convex optimization problems with composite objective.

Optimization and Control

Randomized Block Cubic Newton Method

no code implementations ICML 2018 Nikita Doikov, Peter Richtárik

To this effect we propose and analyze a randomized block cubic Newton (RBCN) method, which in each iteration builds a model of the objective function formed as the sum of the natural models of its three components: a linear model with a quadratic regularizer for the differentiable term, a quadratic model with a cubic regularizer for the twice differentiable term, and a perfect (proximal) model for the nonsmooth term.

regression
