Search Results for author: Ali Kavis

Found 11 papers, 0 papers with code

Double-Loop Unadjusted Langevin Algorithm

no code implementations · ICML 2020 · Paul Rolland, Armin Eftekhari, Ali Kavis, Volkan Cevher

A well-known first-order method for sampling from log-concave probability distributions is the Unadjusted Langevin Algorithm (ULA).
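
As a reference point, here is a minimal NumPy sketch of the standard single-loop ULA update $x_{k+1} = x_k + \gamma \nabla \log \pi(x_k) + \sqrt{2\gamma}\,\xi_k$; the step size and toy Gaussian target are illustrative, and this is not the paper's double-loop scheme.

    import numpy as np

    def ula_sample(grad_log_pi, x0, step=1e-2, n_steps=5000, rng=None):
        """Standard ULA: x <- x + step * grad log pi(x) + sqrt(2*step) * N(0, I)."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float)
        samples = []
        for _ in range(n_steps):
            noise = rng.standard_normal(x.shape)
            x = x + step * grad_log_pi(x) + np.sqrt(2 * step) * noise
            samples.append(x.copy())
        return np.array(samples)

    # Toy target: standard Gaussian, where grad log pi(x) = -x.
    chain = ula_sample(lambda x: -x, x0=np.zeros(2))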

Adaptive Stochastic Variance Reduction for Non-convex Finite-Sum Minimization

no code implementations · 3 Nov 2022 · Ali Kavis, Stratis Skoulakis, Kimon Antonakopoulos, Leello Tadesse Dadi, Volkan Cevher

We propose an adaptive variance-reduction method, called AdaSpider, for minimization of $L$-smooth, non-convex functions with a finite-sum structure.
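
For context, a sketch of the SPIDER-style recursive estimator that AdaSpider builds on: $v_t = \nabla f_{i}(x_t) - \nabla f_{i}(x_{t-1}) + v_{t-1}$ with periodic full-gradient refreshes. The AdaGrad-like step size below is a placeholder, not AdaSpider's actual parameter-free rule.

    import numpy as np

    def adaspider_sketch(grad_i, n, x0, epochs=10, inner=50, rng=None):
        """SPIDER variance reduction with a placeholder adaptive step size."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float)
        acc = 0.0  # running sum of squared estimator norms
        for _ in range(epochs):
            v = np.mean([grad_i(x, i) for i in range(n)], axis=0)  # full refresh
            for _ in range(inner):
                acc += np.dot(v, v)
                x_prev, x = x, x - v / np.sqrt(1.0 + acc)  # adaptive step
                i = int(rng.integers(n))
                v = grad_i(x, i) - grad_i(x_prev, i) + v   # recursive update
        return x

    # Toy finite sum: f_i(x) = 0.5 * ||x - a_i||^2, so grad_i(x, i) = x - a_i.
    a = np.random.default_rng(0).standard_normal((20, 3))
    x_hat = adaspider_sketch(lambda x, i: x - a[i], n=20, x0=np.zeros(3))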

Extra-Newton: A First Approach to Noise-Adaptive Accelerated Second-Order Methods

no code implementations · 3 Nov 2022 · Kimon Antonakopoulos, Ali Kavis, Volkan Cevher

This work proposes a universal and adaptive second-order method for minimizing second-order smooth, convex functions.

High Probability Bounds for a Class of Nonconvex Algorithms with AdaGrad Stepsize

no code implementations · ICLR 2022 · Ali Kavis, Kfir Yehuda Levy, Volkan Cevher

We present our analysis in a modular way and obtain a complementary $\mathcal{O}(1/T)$ convergence rate in the deterministic setting.
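
The scalar AdaGrad ("AdaGrad-norm") stepsize this analysis targets is easy to state: $\eta_t = \eta / \sqrt{b_0^2 + \sum_{s \le t} \|g_s\|^2}$. A minimal sketch on a toy deterministic quadratic; the constants eta and b0 are illustrative.

    import numpy as np

    def adagrad_norm(grad, x0, eta=1.0, b0=1e-6, n_steps=1000):
        """Gradient descent with eta_t = eta / sqrt(b0^2 + sum_s ||g_s||^2)."""
        x = np.asarray(x0, dtype=float)
        acc = b0 ** 2
        for _ in range(n_steps):
            g = grad(x)
            acc += np.dot(g, g)
            x = x - (eta / np.sqrt(acc)) * g
        return x

    # Toy deterministic problem: minimize 0.5 * ||x||^2, where grad(x) = x.
    x_hat = adagrad_norm(lambda x: x, x0=np.ones(3))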

Sifting through the noise: Universal first-order methods for stochastic variational inequalities

no code implementations · NeurIPS 2021 · Kimon Antonakopoulos, Thomas Pethick, Ali Kavis, Panayotis Mertikopoulos, Volkan Cevher

Our first result is that the algorithm achieves the optimal rates of convergence for cocoercive problems when the profile of the randomness is known to the optimizer: $\mathcal{O}(1/\sqrt{T})$ for absolute noise profiles, and $\mathcal{O}(1/T)$ for relative ones.

STORM+: Fully Adaptive SGD with Recursive Momentum for Nonconvex Optimization

no code implementations · NeurIPS 2021 · Kfir Levy, Ali Kavis, Volkan Cevher

In this work we propose $\rm{STORM}^{+}$, a new method that is completely parameter-free, does not require large batch sizes, and obtains the optimal $O(1/T^{1/3})$ rate for finding an approximate stationary point.
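
A sketch of the recursive-momentum (STORM) estimator that $\rm{STORM}^{+}$ builds on, $d_t = \nabla f(x_t; \xi_t) + (1-a)\,(d_{t-1} - \nabla f(x_{t-1}; \xi_t))$; the point of $\rm{STORM}^{+}$ is to set the momentum $a$ and the stepsize adaptively, so the fixed constants below are placeholders.

    import numpy as np

    def storm_sketch(stoch_grad, x0, eta=0.1, a=0.1, n_steps=1000, rng=None):
        """SGD with a STORM-style recursive momentum estimator; the same
        sample xi is evaluated at the current and previous iterates."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float)
        d = stoch_grad(x, int(rng.integers(100)))
        for _ in range(n_steps):
            x_prev, x = x, x - eta * d
            xi = int(rng.integers(100))
            d = stoch_grad(x, xi) + (1 - a) * (d - stoch_grad(x_prev, xi))
        return x

    # Toy stochastic quadratic: grad f(x; xi) = x - a_xi over 100 components.
    comps = np.random.default_rng(0).standard_normal((100, 3))
    x_hat = storm_sketch(lambda x, xi: x - comps[xi], x0=np.zeros(3))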

STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization

no code implementations · 1 Nov 2021 · Kfir Y. Levy, Ali Kavis, Volkan Cevher

In this work we propose STORM+, a new method that is completely parameter-free, does not require large batch sizes, and obtains the optimal $O(1/T^{1/3})$ rate for finding an approximate stationary point.

On the Almost Sure Convergence of Stochastic Gradient Descent in Non-Convex Problems

no code implementations · NeurIPS 2020 · Panayotis Mertikopoulos, Nadav Hallak, Ali Kavis, Volkan Cevher

This paper analyzes the trajectories of stochastic gradient descent (SGD) to help understand the algorithm's convergence properties in non-convex problems.

UniXGrad: A Universal, Adaptive Algorithm with Optimal Guarantees for Constrained Optimization

no code implementations · NeurIPS 2019 · Ali Kavis, Kfir Y. Levy, Francis Bach, Volkan Cevher

To the best of our knowledge, this is the first adaptive, unified algorithm that achieves the optimal rates in the constrained setting.

Efficient learning of smooth probability functions from Bernoulli tests with guarantees

no code implementations · 11 Dec 2018 · Paul Rolland, Ali Kavis, Alex Immer, Adish Singla, Volkan Cevher

We study the fundamental problem of learning an unknown, smooth probability function via pointwise Bernoulli tests.
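
To make the problem setup concrete (this is plain kernel smoothing, not the paper's estimator or its guarantees): each test at a point $x_i$ returns $y_i \sim \mathrm{Bernoulli}(p(x_i))$, and smoothness of $p$ lets nearby tests be pooled.

    import numpy as np

    def smooth_bernoulli_estimate(xs, ys, grid, bandwidth=0.1):
        """Nadaraya-Watson estimate of p on a grid from Bernoulli outcomes,
        weighting each test with a Gaussian kernel in the query distance."""
        w = np.exp(-0.5 * ((grid[:, None] - xs[None, :]) / bandwidth) ** 2)
        return (w @ ys) / np.clip(w.sum(axis=1), 1e-12, None)

    # Toy data: p(x) = sigmoid(4x - 2) on [0, 1], 500 pointwise tests.
    rng = np.random.default_rng(0)
    xs = rng.uniform(0.0, 1.0, 500)
    ys = rng.binomial(1, 1.0 / (1.0 + np.exp(-(4.0 * xs - 2.0))))
    p_hat = smooth_bernoulli_estimate(xs, ys, np.linspace(0.0, 1.0, 50))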

Mirrored Langevin Dynamics

no code implementations · NeurIPS 2018 · Ya-Ping Hsieh, Ali Kavis, Paul Rolland, Volkan Cevher

We consider the problem of sampling from constrained distributions, which has posed significant challenges to both non-asymptotic analysis and algorithmic design.
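
A minimal 1-D illustration of the change-of-variables idea (a log reparameterization on $x > 0$, not the paper's general mirror-map construction): run ULA on the unconstrained pushforward and map iterates back, so every sample respects the constraint.

    import numpy as np

    def constrained_ula_positive(V_prime, y0=0.0, step=1e-2, n_steps=20000, rng=None):
        """Sample pi(x) ~ exp(-V(x)) on x > 0 via y = log x: the pushforward
        density is q(y) ~ exp(-V(e^y) + y), so grad log q(y) = -V'(e^y)*e^y + 1."""
        rng = np.random.default_rng() if rng is None else rng
        y, xs = float(y0), []
        for _ in range(n_steps):
            drift = -V_prime(np.exp(y)) * np.exp(y) + 1.0
            y += step * drift + np.sqrt(2.0 * step) * rng.standard_normal()
            xs.append(np.exp(y))  # map back to the constrained domain
        return np.array(xs)

    # Toy target: Gamma(2, 1), i.e. V(x) = x - log x and V'(x) = 1 - 1/x.
    samples = constrained_ula_positive(lambda x: 1.0 - 1.0 / x)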
