ICML 2020 • Paul Rolland, Armin Eftekhari, Ali Kavis, Volkan Cevher
A well-known first-order method for sampling from log-concave probability distributions is the Unadjusted Langevin Algorithm (ULA).
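The ULA update adds Gaussian noise to a gradient step on the log-density: $x_{k+1} = x_k + \eta \nabla \log p(x_k) + \sqrt{2\eta}\,\xi_k$. A minimal sketch on a standard Gaussian target (function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def ula(grad_log_p, x0, step, n_iters, rng):
    """Unadjusted Langevin Algorithm:
    x_{k+1} = x_k + step * grad_log_p(x_k) + sqrt(2*step) * N(0, I)."""
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * noise
        samples.append(x.copy())
    return np.array(samples)

# Target: standard Gaussian, log p(x) = -||x||^2 / 2, so grad log p(x) = -x.
rng = np.random.default_rng(0)
chain = ula(lambda x: -x, x0=np.zeros(2), step=0.1, n_iters=5000, rng=rng)
tail = chain[1000:]  # discard burn-in
print(tail.mean(axis=0), tail.var(axis=0))  # mean near 0, variance near 1
```

Because ULA never applies a Metropolis accept/reject correction, its stationary distribution is only approximately the target; the bias shrinks with the step size.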
3 Nov 2022 • Ali Kavis, Stratis Skoulakis, Kimon Antonakopoulos, Leello Tadesse Dadi, Volkan Cevher
We propose an adaptive variance-reduction method, called AdaSpider, for minimization of $L$-smooth, non-convex functions with a finite-sum structure.
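To illustrate the ingredients, here is a SPIDER-style recursive variance-reduced estimator combined with an AdaGrad-norm stepsize on a toy finite-sum quadratic. This is only a sketch of the two components; the actual AdaSpider stepsize rule differs, and all names are illustrative:

```python
import numpy as np

def spider_adaptive(data, x0, n_epochs=20, eta0=1.0, eps=1e-8, rng=None):
    """SPIDER-style estimator with an AdaGrad-norm stepsize (illustrative only).
    Objective: f(x) = (1/n) * sum_i 0.5 * ||x - a_i||^2, minimized at mean(a_i).
    Note: on this quadratic the recursive estimator is exact."""
    rng = rng or np.random.default_rng(0)
    n = len(data)
    x = np.asarray(x0, dtype=float)
    acc = 0.0  # accumulated squared estimator norms, drives the adaptive step
    for _ in range(n_epochs):
        v = x - data.mean(axis=0)  # full gradient at the start of each epoch
        for _ in range(n):
            acc += np.dot(v, v)
            step = eta0 / (np.sqrt(acc) + eps)
            x_new = x - step * v
            i = rng.integers(n)
            # per-sample gradient: grad f_i(x) = x - a_i
            v = (x_new - data[i]) - (x - data[i]) + v  # recursive SPIDER update
            x = x_new
    return x

data = np.random.default_rng(1).normal(size=(50, 3))
x_star = spider_adaptive(data, x0=np.zeros(3))
print(np.linalg.norm(x_star - data.mean(axis=0)))  # distance to the minimizer
```

The point of the adaptive stepsize is exactly the paper's theme: no knowledge of the smoothness constant $L$ is needed, since the step scales itself by the observed gradient magnitudes.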
3 Nov 2022 • Kimon Antonakopoulos, Ali Kavis, Volkan Cevher
This work proposes a universal and adaptive second-order method for minimizing second-order smooth, convex functions.
ICLR 2022 • Ali Kavis, Kfir Yehuda Levy, Volkan Cevher
We present our analysis in a modular way and obtain a complementary $\mathcal O (1 / T)$ convergence rate in the deterministic setting.
NeurIPS 2021 • Kimon Antonakopoulos, Thomas Pethick, Ali Kavis, Panayotis Mertikopoulos, Volkan Cevher
Our first result is that the algorithm achieves the optimal rates of convergence for cocoercive problems when the profile of the randomness is known to the optimizer: $\mathcal{O}(1/\sqrt{T})$ for absolute noise profiles, and $\mathcal{O}(1/T)$ for relative ones.
NeurIPS 2021 • Kfir Levy, Ali Kavis, Volkan Cevher
In this work we propose $\rm{STORM}^{+}$, a new method that is completely parameter-free, does not require large batch-sizes, and obtains the optimal $O(1/T^{1/3})$ rate for finding an approximate stationary point.
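The core of STORM-type methods is a recursive momentum estimator, $d_t = \nabla f(x_t;\xi_t) + (1-a_t)\big(d_{t-1} - \nabla f(x_{t-1};\xi_t)\big)$, which reduces gradient-noise variance without checkpoints or large batches. The sketch below uses fixed $a$ and learning rate for illustration; the parameter-free aspect of $\rm{STORM}^{+}$ lies precisely in setting these adaptively, which is not reproduced here:

```python
import numpy as np

def storm_sketch(x0, n_iters=2000, lr=0.05, a=0.1, noise=0.5, rng=None):
    """STORM-style recursive momentum estimator on a noisy 1-D quadratic
    f(x) = x^2 / 2 with stochastic gradient x + noise * xi (illustrative)."""
    rng = rng or np.random.default_rng(0)
    x = float(x0)
    d = x + rng.normal() * noise        # initialize with one stochastic gradient
    for _ in range(n_iters):
        x_new = x - lr * d
        xi = rng.normal() * noise       # the SAME sample evaluated at x_new and x
        g_new = x_new + xi
        g_old = x + xi
        d = g_new + (1.0 - a) * (d - g_old)  # STORM recursion
        x = x_new
    return x

x_final = storm_sketch(3.0)
print(abs(x_final))  # near the minimizer 0
```

Reusing the same sample $\xi_t$ at both $x_t$ and $x_{t-1}$ is what makes the correction term nearly unbiased and keeps the estimator's error contracting geometrically.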
NeurIPS 2020 • Panayotis Mertikopoulos, Nadav Hallak, Ali Kavis, Volkan Cevher
This paper analyzes the trajectories of stochastic gradient descent (SGD) to help understand the algorithm's convergence properties in non-convex problems.
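A minimal trajectory experiment of the kind such analyses describe: plain SGD on the non-convex double-well $f(x) = (x^2 - 1)^2$, whose gradient vanishes at the saddle $x = 0$ and the minimizers $x = \pm 1$. All names are illustrative:

```python
import numpy as np

def sgd(grad, x0, lr, n_iters, noise, rng):
    """Plain SGD: x_{k+1} = x_k - lr * (grad(x_k) + noise * xi_k)."""
    x = float(x0)
    traj = [x]
    for _ in range(n_iters):
        g = grad(x) + noise * rng.normal()
        x -= lr * g
        traj.append(x)
    return np.array(traj)

# Non-convex objective f(x) = (x^2 - 1)^2, grad f(x) = 4x(x^2 - 1)
grad = lambda x: 4.0 * x * (x * x - 1.0)
traj = sgd(grad, x0=0.1, lr=0.01, n_iters=5000, noise=0.1,
           rng=np.random.default_rng(0))
print(traj[-1])  # settles near one of the minimizers +/- 1, not the saddle at 0
```

The trajectory drifts away from the unstable critical point at the origin and concentrates near a minimizer, the qualitative behavior that trajectory-based convergence results formalize.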
NeurIPS 2019 • Ali Kavis, Kfir Y. Levy, Francis Bach, Volkan Cevher
To the best of our knowledge, this is the first adaptive, unified algorithm that achieves the optimal rates in the constrained setting.
11 Dec 2018 • Paul Rolland, Ali Kavis, Alex Immer, Adish Singla, Volkan Cevher
We study the fundamental problem of learning an unknown, smooth probability function via pointwise Bernoulli tests.
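In this setting, each query at a point $x$ returns a single Bernoulli outcome with success probability $p(x)$, and smoothness of $p$ lets nearby outcomes be pooled. As a simple baseline (not the paper's method), a kernel-smoothed Nadaraya-Watson estimate recovers $p$ from such pointwise tests; names and the target function are illustrative:

```python
import numpy as np

def kernel_estimate(x_query, xs, ys, bandwidth=0.1):
    """Nadaraya-Watson estimate of p(x_query) from Bernoulli outcomes ys
    observed at query points xs. A simple smoothing baseline."""
    w = np.exp(-0.5 * ((xs - x_query) / bandwidth) ** 2)
    return np.sum(w * ys) / np.sum(w)

rng = np.random.default_rng(0)
p = lambda x: 0.5 + 0.4 * np.sin(2 * np.pi * x)   # smooth probability function
xs = rng.uniform(0.0, 1.0, size=5000)
ys = (rng.uniform(size=xs.shape) < p(xs)).astype(float)  # pointwise Bernoulli tests
est = kernel_estimate(0.25, xs, ys, bandwidth=0.05)
print(est, p(0.25))  # estimate vs. true value 0.9
```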
NeurIPS 2018 • Ya-Ping Hsieh, Ali Kavis, Paul Rolland, Volkan Cevher
We consider the problem of sampling from constrained distributions, which has posed significant challenges to both non-asymptotic analysis and algorithmic design.
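One standard baseline that makes the difficulty concrete is projected Langevin: take an unconstrained Langevin step, then project back onto the constraint set. The projection distorts the dynamics near the boundary, which is part of what makes non-asymptotic analysis hard. This sketch (illustrative names, not the paper's algorithm) samples from a Gaussian truncated to $[-1, 1]$:

```python
import numpy as np

def projected_ula(grad_log_p, project, x0, step, n_iters, rng):
    """Projected ULA: Langevin step followed by projection onto the
    constraint set. A standard baseline for constrained sampling."""
    x = float(x0)
    samples = np.empty(n_iters)
    for k in range(n_iters):
        x = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * rng.normal()
        x = project(x)  # enforce the constraint after every step
        samples[k] = x
    return samples

# Target: standard Gaussian truncated to [-1, 1]
rng = np.random.default_rng(0)
s = projected_ula(lambda x: -x, lambda x: np.clip(x, -1.0, 1.0),
                  x0=0.0, step=0.01, n_iters=20000, rng=rng)
print(s[5000:].mean())  # symmetric target, so the mean should be near 0
```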