no code implementations • 29 Mar 2024 • Alireza Aghasi, Saeed Ghadimi
In this paper, we study and analyze zeroth-order stochastic approximation algorithms for solving bilevel problems, where neither the upper/lower-level objective values nor their unbiased gradient estimates are available.
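The bilevel structure aside, the primitive underlying such zeroth-order schemes is a gradient estimate built purely from function values. A minimal sketch of the standard two-point Gaussian-smoothing estimator (the function `f`, smoothing radius `mu`, and single-sample form are illustrative, not the paper's exact scheme):

```python
import numpy as np

def zo_grad(f, x, mu=1e-4, rng=np.random.default_rng(0)):
    """Two-point Gaussian-smoothing gradient estimator: approximates
    the gradient of f at x from function values alone, the basic
    primitive behind zeroth-order stochastic approximation."""
    u = rng.standard_normal(x.shape)        # random search direction
    return (f(x + mu * u) - f(x)) / mu * u  # finite difference along u
```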
no code implementations • 11 Jul 2023 • Xuxing Chen, Krishnakumar Balasubramanian, Saeed Ghadimi
We develop and analyze stochastic approximation algorithms for solving nested compositional bi-level optimization problems.
no code implementations • 21 Apr 2023 • Leila Khalatbari, Yejin Bang, Dan Su, Willy Chung, Saeed Ghadimi, Hossein Sameti, Pascale Fung
Our approach differs from the standard contrastive learning framework in that it automatically obtains positive and negative signals from the safe and unsafe language distributions that have been learned beforehand.
1 code implementation • 20 Feb 2023 • Tesi Xiao, Xuxing Chen, Krishnakumar Balasubramanian, Saeed Ghadimi
We focus on decentralized stochastic non-convex optimization, where $n$ agents work together to optimize a composite objective function that is the sum of a smooth term and a non-smooth convex term.
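As a rough sketch of the template such methods refine, one round of decentralized proximal gradient combines a gossip (mixing) step, a local stochastic gradient step, and a proximal step for the non-smooth term. Here `W`, `prox_g`, and the plain single-mixing update are assumptions, not necessarily the paper's exact algorithm:

```python
import numpy as np

def decentralized_prox_step(X, W, grads, prox_g, lr):
    """One round of a decentralized proximal-gradient scheme (sketch).

    X      : (n, d) array; row i is agent i's current iterate
    W      : (n, n) doubly stochastic mixing (gossip) matrix
    grads  : (n, d) stochastic gradients of each agent's smooth term
    prox_g : proximal operator of the shared non-smooth convex term
    """
    mixed = W @ X                      # consensus step: average with neighbors
    stepped = mixed - lr * grads       # local stochastic gradient step
    return np.stack([prox_g(row, lr) for row in stepped])  # non-smooth term
```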
no code implementations • 22 Jun 2022 • Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi
We study stochastic optimization algorithms for constrained nonconvex stochastic optimization problems with Markovian data.
no code implementations • 26 May 2022 • Alireza Aghasi, MohammadJavad Feizollahi, Saeed Ghadimi
With the significant increase in using robust optimization techniques to train machine learning models, this paper presents a novel robust regression framework that operates by minimizing the uncertainty associated with missing data.
no code implementations • 9 Feb 2022 • Tesi Xiao, Krishnakumar Balasubramanian, Saeed Ghadimi
We propose a projection-free conditional gradient-type algorithm for smooth stochastic multi-level composition optimization, where the objective function is a nested composition of $T$ functions and the constraint set is a closed convex set.
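The projection-free mechanism is the classical conditional-gradient (Frank-Wolfe) step: projections are replaced by a linear minimization oracle over the constraint set. A minimal sketch (`lmo` and `grad_est` are placeholders for the paper's inner-function gradient estimator):

```python
def frank_wolfe_step(x, grad_est, lmo, gamma):
    """One projection-free conditional-gradient (Frank-Wolfe) step.

    lmo(g) returns the minimizer of <g, v> over the closed convex set C,
    so feasibility is kept by convex combination, never by projection.
    """
    v = lmo(grad_est)            # linear minimization oracle call
    return x + gamma * (v - x)   # stays in C for gamma in [0, 1]
```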
no code implementations • 1 Jan 2022 • Warren B Powell, Saeed Ghadimi
The most common approaches for solving multistage stochastic programming problems in the research literature have been to either use value functions ("dynamic programming") or scenario trees ("stochastic programming") to approximate the impact of a decision now on the future.
no code implementations • NeurIPS 2020 • Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra
We next analyze the Stochastic Cubic-Regularized Newton (SCRN) algorithm under interpolation-like conditions, and show that the oracle complexity to reach an $\epsilon$-local-minimizer under such conditions is $O(1/\epsilon^{2.5})$.
no code implementations • 28 Sep 2020 • Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra
We next analyze the Stochastic Cubic-Regularized Newton (SCRN) algorithm under interpolation-like conditions, and show that the oracle complexity to reach an $\epsilon$-local-minimizer under such conditions is $\tilde{\mathcal{O}}(1/\epsilon^{2.5})$.
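For reference, one (deterministic) cubic-regularized Newton step minimizes a cubic model of the objective; in SCRN the gradient $g$ and Hessian $H$ below would be stochastic estimates, and a generic solver stands in for the specialized subproblem solvers used in practice:

```python
import numpy as np
from scipy.optimize import minimize

def cubic_newton_step(x, g, H, M):
    """One cubic-regularized Newton step (sketch): minimize the model
    m(h) = <g, h> + 0.5 h'Hh + (M/6)||h||^3 and move to x + h."""
    def model(h):
        return g @ h + 0.5 * h @ H @ h + (M / 6.0) * np.linalg.norm(h) ** 3
    h_star = minimize(model, np.zeros_like(x)).x   # generic solver (sketch)
    return x + h_star
```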
no code implementations • 24 Aug 2020 • Krishnakumar Balasubramanian, Saeed Ghadimi, Anthony Nguyen
We show that the first algorithm, which generalizes \cite{GhaRuswan20} to the $T$-level case, can achieve a sample complexity of $\mathcal{O}(1/\epsilon^6)$ by using mini-batches of samples in each iteration.
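For intuition, consider the two-level special case $F(x) = f(\mathbb{E}[g(x,\xi)])$: plugging a noisy estimate of the inner expectation into $\nabla f$ is biased, and averaging a mini-batch of inner samples shrinks that bias. A hypothetical sketch (the names `sample_g`, `jac_g`, `grad_f`, and the deterministic Jacobian are illustrative simplifications):

```python
import numpy as np

def composition_grad(x, sample_g, jac_g, grad_f, batch=64):
    """Mini-batch gradient estimate for F(x) = f(E[g(x, xi)]) (sketch).

    Averaging `batch` samples of the inner map before plugging into
    grad_f reduces the bias caused by the nonlinearity of f; multi-level
    methods apply this idea at every level of the nesting.
    """
    g_hat = np.mean([sample_g(x) for _ in range(batch)], axis=0)
    return jac_g(x).T @ grad_f(g_hat)   # chain rule with estimated inner value
```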
no code implementations • 15 Jun 2020 • Tesi Xiao, Krishnakumar Balasubramanian, Saeed Ghadimi
We analyze stochastic conditional gradient methods for constrained optimization problems arising in over-parametrized machine learning.
no code implementations • 31 Jul 2019 • Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra
In this paper, motivated by online reinforcement learning problems, we propose and analyze bandit algorithms for both general and structured nonconvex problems with nonstationary (or dynamic) regret as the performance measure, in both stochastic and non-stochastic settings.
no code implementations • 4 Feb 2019 • Abhishek Roy, Lingqing Shen, Krishnakumar Balasubramanian, Saeed Ghadimi
Our theoretical contributions extend the practical applicability of sampling algorithms to the noisy black-box and high-dimensional settings.
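One concrete instance of that extension: a Langevin Monte Carlo step for sampling from a density proportional to $e^{-f}$, with the exact gradient of $f$ replaced by a zeroth-order estimate. A hypothetical sketch, not necessarily the paper's exact algorithm:

```python
import numpy as np

def zo_langevin_step(f, x, lr, mu=1e-4, rng=np.random.default_rng(0)):
    """One Langevin Monte Carlo step driven by a zeroth-order gradient
    estimate (sketch): black-box sampling from exp(-f) using only
    noisy evaluations of f."""
    u = rng.standard_normal(x.shape)
    g = (f(x + mu * u) - f(x)) / mu * u                 # black-box gradient
    return x - lr * g + np.sqrt(2 * lr) * rng.standard_normal(x.shape)
```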
no code implementations • NeurIPS 2018 • Krishnakumar Balasubramanian, Saeed Ghadimi
In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization, with a focus on constrained optimization, high-dimensional settings, and saddle-point avoidance.
no code implementations • 29 Aug 2015 • Saeed Ghadimi, Guanghui Lan, Hongchao Zhang
In a similar vein, we show that some well-studied techniques for nonlinear programming, e.g., Quasi-Newton iteration, can be embedded into optimal convex optimization algorithms to possibly further enhance their numerical performance.
1 code implementation • 14 Oct 2013 • Saeed Ghadimi, Guanghui Lan
We demonstrate that, by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems using first-order information, matching that of the gradient descent method.
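The AG template couples two sequences through a lookahead point; since the paper's contribution is the stepsize policy, the constant stepsizes below are a stand-in. A minimal sketch:

```python
def ag_method(grad_f, x0, lam, beta, steps):
    """Accelerated gradient (AG) template (sketch, constant stepsizes).

    Two coupled sequences: x takes the 'gradient' step and x_ag the
    'aggregated' step, both from the lookahead point x_md.
    """
    x = x_ag = x0
    for k in range(1, steps + 1):
        alpha = 2.0 / (k + 1)
        x_md = (1 - alpha) * x_ag + alpha * x   # middle (lookahead) point
        g = grad_f(x_md)
        x = x - lam * g                          # step on the x-sequence
        x_ag = x_md - beta * g                   # step on the aggregated sequence
    return x_ag
```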
no code implementations • 22 Sep 2013 • Saeed Ghadimi, Guanghui Lan
In this paper, we introduce a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming (SP) problems.
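The defining feature of RSG is that it runs ordinary stochastic gradient steps but outputs an iterate drawn at random, which is what enables complexity guarantees for nonconvex problems. A sketch with a uniform draw (the paper draws the index with probabilities determined by the stepsizes):

```python
import numpy as np

def rsg(stoch_grad, x0, lr, N, rng=np.random.default_rng(0)):
    """Randomized stochastic gradient (RSG) sketch: run SGD for up to N
    iterations and return the iterate at a randomly drawn index."""
    R = int(rng.integers(N))   # random termination index in {0, ..., N-1}
    x = x0
    for k in range(N):
        if k == R:
            return x           # output the randomly selected iterate
        x = x - lr * stoch_grad(x)
    return x
```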