no code implementations • 5 Mar 2024 • Trang H. Tran, Quoc Tran-Dinh, Lam M. Nguyen
The Stochastic Gradient Descent method (SGD) and its stochastic variants have become methods of choice for solving finite-sum optimization problems arising from machine learning and data science thanks to their ability to handle large-scale applications and big datasets.
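To fix notation, here is a minimal SGD sketch on a synthetic finite-sum least-squares instance (all data and step sizes below are illustrative, not taken from the paper):

```python
import numpy as np

# Minimal SGD sketch for the finite-sum problem
#   min_x (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2
# on synthetic least-squares data (illustrative instance only).
rng = np.random.default_rng(0)
n, d = 1000, 20
A, x_true = rng.normal(size=(n, d)), rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=n)

x, lr = np.zeros(d), 0.1
for t in range(5000):
    i = rng.integers(n)                      # sample one component uniformly
    grad_i = (A[i] @ x - b[i]) * A[i]        # stochastic gradient of f_i
    x -= lr / (1 + t) ** 0.5 * grad_i        # diminishing step size
```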
no code implementations • 30 Mar 2023 • Quoc Tran-Dinh
The extragradient method (EG), introduced by G. M. Korpelevich in 1976, is a well-known scheme for approximating solutions of saddle-point problems and their extensions, such as variational inequalities and monotone inclusions.
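As a concrete illustration, the EG update on a toy bilinear saddle point $\min_x \max_y x^\top M y$ takes an extrapolation step followed by an update using the operator evaluated at the midpoint (a standard textbook instance, not the paper's setting):

```python
import numpy as np

# Extragradient sketch for the bilinear saddle point min_x max_y x^T M y,
# whose operator is F(x, y) = (M y, -M^T x). Illustrative instance only.
rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5))
x, y, eta = rng.normal(size=5), rng.normal(size=5), 0.1

def F(x, y):
    return M @ y, -M.T @ x

for _ in range(2000):
    gx, gy = F(x, y)
    xh, yh = x - eta * gx, y - eta * gy      # extrapolation (half) step
    gxh, gyh = F(xh, yh)
    x, y = x - eta * gxh, y - eta * gyh      # update with operator at midpoint
# (x, y) approaches the saddle point (0, 0) for this instance.
```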
no code implementations • 8 Feb 2023 • Quoc Tran-Dinh
We develop two "Nesterov's accelerated" variants of the well-known extragradient method to approximate a solution of a co-hypomonotone inclusion constituted by the sum of two operators, where one is Lipschitz continuous and the other is possibly multivalued.
no code implementations • 8 Jan 2023 • Quoc Tran-Dinh
In this paper, we develop two new randomized block-coordinate optimistic gradient algorithms to approximate a solution of large-scale nonlinear equations, also known as root-finding problems.
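A generic sketch of an optimistic gradient step combined with a randomized block-coordinate update, on a toy linear root-finding problem, might look as follows (this is only meant to fix ideas; it is not the paper's exact algorithm):

```python
import numpy as np

# Sketch of an optimistic gradient step with a randomized block-coordinate
# update for the root-finding problem F(z) = 0 (a generic scheme to fix
# ideas, not the paper's exact algorithm). Here F(z) = A z is monotone.
rng = np.random.default_rng(2)
dim = 8
Q = rng.normal(size=(dim, dim))
A = Q @ Q.T / dim + np.eye(dim)              # positive definite => F monotone
F = lambda z: A @ z

blocks = [np.arange(0, 4), np.arange(4, 8)]
z, Fz_prev, eta = rng.normal(size=dim), np.zeros(dim), 0.1
for _ in range(5000):
    Fz = F(z)
    blk = blocks[rng.integers(len(blocks))]  # sample one coordinate block
    # optimistic correction 2F(z_k) - F(z_{k-1}), applied on that block only
    z[blk] -= eta * (2 * Fz[blk] - Fz_prev[blk])
    Fz_prev = Fz
# z approaches 0, the unique root of F.
```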
no code implementations • 19 Dec 2022 • Quoc Tran-Dinh, Marten van Dijk
In this book chapter, we briefly describe the main components that constitute the gradient descent method and its accelerated and stochastic variants.
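For concreteness, here is a minimal sketch of Nesterov's accelerated gradient method on a convex quadratic, an illustrative instance of the accelerated variants the chapter surveys:

```python
import numpy as np

# Nesterov's accelerated gradient method on the convex quadratic
# f(x) = 0.5 x^T A x - b^T x (illustrative instance only).
rng = np.random.default_rng(3)
Q = rng.normal(size=(10, 10))
A = Q @ Q.T + np.eye(10)
b = rng.normal(size=10)
L = np.linalg.eigvalsh(A).max()              # Lipschitz constant of the gradient

grad = lambda x: A @ x - b
x = y = np.zeros(10)
t = 1.0
for _ in range(200):
    x_new = y - grad(y) / L                  # gradient step at the lookahead point
    t_new = (1 + (1 + 4 * t * t) ** 0.5) / 2
    y = x_new + (t - 1) / t_new * (x_new - x)  # Nesterov momentum
    x, t = x_new, t_new
```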
no code implementations • 15 Oct 2021 • Quoc Tran-Dinh, Yang Luo
In this paper, we develop a new class of accelerated algorithms to solve some classes of maximally monotone equations as well as monotone inclusions.
1 code implementation • 5 Mar 2021 • Quoc Tran-Dinh, Nhan H. Pham, Dzung T. Phan, Lam M. Nguyen
These new algorithms can handle statistical and system heterogeneity, which are the two main challenges in federated learning, while achieving the best known communication complexity.
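For background, a minimal federated-averaging-style loop is sketched below; the paper's algorithms are more sophisticated and treat heterogeneity differently, so this is only a generic template with illustrative hyperparameters:

```python
import numpy as np

# Generic federated-averaging-style loop: clients run local SGD steps on
# heterogeneous local objectives, and the server aggregates the results.
# (A template to fix ideas only; not the paper's algorithm.)
rng = np.random.default_rng(4)
num_clients, d = 5, 10
# heterogeneous local least-squares objectives
data = [(rng.normal(size=(50, d)), rng.normal(size=50)) for _ in range(num_clients)]

w = np.zeros(d)
for round_ in range(100):
    updates = []
    for A, b in data:                        # each client runs local SGD steps
        w_loc = w.copy()
        for _ in range(10):
            i = rng.integers(len(b))
            w_loc -= 0.01 * (A[i] @ w_loc - b[i]) * A[i]
        updates.append(w_loc)
    w = np.mean(updates, axis=0)             # server averages the local models
```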
no code implementations • 24 Nov 2020 • Trang H. Tran, Lam M. Nguyen, Quoc Tran-Dinh
When the shuffling strategy is fixed, we develop another new algorithm that is similar to existing momentum methods, and prove the same convergence rates for this algorithm under the $L$-smoothness and bounded gradient assumptions.
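A minimal sketch of shuffling-based SGD with a momentum term is given below (the hyperparameters and decay schedule are illustrative, not the paper's):

```python
import numpy as np

# Shuffling-based SGD with momentum: each epoch visits all n components
# once in a (re)shuffled order. Illustrative instance and hyperparameters.
rng = np.random.default_rng(5)
n, d = 200, 10
A, b = rng.normal(size=(n, d)), rng.normal(size=n)

x, m, lr, beta = np.zeros(d), np.zeros(d), 0.05, 0.5
for epoch in range(50):
    perm = rng.permutation(n)                # fresh shuffle each epoch
    for i in perm:
        g = (A[i] @ x - b[i]) * A[i]
        m = beta * m + (1 - beta) * g        # momentum on shuffled gradients
        x -= lr * m
    lr *= 0.95                               # decaying learning rate
```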
no code implementations • 20 Nov 2020 • Matilde Gargiani, Andrea Zanelli, Quoc Tran-Dinh, Moritz Diehl, Frank Hutter
In this work, we present a first-order stochastic algorithm based on a combination of homotopy methods and SGD, called Homotopy-Stochastic Gradient Descent (H-SGD), which reveals interesting connections with several heuristics proposed in the literature, e.g., optimization by Gaussian continuation, training by diffusion, and mollifying networks.
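A generic homotopy-plus-SGD loop might look as follows, running SGD on a progressively less-smoothed surrogate (the Huber-type smoothing and the schedule here are illustrative assumptions, not the paper's exact H-SGD):

```python
import numpy as np

# Generic homotopy loop: run SGD on a smoothed surrogate while gradually
# reducing the smoothing level. Surrogate: Huber-type smoothing of
# |a_i^T x - b_i| with parameter mu (illustrative, not the paper's schedule).
rng = np.random.default_rng(6)
n, d = 500, 10
A, b = rng.normal(size=(n, d)), rng.normal(size=n)

def smoothed_grad(x, i, mu):
    r = A[i] @ x - b[i]
    # gradient of the Huber smoothing of |r| with parameter mu
    return (r / mu if abs(r) <= mu else np.sign(r)) * A[i]

x, lr = np.zeros(d), 0.05
for mu in [1.0, 0.3, 0.1, 0.03, 0.01]:       # homotopy: tighten the smoothing
    for _ in range(2000):                    # inner SGD phase at level mu
        i = rng.integers(n)
        x -= lr * smoothed_grad(x, i, mu)
```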
no code implementations • 27 Oct 2020 • Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen
We consider big data analysis where training data is distributed among local data sets in a heterogeneous way, and we wish to move SGD computations to the local compute nodes where the data resides.
no code implementations • 20 Aug 2020 • Deyi Liu, Lam M. Nguyen, Quoc Tran-Dinh
In this note we propose a new variant of the hybrid variance-reduced proximal gradient method in [7] to solve a common stochastic composite nonconvex optimization problem under standard assumptions.
no code implementations • 17 Jul 2020 • Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen
The feasibility of federated learning is highly constrained by the server-clients infrastructure in terms of network communication.
no code implementations • NeurIPS 2020 • Quoc Tran-Dinh, Deyi Liu, Lam M. Nguyen
This problem class has several computational challenges due to its nonsmoothness, nonconvexity, nonlinearity, and non-separability of the objective functions.
no code implementations • 3 Mar 2020 • Quoc Tran-Dinh, Deyi Liu
We develop a novel unified randomized block-coordinate primal-dual algorithm to solve a class of nonsmooth constrained convex optimization problems, which covers different existing variants and model settings from the literature.
1 code implementation • 1 Mar 2020 • Nhan H. Pham, Lam M. Nguyen, Dzung T. Phan, Phuong Ha Nguyen, Marten van Dijk, Quoc Tran-Dinh
We propose a novel hybrid stochastic policy gradient estimator for policy optimization by combining an unbiased policy gradient estimator, the REINFORCE estimator, with a biased one, an adapted SARAH estimator.
no code implementations • 19 Feb 2020 • Lam M. Nguyen, Quoc Tran-Dinh, Dzung T. Phan, Phuong Ha Nguyen, Marten van Dijk
We also study uniformly randomized shuffling variants with different learning rates and model assumptions.
1 code implementation • ICML 2020 • Quoc Tran-Dinh, Nhan H. Pham, Lam M. Nguyen
In the expectation case, we establish $\mathcal{O}(\varepsilon^{-2})$ iteration-complexity to achieve a stationary point in expectation and estimate the total number of stochastic oracle calls for both function value and its Jacobian, where $\varepsilon$ is a desired accuracy.
1 code implementation • 17 Feb 2020 • Deyi Liu, Volkan Cevher, Quoc Tran-Dinh
We demonstrate how to scalably solve a class of constrained self-concordant minimization problems using linear minimization oracles (LMO) over the constraint set.
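For illustration, here is a basic Frank-Wolfe loop that touches the constraint set only through an LMO, shown over an $\ell_1$ ball (the paper targets self-concordant objectives and more general sets):

```python
import numpy as np

# Frank-Wolfe sketch: each step calls only a linear minimization oracle
# (LMO) over the constraint set, here the l1 ball of radius tau.
rng = np.random.default_rng(7)
A, b, tau = rng.normal(size=(50, 20)), rng.normal(size=50), 2.0

grad = lambda x: A.T @ (A @ x - b)           # gradient of 0.5||Ax - b||^2

def lmo(g):
    # argmin_{||s||_1 <= tau} <g, s>: a signed vertex of the l1 ball
    s = np.zeros_like(g)
    j = np.argmax(np.abs(g))
    s[j] = -tau * np.sign(g[j])
    return s

x = np.zeros(20)
for k in range(500):
    s = lmo(grad(x))
    gamma = 2.0 / (k + 2)                    # standard step-size schedule
    x = (1 - gamma) * x + gamma * s          # convex combination stays feasible
```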
no code implementations • 8 Jul 2019 • Quoc Tran-Dinh, Nhan H. Pham, Dzung T. Phan, Lam M. Nguyen
We introduce a new approach to develop stochastic optimization algorithms for a class of stochastic composite and possibly nonconvex optimization problems.
no code implementations • 15 May 2019 • Quoc Tran-Dinh, Nhan H. Pham, Dzung T. Phan, Lam M. Nguyen
We introduce a hybrid stochastic estimator to design stochastic gradient algorithms for solving stochastic optimization problems.
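One common form of such a hybrid estimator convexly combines a SARAH-style recursive (biased) term with a plain unbiased stochastic gradient; the sketch below uses illustrative constants and a simplified initialization (see the paper for the precise weights):

```python
import numpy as np

# Hybrid stochastic gradient estimator sketch:
#   v_t = beta * (v_{t-1} + g_i(x_t) - g_i(x_{t-1})) + (1 - beta) * g_j(x_t),
# i.e., a convex combination of a SARAH-style recursive term and an
# unbiased SGD gradient. Constants and initialization are illustrative.
rng = np.random.default_rng(8)
n, d = 300, 10
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
g = lambda x, i: (A[i] @ x - b[i]) * A[i]    # component gradient

x_prev = np.zeros(d)
x = x_prev.copy()
v, lr, beta = np.zeros(d), 0.02, 0.9
for t in range(10000):
    i, j = rng.integers(n), rng.integers(n)
    v = beta * (v + g(x, i) - g(x_prev, i)) + (1 - beta) * g(x, j)
    x_prev, x = x, x - lr * v
```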
1 code implementation • 13 Mar 2019 • Quoc Tran-Dinh, Yuzixuan Zhu
By adapting the parameters, we can obtain up to an $o(\frac{1}{k})$ convergence rate on the primal objective residuals in the nonergodic sense.
Optimization and Control 90C25, 90-08
1 code implementation • 15 Feb 2019 • Nhan H. Pham, Lam M. Nguyen, Dzung T. Phan, Quoc Tran-Dinh
We also specialize the algorithm to the non-composite case, which covers existing state-of-the-art methods in terms of complexity bounds.
no code implementations • 11 Jan 2018 • Dirk A. Lorenz, Quoc Tran-Dinh
This, in turn, proves the convergence of the method with the new adaptive stepsize rule.
no code implementations • NeurIPS 2017 • Ahmet Alacaoglu, Quoc Tran-Dinh, Olivier Fercoq, Volkan Cevher
We propose a new randomized coordinate descent method for a convex optimization template with broad applications.
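For context, a basic randomized coordinate descent loop on a convex quadratic, with coordinate-wise step sizes, reads as follows (illustrative instance only):

```python
import numpy as np

# Randomized coordinate descent on f(x) = 0.5 x^T A x - b^T x: each step
# updates one coordinate using its coordinate-wise Lipschitz constant.
rng = np.random.default_rng(9)
Q = rng.normal(size=(15, 15))
A = Q @ Q.T + np.eye(15)
b = rng.normal(size=15)
Li = np.diag(A)                              # coordinate-wise Lipschitz constants

x = np.zeros(15)
for _ in range(20000):
    j = rng.integers(15)                     # sample a coordinate uniformly
    gj = A[j] @ x - b[j]                     # partial derivative at coordinate j
    x[j] -= gj / Li[j]
```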
1 code implementation • 24 Oct 2017 • Yuzixuan Zhu, Gabor Pataki, Quoc Tran-Dinh
We introduce Sieve-SDP, a simple algorithm to preprocess semidefinite programs (SDPs).
Optimization and Control 90-08, 90C22 (Primary), 90C25, 90C06 (Secondary)
no code implementations • 14 Mar 2017 • Tianxiao Sun, Quoc Tran-Dinh
We also achieve both global and local convergence without additional assumptions.
no code implementations • 10 Jun 2016 • Quoc Tran-Dinh
In this paper, we develop a variant of the well-known Gauss-Newton (GN) method to solve a class of nonconvex optimization problems involving low-rank matrix variables.
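As background, the classical GN template linearizes the residual and solves a least-squares subproblem at each step; the paper's low-rank variant differs, so the sketch below is only the textbook scheme on a toy fitting problem:

```python
import numpy as np

# Classical Gauss-Newton template for min_x 0.5||r(x)||^2: linearize the
# residual r and solve a least-squares subproblem for the step.
def gauss_newton(r, J, x0, iters=20):
    x = x0.copy()
    for _ in range(iters):
        d, *_ = np.linalg.lstsq(J(x), -r(x), rcond=None)  # GN direction
        x = x + d
    return x

# toy instance: fit y = exp(c * t) in the scalar parameter c
t = np.linspace(0.0, 1.0, 30)
y = np.exp(1.5 * t)
r = lambda c: np.exp(c[0] * t) - y                   # residual vector
J = lambda c: (t * np.exp(c[0] * t)).reshape(-1, 1)  # Jacobian of r
c_hat = gauss_newton(r, J, np.array([0.0]))          # converges to c ~ 1.5
```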
no code implementations • 21 Mar 2016 • Anastasios Kyrillidis, Bubacarr Bah, Rouzbeh Hasheminezhad, Quoc Tran-Dinh, Luca Baldassarre, Volkan Cevher
Our experimental findings on synthetic and real applications support our claims of faster recovery in the convex setting when sparse sensing matrices are used instead of dense ones, while showing competitive recovery performance.
no code implementations • 5 Mar 2016 • Quoc Tran-Dinh, Anastasios Kyrillidis, Volkan Cevher
First, it allows handling non-smooth objectives via proximal operators; this avoids lifting the problem dimension in order to accommodate non-smooth components in optimization.
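For example, an $\ell_1$ regularizer is handled through its proximal operator (soft-thresholding) inside a proximal-gradient step, without lifting the dimension (a standard ISTA sketch on illustrative data):

```python
import numpy as np

# Handling a non-smooth term via its proximal operator: for g(x) = lam*||x||_1
# the prox is soft-thresholding, and one proximal-gradient step reads
#   x+ = prox_{lr*g}(x - lr * grad_f(x)).
rng = np.random.default_rng(10)
A, b, lam = rng.normal(size=(40, 60)), rng.normal(size=40), 0.1

soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
grad_f = lambda x: A.T @ (A @ x - b)
lr = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L for f = 0.5||Ax - b||^2

x = np.zeros(60)
for _ in range(500):
    x = soft(x - lr * grad_f(x), lr * lam)   # proximal gradient (ISTA) step
```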
no code implementations • 1 Sep 2015 • Quoc Tran-Dinh
We propose an adaptive smoothing algorithm based on Nesterov's smoothing technique (Nesterov, 2005) for solving "fully" nonsmooth composite convex optimization problems.
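For context, Nesterov's smoothing replaces a max-form nonsmooth function $f(x) = \max_{u \in U}\{\langle Ax, u\rangle - \phi(u)\}$ by $f_\mu(x) = \max_{u \in U}\{\langle Ax, u\rangle - \phi(u) - \mu d(u)\}$ for a strongly convex prox-function $d$; the smoothed $f_\mu$ has Lipschitz gradient with constant $\|A\|^2/(\mu \sigma_d)$ and satisfies $f_\mu(x) \le f(x) \le f_\mu(x) + \mu D_U$, so an adaptive scheme tunes $\mu$ to trade smoothness against approximation accuracy.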
no code implementations • 20 Jul 2015 • Anastasios Kyrillidis, Luca Baldassarre, Marwa El-Halabi, Quoc Tran-Dinh, Volkan Cevher
For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems and then describe convex relaxations.
no code implementations • 14 Jul 2015 • Quoc Tran-Dinh, Volkan Cevher
We propose two new alternating direction methods to solve "fully" nonsmooth constrained convex problems.
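As background, a standard alternating direction (ADMM) template on a toy $\ell_1$-regularized least-squares split is sketched below; the paper's two methods address more general "fully" nonsmooth constrained problems:

```python
import numpy as np

# Generic ADMM sketch for min_x 0.5||Ax - b||^2 + lam*||z||_1  s.t.  x = z
# (a standard alternating direction template, shown only to fix ideas).
rng = np.random.default_rng(11)
A, b, lam, rho = rng.normal(size=(40, 30)), rng.normal(size=40), 0.1, 1.0

AtA, Atb = A.T @ A, A.T @ b
x = z = u = np.zeros(30)
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
for _ in range(300):
    x = np.linalg.solve(AtA + rho * np.eye(30), Atb + rho * (z - u))  # x-update
    z = soft(x + u, lam / rho)                                        # z-update
    u = u + x - z                                                     # dual update
```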
no code implementations • 4 Feb 2015 • Quoc Tran-Dinh, Yen-Huan Li, Volkan Cevher
The self-concordant-like property of a smooth convex function is a new analytical structure that generalizes the self-concordant notion.
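For reference, a convex function $f$ is standard self-concordant when $|\varphi'''(t)| \le 2\varphi''(t)^{3/2}$ along every line $\varphi(t) := f(x + tv)$; the self-concordant-like condition instead bounds $|\varphi'''(t)|$ by a multiple of $\varphi''(t)$ scaled with $\|v\|$ (so that, e.g., logistic-type losses qualify), though the exact constant conventions vary across papers.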
no code implementations • NeurIPS 2014 • Quoc Tran-Dinh, Volkan Cevher
We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization.
no code implementations • 20 Jun 2014 • Quoc Tran-Dinh, Volkan Cevher
Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods.
no code implementations • 13 May 2014 • Anastasios Kyrillidis, Rabeeh Karimi Mahabadi, Quoc Tran-Dinh, Volkan Cevher
We consider the class of convex minimization problems composed of a self-concordant function, such as the $\log\det$ metric, a convex data-fidelity term $h(\cdot)$, and a regularizing -- possibly non-smooth -- function $g(\cdot)$.
no code implementations • 13 Aug 2013 • Quoc Tran-Dinh, Anastasios Kyrillidis, Volkan Cevher
We propose a variable metric framework for minimizing the sum of a self-concordant function and a possibly non-smooth convex function, endowed with an easily computable proximal operator.