no code implementations • 28 Feb 2023 • Zijian Liu, Ta Duy Nguyen, Thien Hang Nguyen, Alina Ene, Huy Lê Nguyen
Instead, we show high-probability convergence with bounds depending on the initial distance to the optimal solution.
no code implementations • 29 Sep 2022 • Zijian Liu, Ta Duy Nguyen, Thien Hang Nguyen, Alina Ene, Huy L. Nguyen
There, STORM uses recursive momentum to achieve the variance-reduction (VR) effect; it was later made fully adaptive in STORM+ [Levy et al., '21]. Full adaptivity removes the need to know problem-specific parameters, such as the smoothness of the objective and bounds on the variance and norm of the stochastic gradients, in order to set the step size.
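The recursive-momentum estimator behind STORM can be sketched on a toy problem. The update shown is the standard STORM form, d_t = g(x_t, xi_t) + (1 - a)(d_{t-1} - g(x_{t-1}, xi_t)); the objective, step size, and momentum parameter below are illustrative assumptions, not the tuned or adaptive choices from the papers:

```python
import numpy as np

def storm(grad_fn, x0, steps=500, a=0.1, lr=0.05, seed=0):
    # STORM-style recursive momentum (sketch): the estimator
    #   d_t = g(x_t, xi_t) + (1 - a) * (d_{t-1} - g(x_{t-1}, xi_t))
    # reuses the SAME sample xi_t at the new and old iterates, so the
    # estimation error contracts over time (the variance-reduction effect).
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    d = grad_fn(x, rng.standard_normal(x.shape))
    for _ in range(steps):
        x_next = x - lr * d
        xi = rng.standard_normal(x.shape)  # one fresh sample, used twice
        d = grad_fn(x_next, xi) + (1 - a) * (d - grad_fn(x, xi))
        x = x_next
    return x

# Toy problem: f(x) = 0.5 * ||x||^2 with additive gradient noise.
noisy_grad = lambda x, xi: x + 0.1 * xi
x_final = storm(noisy_grad, [5.0, -3.0])
```

Note that STORM+ additionally sets `a` and `lr` adaptively from observed gradients, which is the full-adaptivity property described above; the fixed constants here are only for illustration.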
no code implementations • 29 Sep 2022 • Zijian Liu, Ta Duy Nguyen, Alina Ene, Huy L. Nguyen
Finally, we give new accelerated adaptive algorithms and their convergence guarantees in the deterministic setting, with explicit dependence on the problem parameters, improving upon the asymptotic rate shown in previous works.
no code implementations • 28 Jan 2022 • Zijian Liu, Ta Duy Nguyen, Alina Ene, Huy L. Nguyen
To address this problem, we propose two novel adaptive VR algorithms: Adaptive Variance Reduced Accelerated Extra-Gradient (AdaVRAE) and Adaptive Variance Reduced Accelerated Gradient (AdaVRAG).
1 code implementation • 15 Jun 2020 • Hongyan Chang, Ta Duy Nguyen, Sasi Kumar Murakonda, Ehsan Kazemi, Reza Shokri
Optimizing prediction accuracy can come at the expense of fairness.
2 code implementations • 9 Mar 2017 • Benjamin Doerr, Huu Phuoc Le, Régis Makhmara, Ta Duy Nguyen
We prove that the $(1+1)$ EA with this heavy-tailed mutation rate optimizes any $\mathrm{Jump}_{m,n}$ function in a time that is only a small polynomial (in $m$) factor above the one stemming from the optimal rate for this $m$.
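The heavy-tailed mutation idea above can be sketched as follows: instead of a fixed mutation rate, the rate is drawn from a power-law distribution each iteration. The exponent `beta = 1.5` and the support `{1, ..., n/2}` below are illustrative assumptions in the spirit of the construction, not the exact parameters from the paper:

```python
import random

def power_law_rate(n, beta=1.5, rng=random):
    # Heavy-tailed mutation rate (sketch): sample alpha in {1, ..., n//2}
    # with P(alpha) proportional to alpha^(-beta), then use rate alpha / n.
    # Small rates stay likely, but large jumps retain polynomial probability.
    support = list(range(1, n // 2 + 1))
    weights = [a ** (-beta) for a in support]
    alpha = rng.choices(support, weights=weights)[0]
    return alpha / n

def mutate(bits, rng=random):
    # Standard-bit-mutation step of a (1+1) EA with a freshly drawn rate:
    # each bit is flipped independently with probability p.
    p = power_law_rate(len(bits), rng=rng)
    return [b ^ (rng.random() < p) for b in bits]
```

Drawing the rate afresh every iteration is what lets a single algorithm handle all gap sizes $m$ at once, at only a small polynomial overhead compared to the rate tuned for a known $m$.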