Search Results for author: Trang H. Tran

Found 9 papers, 2 papers with code

Shuffling Momentum Gradient Algorithm for Convex Optimization

no code implementations • 5 Mar 2024 • Trang H. Tran, Quoc Tran-Dinh, Lam M. Nguyen

The Stochastic Gradient Descent (SGD) method and its variants have become the methods of choice for solving finite-sum optimization problems arising in machine learning and data science, thanks to their ability to handle large-scale applications and big datasets.
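As context for the shuffling-based methods this author works on, here is a minimal sketch of one shuffling SGD epoch on a toy finite-sum problem. The function names and the toy objective are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def shuffling_sgd_epoch(w, grad_i, n, lr, rng):
    """One epoch of shuffling SGD: visit each component gradient
    exactly once, in a fresh random order (without replacement)."""
    for i in rng.permutation(n):
        w = w - lr * grad_i(i, w)
    return w

# Toy finite-sum problem: f(w) = (1/n) * sum_i 0.5 * (w - a_i)^2,
# whose unique minimizer is the mean of the a_i.
rng = np.random.default_rng(0)
a = rng.normal(size=10)
grad_i = lambda i, w: w - a[i]

w = 0.0
for epoch in range(500):
    w = shuffling_sgd_epoch(w, grad_i, len(a), lr=0.01, rng=rng)
```

With a small constant step size, the iterate settles near the minimizer `a.mean()`, up to an order-`lr` bias induced by the per-epoch ordering.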

A Supervised Contrastive Learning Pretrain-Finetune Approach for Time Series

no code implementations • 21 Nov 2023 • Trang H. Tran, Lam M. Nguyen, Kyongmin Yeo, Nam Nguyen, Roman Vaculin

Foundation models have recently gained attention within the field of machine learning thanks to their efficiency in processing broad kinds of data.

Tasks: Contrastive Learning, Time Series

An End-to-End Time Series Model for Simultaneous Imputation and Forecast

no code implementations • 1 Jun 2023 • Trang H. Tran, Lam M. Nguyen, Kyongmin Yeo, Nam Nguyen, Dzung Phan, Roman Vaculin, Jayant Kalagnanam

Time series forecasting using historical data has been an interesting and challenging topic, especially when the data is corrupted by missing values.

Tasks: Imputation, Time Series, +1

Finding Optimal Policy for Queueing Models: New Parameterization

no code implementations • 21 Jun 2022 • Trang H. Tran, Lam M. Nguyen, Katya Scheinberg

In this work, we investigate the optimization aspects of the queueing model as an RL environment and provide insight into how to learn the optimal policy efficiently.

Tasks: Navigate, Reinforcement Learning, +1

Nesterov Accelerated Shuffling Gradient Method for Convex Optimization

1 code implementation • 7 Feb 2022 • Trang H. Tran, Katya Scheinberg, Lam M. Nguyen

This rate is better than that of any other shuffling gradient method in the convex regime.
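The acceleration can be sketched as Nesterov-style extrapolation applied between full passes over a fixed shuffled order ("shuffle once"). This is an assumed structure for illustration, not the paper's exact NASG update:

```python
import numpy as np

def full_pass(y, grad_i, order, lr):
    """One full pass of gradient steps over the components, in a fixed order."""
    for i in order:
        y = y - lr * grad_i(i, y)
    return y

# Toy finite-sum problem: f(w) = (1/n) * sum_i 0.5 * (w - a_i)^2.
rng = np.random.default_rng(2)
a = rng.normal(size=8)
grad_i = lambda i, w: w - a[i]

order = rng.permutation(len(a))            # "shuffle once": fix one order
w_prev = w = 0.0
for k in range(300):
    y = w + (k / (k + 3)) * (w - w_prev)   # Nesterov extrapolation step
    w_prev, w = w, full_pass(y, grad_i, order, lr=0.01)
```

Because the extrapolation vanishes at a fixed point, the scheme converges to the same point as the plain fixed-order pass, close to the minimizer `a.mean()`.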

Finite-Sum Optimization: A New Perspective for Convergence to a Global Solution

no code implementations • 7 Feb 2022 • Lam M. Nguyen, Trang H. Tran, Marten van Dijk

How, and under what assumptions, is guaranteed convergence to a global minimum possible?

New Perspective on the Global Convergence of Finite-Sum Optimization

no code implementations • 29 Sep 2021 • Lam M. Nguyen, Trang H. Tran, Marten van Dijk

How, and under what assumptions, is guaranteed convergence to a global minimum possible?

SMG: A Shuffling Gradient-Based Method with Momentum

no code implementations • 24 Nov 2020 • Trang H. Tran, Lam M. Nguyen, Quoc Tran-Dinh

When the shuffling strategy is fixed, we develop another new algorithm that is similar to existing momentum methods, and prove the same convergence rates for this algorithm under the $L$-smoothness and bounded gradient assumptions.
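A heavy-ball-style sketch of a momentum update applied along a fixed shuffling order is shown below. This is an illustrative approximation of such a momentum method, not the paper's exact algorithm:

```python
import numpy as np

def shuffling_momentum_epoch(w, v, grad_i, order, lr, beta):
    """One epoch of heavy-ball momentum steps taken along a fixed
    shuffling order (illustrative sketch, not the exact SMG update)."""
    for i in order:
        v = beta * v + grad_i(i, w)   # momentum buffer
        w = w - lr * v
    return w, v

# Toy finite-sum problem: f(w) = (1/n) * sum_i 0.5 * (w - a_i)^2.
rng = np.random.default_rng(1)
a = rng.normal(size=8)
grad_i = lambda i, w: w - a[i]

order = rng.permutation(len(a))       # shuffle once, then keep the order fixed
w, v = 0.0, 0.0
for epoch in range(600):
    w, v = shuffling_momentum_epoch(w, v, grad_i, order, lr=0.005, beta=0.5)
```

On this smooth toy problem (bounded gradients near the solution), the iterate settles near the minimizer `a.mean()`, with a small bias governed by the step size and the fixed ordering.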
