Search Results for author: Dzung T. Phan

Found 14 papers, 4 papers with code

Decentralized Collaborative Learning Framework with External Privacy Leakage Analysis

no code implementations • 1 Apr 2024 • Tsuyoshi Idé, Dzung T. Phan, Rudy Raymond

This paper presents two methodological advancements in decentralized multi-task learning under privacy constraints, aiming to pave the way for future developments in next-generation Blockchain platforms.

Anomaly Detection • Dictionary Learning +1

Cardinality-Regularized Hawkes-Granger Model

no code implementations • NeurIPS 2021 • Tsuyoshi Idé, Georgios Kollias, Dzung T. Phan, Naoki Abe

In this paper, we propose a mathematically well-defined sparse causal learning framework based on a cardinality-regularized Hawkes process, which remedies the pathological issues of existing approaches.

Management • Point Processes
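
A multivariate Hawkes process encodes Granger-causal structure in its triggering matrix, and the cardinality regularization in the title refers to penalizing the number of nonzero triggering coefficients rather than their magnitude. A minimal sketch of this idea in generic notation (the paper's exact likelihood and formulation may differ):

    % Intensity of event type i: baseline rate \mu_i plus excitation from past
    % events of all types j, with triggering matrix A = (\alpha_{ij}) and kernel \phi.
    \lambda_i(t) = \mu_i + \sum_{j=1}^{d} \alpha_{ij} \sum_{t_k^j < t} \phi\bigl(t - t_k^j\bigr),
    \qquad \alpha_{ij} \ge 0 .
    % Cardinality-regularized (\ell_0) learning of the causal structure:
    \min_{\mu, A} \; -\log L(\mu, A) + \tau \,\|A\|_0,
    \qquad \|A\|_0 = \#\{(i, j) : \alpha_{ij} \neq 0\}.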

FedDR -- Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization

1 code implementation • 5 Mar 2021 • Quoc Tran-Dinh, Nhan H. Pham, Dzung T. Phan, Lam M. Nguyen

These new algorithms can handle statistical and system heterogeneity, which are the two main challenges in federated learning, while achieving the best known communication complexity.

Federated Learning
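
Per the title, FedDR applies Douglas-Rachford splitting to a federated composite problem of the form min_x (1/n) Σ_i f_i(x) + g(x). Below is a minimal single-machine sketch of one plausible DR-style round on a toy instance; the quadratic local losses, the choice g = 0, and the constants eta and alpha are illustrative assumptions, not the paper's algorithm or tuning.

    import numpy as np

    # Toy composite problem: min_x (1/n) * sum_i f_i(x) + g(x), with
    # f_i(x) = 0.5 * ||x - c_i||^2 (one quadratic per "client") and g = 0.
    # A generic Douglas-Rachford splitting round, sketched in a federated layout.

    rng = np.random.default_rng(0)
    n, d = 5, 3
    c = rng.normal(size=(n, d))          # per-client data (minimizers of f_i)
    eta, alpha = 1.0, 1.0                # prox step size and relaxation (assumed)

    y = np.zeros((n, d))                 # per-client DR variables
    x_local = np.zeros((n, d))
    x_bar = np.zeros(d)                  # server model

    def prox_f(i, v, eta):
        # prox_{eta * f_i}(v) for f_i(x) = 0.5 * ||x - c_i||^2 has a closed form.
        return (v + eta * c[i]) / (1.0 + eta)

    for _ in range(50):
        x_hat = np.zeros((n, d))
        for i in range(n):               # "clients": local DR updates
            y[i] = y[i] + alpha * (x_bar - x_local[i])
            x_local[i] = prox_f(i, y[i], eta)
            x_hat[i] = 2.0 * x_local[i] - y[i]
        x_bar = x_hat.mean(axis=0)       # "server": average (prox of g = 0)

    print(x_bar, c.mean(axis=0))         # the two should roughly agree

With exact proxes and g = 0, the fixed point of this round is the minimizer of the average loss (the mean of the c_i here), which is what the printout checks.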

A Scalable MIP-based Method for Learning Optimal Multivariate Decision Trees

no code implementations • NeurIPS 2020 • Haoran Zhu, Pavankumar Murali, Dzung T. Phan, Lam M. Nguyen, Jayant R. Kalagnanam

Several recent publications report advances in training optimal decision trees (ODT) using mixed-integer programs (MIP), due to algorithmic advances in integer programming and a growing interest in addressing the inherent suboptimality of heuristic approaches such as CART.
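
The ingredient that distinguishes multivariate (oblique) trees from CART-style trees is the branching rule at each internal node; a generic sketch of the two rules, in assumed notation rather than the paper's exact MIP formulation:

    % Axis-aligned (CART-style) split at node t on a single feature j:
    x_j < b_t \;\Rightarrow\; \text{left}, \qquad x_j \ge b_t \;\Rightarrow\; \text{right}.
    % Multivariate (oblique) split at node t uses a full hyperplane (a_t, b_t):
    a_t^\top x < b_t \;\Rightarrow\; \text{left}, \qquad a_t^\top x \ge b_t \;\Rightarrow\; \text{right}.

MIP formulations for optimal trees typically optimize the split parameters (a_t, b_t) at all branch nodes jointly with binary routing and leaf-label variables, which is why scalability is the central challenge the title highlights.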

A Hybrid Stochastic Policy Gradient Algorithm for Reinforcement Learning

1 code implementation • 1 Mar 2020 • Nhan H. Pham, Lam M. Nguyen, Dzung T. Phan, Phuong Ha Nguyen, Marten van Dijk, Quoc Tran-Dinh

We propose a novel hybrid stochastic policy gradient estimator that combines an unbiased policy gradient estimator, the REINFORCE estimator, with a biased one, an adapted SARAH estimator, for policy optimization.

Reinforcement Learning (RL)
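
The combination described above can be written schematically as a convex mixture of the two estimators; a sketch in generic notation, where the mixing weight β_t, the importance-weighting details, and the exact SARAH adaptation are assumptions:

    % Hybrid policy-gradient estimate: unbiased REINFORCE term mixed with a
    % biased SARAH-style recursive correction, followed by a gradient-ascent step.
    v_t = (1 - \beta_t)\, \widehat{\nabla}^{\mathrm{RF}} J(\theta_t)
          + \beta_t \bigl( v_{t-1} + \widehat{\nabla} J(\theta_t) - \widehat{\nabla} J(\theta_{t-1}) \bigr),
    \qquad \theta_{t+1} = \theta_t + \eta_t\, v_t .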

A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization

no code implementations • 8 Jul 2019 • Quoc Tran-Dinh, Nhan H. Pham, Dzung T. Phan, Lam M. Nguyen

We introduce a new approach to develop stochastic optimization algorithms for a class of stochastic composite and possibly nonconvex optimization problems.

Stochastic Optimization
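
The problem class in question is stochastic composite optimization; written out in generic notation, under the standard reading of "composite" as smooth loss plus a possibly nonsmooth regularizer ψ:

    \min_{w \in \mathbb{R}^d} \; F(w) := \mathbb{E}_{\xi}\bigl[ f(w; \xi) \bigr] + \psi(w),
    % f(\cdot; \xi) smooth and possibly nonconvex; \psi convex, possibly nonsmooth
    % (e.g. an \ell_1 norm or a constraint indicator), handled via \mathrm{prox}_{\eta\psi}.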

Hybrid Stochastic Gradient Descent Algorithms for Stochastic Nonconvex Optimization

no code implementations • 15 May 2019 • Quoc Tran-Dinh, Nhan H. Pham, Dzung T. Phan, Lam M. Nguyen

We introduce a hybrid stochastic estimator to design stochastic gradient algorithms for solving stochastic optimization problems.

Stochastic Optimization
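
A minimal single-loop sketch of such a hybrid estimator on a toy finite-sum least-squares problem; the mixing weight beta, the step size, and the single-sample mini-batches are illustrative choices, not the paper's tuned parameters.

    import numpy as np

    # Toy finite-sum problem: min_w (1/n) * sum_i 0.5 * (a_i @ w - b_i)^2.
    # Hybrid estimator: convex combination of a plain (unbiased) stochastic
    # gradient and a SARAH-style recursive (biased) estimate.

    rng = np.random.default_rng(0)
    n, d = 200, 10
    A = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    b = A @ w_true

    def grad_i(w, i):
        # Gradient of the i-th component 0.5 * (a_i @ w - b_i)^2.
        return A[i] * (A[i] @ w - b[i])

    w_prev = np.zeros(d)
    w = np.zeros(d)
    v = np.mean([grad_i(w, i) for i in range(n)], axis=0)  # full gradient to start
    eta, beta = 0.02, 0.7                                   # illustrative constants

    for t in range(5000):
        w_prev, w = w, w - eta * v
        i = rng.integers(n)
        j = rng.integers(n)
        sarah_part = v + grad_i(w, i) - grad_i(w_prev, i)   # biased, recursive term
        sgd_part = grad_i(w, j)                             # unbiased term
        v = beta * sarah_part + (1.0 - beta) * sgd_part     # hybrid estimate

    print(np.linalg.norm(w - w_true))   # should be small on this toy problem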

ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization

1 code implementation • 15 Feb 2019 • Nhan H. Pham, Lam M. Nguyen, Dzung T. Phan, Quoc Tran-Dinh

We also specialize the algorithm to the non-composite case, where it covers existing state-of-the-art methods in terms of complexity bounds.
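
For composite objectives f + ψ, the proximal operator of ψ drives the update; a minimal sketch of one ProxSARAH-style step with ψ = λ‖·‖₁, where the soft-thresholding prox is standard but the averaging weight gamma and the step size eta are illustrative assumptions.

    import numpy as np

    def soft_threshold(v, tau):
        # Proximal operator of tau * ||.||_1 (soft-thresholding); this part is standard.
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def prox_sarah_style_step(w, w_prev, v_prev, grad_w, grad_w_prev,
                              eta=0.1, gamma=0.5, lam=0.01):
        """One ProxSARAH-style update for min_w f(w) + lam * ||w||_1 (sketch only).

        grad_w and grad_w_prev are mini-batch gradients of the smooth part f,
        evaluated at w and w_prev on the SAME mini-batch; eta, gamma, lam are
        illustrative constants, not the paper's tuned values.
        """
        v = v_prev + grad_w - grad_w_prev               # SARAH-style recursive estimate
        w_hat = soft_threshold(w - eta * v, eta * lam)  # proximal (composite) step
        w_next = (1.0 - gamma) * w + gamma * w_hat      # convex averaging step
        return w_next, w, v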

DTN: A Learning Rate Scheme with Convergence Rate of $\mathcal{O}(1/t)$ for SGD

no code implementations • 22 Jan 2019 • Lam M. Nguyen, Phuong Ha Nguyen, Dzung T. Phan, Jayant R. Kalagnanam, Marten van Dijk

This paper has some inconsistent results, i.e., we made some incorrect claims due to mistakes in applying the test criterion for a series.


Finite-Sum Smooth Optimization with SARAH

no code implementations • 22 Jan 2019 • Lam M. Nguyen, Marten van Dijk, Dzung T. Phan, Phuong Ha Nguyen, Tsui-Wei Weng, Jayant R. Kalagnanam

The total complexity (measured as the total number of gradient computations) of a stochastic first-order optimization algorithm that finds a first-order stationary point of a finite-sum smooth nonconvex objective function $F(w)=\frac{1}{n} \sum_{i=1}^n f_i(w)$ has been proven to be at least $\Omega(\sqrt{n}/\epsilon)$ for $n \leq \mathcal{O}(\epsilon^{-2})$ where $\epsilon$ denotes the attained accuracy $\mathbb{E}[ \|\nabla F(\tilde{w})\|^2] \leq \epsilon$ for the outputted approximation $\tilde{w}$ (Fang et al., 2018).
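
For reference, the SARAH recursion the title refers to, in its standard form (the inner/outer loop structure and step-size choices are omitted here):

    % Finite-sum objective and the SARAH recursive gradient estimator.
    F(w) = \frac{1}{n} \sum_{i=1}^{n} f_i(w), \qquad
    v_0 = \nabla F(w_0), \qquad
    v_t = \nabla f_{i_t}(w_t) - \nabla f_{i_t}(w_{t-1}) + v_{t-1}, \qquad
    w_{t+1} = w_t - \eta\, v_t .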

Characterization of Convex Objective Functions and Optimal Expected Convergence Rates for SGD

no code implementations • 9 Oct 2018 • Marten van Dijk, Lam M. Nguyen, Phuong Ha Nguyen, Dzung T. Phan

We study Stochastic Gradient Descent (SGD) with diminishing step sizes for convex objective functions.
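
"Diminishing step sizes" here refers to schedules in the spirit of the classical Robbins-Monro conditions; a generic statement of the SGD update and such a schedule (the specific schedules and conditions analyzed in the paper may differ):

    w_{t+1} = w_t - \eta_t \,\nabla f(w_t; \xi_t), \qquad
    \sum_{t=0}^{\infty} \eta_t = \infty, \qquad
    \sum_{t=0}^{\infty} \eta_t^2 < \infty
    \quad \text{(e.g. } \eta_t = \tfrac{\eta_0}{1 + \lambda t}\text{)} .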

When Does Stochastic Gradient Algorithm Work Well?

no code implementations • 18 Jan 2018 • Lam M. Nguyen, Nam H. Nguyen, Dzung T. Phan, Jayant R. Kalagnanam, Katya Scheinberg

In this paper, we consider a general stochastic optimization problem which is often at the core of supervised learning, such as deep learning and linear classification.

General Classification • Regression +1
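
The "general stochastic optimization problem" underlying supervised learning can be written as expected-loss minimization, approximated in practice by an empirical average (notation generic):

    \min_{w \in \mathbb{R}^d} \; F(w) = \mathbb{E}_{\xi \sim \mathcal{D}}\bigl[ \ell(w; \xi) \bigr]
    \;\approx\; \frac{1}{n} \sum_{i=1}^{n} \ell(w; \xi_i),
    % where \xi = (x, y) is a data point and \ell is, e.g., the logistic or squared loss.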
