Search Results for author: Kim-Chuan Toh

Found 22 papers, 1 paper with code

Developing Lagrangian-based Methods for Nonsmooth Nonconvex Optimization

no code implementations15 Apr 2024 Nachuan Xiao, Kuangyu Ding, Xiaoyin Hu, Kim-Chuan Toh

Preliminary numerical experiments on deep learning tasks illustrate that our proposed framework yields efficient variants of Lagrangian-based methods with convergence guarantees for nonconvex nonsmooth constrained optimization problems.

An Inexact Halpern Iteration with Application to Distributionally Robust Optimization

no code implementations8 Feb 2024 Ling Liang, Kim-Chuan Toh, Jia-Jie Zhu

The Halpern iteration for solving monotone inclusion problems has gained increasing interest in recent years due to its simple form and appealing convergence properties.
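
For context, the "simple form" refers to the classical Halpern iteration, which anchors every step back to the starting point. Below is a minimal Python sketch of the exact (not the paper's inexact) scheme, with a generic nonexpansive operator T and the standard anchoring weights as illustrative choices:

```python
import numpy as np

def halpern(T, x0, num_iters=100):
    """Classical Halpern iteration: x_{k+1} = lam_k * x0 + (1 - lam_k) * T(x_k).

    T  : a nonexpansive operator mapping R^n -> R^n
    x0 : anchor point, also used as the starting iterate
    """
    x = x0.copy()
    for k in range(num_iters):
        lam = 1.0 / (k + 2)  # standard anchoring weights: lam_k -> 0, sum lam_k = inf
        x = lam * x0 + (1.0 - lam) * T(x)
    return x

# Example: T is the projection onto the box [-1, 1]^n (nonexpansive);
# the iterates approach the fixed point of T closest to x0, i.e. the
# projection of x0 onto the box.
project_box = lambda x: np.clip(x, -1.0, 1.0)
print(halpern(project_box, np.array([3.0, -0.5, 2.0])))  # approx [1, -0.5, 1]
```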

On Partial Optimal Transport: Revising the Infeasibility of Sinkhorn and Efficient Gradient Methods

no code implementations21 Dec 2023 Anh Duc Nguyen, Tuan Dung Nguyen, Quang Minh Nguyen, Hoang H. Nguyen, Lam M. Nguyen, Kim-Chuan Toh

This paper studies the Partial Optimal Transport (POT) problem between two unbalanced measures with at most $n$ supports and its applications in various AI tasks such as color transfer or domain adaptation.

Domain Adaptation, Point Cloud Registration
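
As a rough reminder of the problem class (in my own notation, not necessarily the paper's): given a cost matrix $C$, unbalanced marginals $a, b$, and a prescribed amount of transported mass $s$, POT is the linear program

$$\min_{X \ge 0} \; \langle C, X\rangle \quad \text{s.t.} \quad X\mathbf{1} \le a, \quad X^{\top}\mathbf{1} \le b, \quad \mathbf{1}^{\top} X \mathbf{1} = s,$$

and Sinkhorn-type methods work with an entropically regularized version of this LP, which is where the infeasibility issue studied in the paper arises.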

Adam-family Methods with Decoupled Weight Decay in Deep Learning

no code implementations13 Oct 2023 Kuangyu Ding, Nachuan Xiao, Kim-Chuan Toh

As a practical application of our proposed framework, we propose a novel Adam-family method named Adam with Decoupled Weight Decay (AdamD), and establish its convergence properties under mild conditions.
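
The exact AdamD update is specified in the paper; as a point of reference, "decoupled" weight decay in the AdamW sense applies the decay directly to the parameters rather than folding it into the gradient before the moment estimates are formed. A minimal sketch of that idea (hyperparameter names are illustrative, not the paper's):

```python
import numpy as np

def adam_decoupled_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
                        eps=1e-8, weight_decay=1e-2):
    """One Adam step with AdamW-style decoupled weight decay.

    The decay is applied directly to the weights (last line) instead of being
    added to the gradient, so it does not enter the moment estimates m and v.
    """
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)   # bias correction, t = step count starting at 1
    v_hat = v / (1 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps) - lr * weight_decay * w
    return w, m, v
```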

Convergence Guarantees for Stochastic Subgradient Methods in Nonsmooth Nonconvex Optimization

no code implementations19 Jul 2023 Nachuan Xiao, Xiaoyin Hu, Kim-Chuan Toh

In this paper, we investigate the convergence properties of the stochastic gradient descent (SGD) method and its variants, especially in training neural networks built from nonsmooth activation functions.

Nonconvex Stochastic Bregman Proximal Gradient Method with Application to Deep Learning

no code implementations26 Jun 2023 Kuangyu Ding, Jingyang Li, Kim-Chuan Toh

Experimental results on representative benchmarks demonstrate the effectiveness and robustness of MSBPG in training neural networks.

Adam-family Methods for Nonsmooth Optimization with Convergence Guarantees

no code implementations6 May 2023 Nachuan Xiao, Xiaoyin Hu, Xin Liu, Kim-Chuan Toh

In this paper, we present a comprehensive study on the convergence properties of Adam-family methods for nonsmooth optimization, especially in the training of nonsmooth neural networks.

Tractable hierarchies of convex relaxations for polynomial optimization on the nonnegative orthant

no code implementations13 Sep 2022 Ngoc Hoang Anh Mai, Victor Magron, Jean-Bernard Lasserre, Kim-Chuan Toh

We consider polynomial optimization problems (POP) on a semialgebraic set contained in the nonnegative orthant (every POP on a compact set can be put in this format by a simple translation of the origin).
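
The translation mentioned here is elementary; in my own notation: if the feasible set $S$ is compact, every coordinate is bounded below on $S$, so one can set

$$\ell_i := \min_{x \in S} x_i \;(> -\infty \text{ by compactness}), \qquad y := x - \ell, \qquad S - \ell \subseteq \mathbb{R}^n_{+},$$

which turns the original POP into an equivalent one whose feasible set lies in the nonnegative orthant.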

Accelerating nuclear-norm regularized low-rank matrix optimization through Burer-Monteiro decomposition

no code implementations29 Apr 2022 Ching-pei Lee, Ling Liang, Tianyun Tang, Kim-Chuan Toh

This work proposes a rapid algorithm, BM-Global, for nuclear-norm-regularized convex and low-rank matrix optimization problems.

Recommendation Systems
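
For orientation (my notation, not necessarily the paper's): the Burer-Monteiro reformulation that such solvers exploit replaces the matrix variable by a low-rank factorization, using the variational form of the nuclear norm,

$$\min_{X} \; f(X) + \lambda \|X\|_{*} \quad \Longleftrightarrow \quad \min_{U \in \mathbb{R}^{m \times k},\, V \in \mathbb{R}^{n \times k}} \; f(UV^{\top}) + \frac{\lambda}{2}\left(\|U\|_F^{2} + \|V\|_F^{2}\right),$$

where the equivalence holds once the factorization rank $k$ is at least the rank of a solution of the left-hand problem. The right-hand problem is nonconvex but has far fewer variables when $k \ll \min(m, n)$.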

An Inexact Projected Gradient Method with Rounding and Lifting by Nonlinear Programming for Solving Rank-One Semidefinite Relaxation of Polynomial Optimization

1 code implementation28 May 2021 Heng Yang, Ling Liang, Luca Carlone, Kim-Chuan Toh

In particular, we first design a globally convergent inexact projected gradient method (iPGM) for solving the SDP that serves as the backbone of our framework.
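
This is not the paper's inexact scheme, but the core operation any projected gradient method for an SDP relies on is the Euclidean projection onto the positive semidefinite cone, which amounts to clipping negative eigenvalues; a minimal sketch:

```python
import numpy as np

def project_psd(S):
    """Euclidean projection of a symmetric matrix onto the PSD cone:
    set the negative eigenvalues to zero and reassemble."""
    S = 0.5 * (S + S.T)                      # symmetrize against round-off
    eigval, eigvec = np.linalg.eigh(S)
    return (eigvec * np.maximum(eigval, 0.0)) @ eigvec.T
```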

Learning Graph Laplacian with MCP

no code implementations22 Oct 2020 Yangjing Zhang, Kim-Chuan Toh, Defeng Sun

We consider the problem of learning a graph under the Laplacian constraint with a non-convex penalty: minimax concave penalty (MCP).
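
For reference, the minimax concave penalty applied to a scalar $t$, with parameters $\lambda > 0$ and $\gamma > 1$, is the standard definition

$$p_{\lambda,\gamma}(t) = \begin{cases} \lambda |t| - \dfrac{t^{2}}{2\gamma}, & |t| \le \gamma\lambda, \\[4pt] \dfrac{\gamma\lambda^{2}}{2}, & |t| > \gamma\lambda, \end{cases}$$

which behaves like the $\ell_1$ penalty near the origin but flattens out for large $|t|$, reducing the shrinkage bias of the lasso at the price of nonconvexity.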

Estimation of sparse Gaussian graphical models with hidden clustering structure

no code implementations17 Apr 2020 Meixia Lin, Defeng Sun, Kim-Chuan Toh, Chengjing Wang

The sparsity and clustering structure of the concentration matrix is enforced to reduce model complexity and describe inherent regularities.

Clustering
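
The paper's exact penalty is specified there; schematically, estimators in this family augment the Gaussian log-likelihood with terms that promote sparsity and shared (clustered) structure in the concentration matrix $\Theta$, something of the shape (illustrative only, not the paper's formulation)

$$\min_{\Theta \succ 0} \; -\log\det \Theta + \langle S, \Theta\rangle + \lambda_1 \sum_{i \ne j} |\Theta_{ij}| + \lambda_2\, \rho_{\mathrm{cluster}}(\Theta),$$

where $S$ is the sample covariance and $\rho_{\mathrm{cluster}}$ is a grouping or fusion term that encourages entries (or rows/columns) of $\Theta$ to share common values.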

Efficient algorithms for multivariate shape-constrained convex regression problems

no code implementations26 Feb 2020 Meixia Lin, Defeng Sun, Kim-Chuan Toh

We prove that the least squares estimator is computable via solving a constrained convex quadratic programming (QP) problem with $(n+1)d$ variables and at least $n(n-1)$ linear inequality constraints, where $n$ is the number of data points.

regression
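
The QP referred to here is, in a commonly used form (the paper's own formulation determines its exact variable count), the convex-regression least squares problem that estimates function values $\theta_i$ and subgradients $\xi_i \in \mathbb{R}^d$ at the data points $x_i$:

$$\min_{\theta \in \mathbb{R}^{n},\; \xi_1,\dots,\xi_n \in \mathbb{R}^{d}} \; \frac{1}{2}\sum_{i=1}^{n} (y_i - \theta_i)^2 \quad \text{s.t.} \quad \theta_j \ge \theta_i + \langle \xi_i,\, x_j - x_i\rangle \;\; \text{for all } i \ne j,$$

which accounts for the $n(n-1)$ linear inequality constraints mentioned above; additional shape constraints (e.g. monotonicity or box constraints on the gradients) add further linear constraints on the $\xi_i$.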

A sparse semismooth Newton based proximal majorization-minimization algorithm for nonconvex square-root-loss regression problems

no code implementations27 Mar 2019 Peipei Tang, Chengjing Wang, Defeng Sun, Kim-Chuan Toh

In this paper, we consider high-dimensional nonconvex square-root-loss regression problems and introduce a proximal majorization-minimization (PMM) algorithm for these problems.

regression
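
For orientation, a square-root-loss regression problem with a nonconvex regularizer $p$ (for instance MCP or SCAD) takes the form (illustrative notation)

$$\min_{x \in \mathbb{R}^{d}} \; \|Ax - b\|_{2} + \lambda\, p(x),$$

where the non-squared loss makes a good choice of $\lambda$ largely insensitive to the noise level; a majorization-minimization scheme such as PMM replaces the nonconvex part by a convex surrogate and solves a convex subproblem at each iteration.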

A dual Newton based preconditioned proximal point algorithm for exclusive lasso models

no code implementations1 Feb 2019 Meixia Lin, Defeng Sun, Kim-Chuan Toh, Yancheng Yuan

In addition, we derive the corresponding HS-Jacobian of the proximal mapping and analyze its structure, which plays an essential role in the efficient computation of the PPA subproblem by applying a semismooth Newton method to its dual.
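
For reference, the exclusive lasso regularizer over a partition of the coordinates into groups $g$ is usually written (weights omitted) as

$$\Delta(x) = \sum_{g} \|x_{g}\|_{1}^{2},$$

which encourages sparsity within each group rather than across groups; the PPA subproblems above require the proximal mapping of this regularizer together with a generalized (HS-)Jacobian of that mapping.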

Convex Clustering: Model, Theoretical Guarantee and Efficient Algorithm

no code implementations4 Oct 2018 Defeng Sun, Kim-Chuan Toh, Yancheng Yuan

The perfect recovery properties of the convex clustering model with uniformly weighted all-pairwise-differences regularization have been proved by Zhu et al. (2014) and Panahi et al. (2017).

Clustering
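
The model in question, for data points $a_1,\dots,a_n \in \mathbb{R}^{d}$ with uniform weights over all pairs, is the standard convex clustering problem

$$\min_{x_1,\dots,x_n \in \mathbb{R}^{d}} \; \frac{1}{2}\sum_{i=1}^{n} \|x_i - a_i\|^{2} + \gamma \sum_{i < j} \|x_i - x_j\|;$$

points whose optimal centroids $x_i$ coincide are assigned to the same cluster, and the regularization parameter $\gamma$ controls how many clusters emerge.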

A Fast Globally Linearly Convergent Algorithm for the Computation of Wasserstein Barycenters

no code implementations12 Sep 2018 Lei Yang, Jia Li, Defeng Sun, Kim-Chuan Toh

When the support points of the barycenter are pre-specified, this problem can be modeled as a linear programming (LP) problem whose size can be extremely large.
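
Schematically (up to weights on the $K$ terms, and in my own notation): with a pre-specified barycenter support of size $m$ and input distributions $\mu^{(1)},\dots,\mu^{(K)}$, the fixed-support barycenter problem is the LP

$$\min_{w \in \Delta_m,\; \Pi^{(1)},\dots,\Pi^{(K)} \ge 0} \; \sum_{k=1}^{K} \langle D^{(k)}, \Pi^{(k)}\rangle \quad \text{s.t.} \quad \Pi^{(k)} \mathbf{1} = w, \;\; (\Pi^{(k)})^{\top} \mathbf{1} = \mu^{(k)}, \;\; k = 1,\dots,K,$$

where $D^{(k)}$ collects the pairwise transport costs between the barycenter support and the support of $\mu^{(k)}$; the number of variables grows with the product of the support sizes, which is why the LP becomes extremely large.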

Efficient sparse semismooth Newton methods for the clustered lasso problem

no code implementations22 Aug 2018 Meixia Lin, Yong-Jin Liu, Defeng Sun, Kim-Chuan Toh

Based on the new formulation, we derive an efficient procedure for its computation.
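
For reference, the clustered lasso combines an $\ell_1$ penalty with a penalty on all pairwise differences of the coefficients; in a standard form (my notation),

$$\min_{x \in \mathbb{R}^{d}} \; \frac{1}{2}\|Ax - b\|^{2} + \lambda_1 \|x\|_{1} + \lambda_2 \sum_{i < j} |x_i - x_j|,$$

so that coefficients are simultaneously shrunk toward zero and toward one another, producing clustered estimates.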

An Efficient Semismooth Newton Based Algorithm for Convex Clustering

no code implementations ICML 2018 Yancheng Yuan, Defeng Sun, Kim-Chuan Toh

Clustering is arguably the most fundamental problem in unsupervised learning, and it remains an active topic in machine learning research because of its importance in many applications.

Clustering

Max-Norm Optimization for Robust Matrix Recovery

no code implementations24 Sep 2016 Ethan X. Fang, Han Liu, Kim-Chuan Toh, Wen-Xin Zhou

This paper studies the matrix completion problem under arbitrary sampling schemes.

Matrix Completion
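
For reference, the max-norm used here has the standard factorization characterization

$$\|X\|_{\max} = \min_{U, V :\; UV^{\top} = X} \; \|U\|_{2,\infty}\, \|V\|_{2,\infty},$$

where $\|U\|_{2,\infty}$ denotes the largest row $\ell_2$ norm of $U$; compared with the nuclear norm, this quantity is less sensitive to non-uniform sampling of the entries, which motivates its use under arbitrary sampling schemes.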
