no code implementations • 8 Mar 2024 • Xun Tang, Holakou Rahmanian, Michael Shavlovsky, Kiran Koshy Thekumparampil, Tesi Xiao, Lexing Ying
We derive the corresponding entropy-regularized formulation and introduce a Sinkhorn-type algorithm for such constrained OT problems, supported by theoretical guarantees.
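For context, classical entropy-regularized OT is solved by Sinkhorn's matrix-scaling iteration. Below is a minimal NumPy sketch of the standard (unconstrained) version; the marginals `r`, `c`, the regularization strength `reg`, and the tolerance are illustrative, and the paper's constrained variant is not reproduced here.

```python
import numpy as np

def sinkhorn(C, r, c, reg=0.1, n_iters=500, tol=1e-9):
    """Entropy-regularized OT via Sinkhorn matrix scaling.

    C: (m, n) cost matrix; r, c: row/column marginal distributions.
    Returns a transport plan P with row sums ~ r and column sums ~ c.
    """
    K = np.exp(-C / reg)            # Gibbs kernel
    u = np.ones_like(r)
    for _ in range(n_iters):
        u_prev = u
        v = c / (K.T @ u)           # scale to match column marginals
        u = r / (K @ v)             # scale to match row marginals
        if np.max(np.abs(u - u_prev)) < tol:
            break
    return u[:, None] * K * v[None, :]

# Example: transport between two uniform 5-point distributions.
rng = np.random.default_rng(0)
C = rng.random((5, 5))
r = np.full(5, 0.2)
c = np.full(5, 0.2)
P = sinkhorn(C, r, c)
print(P.sum(axis=1), P.sum(axis=0))  # both ~ [0.2, ..., 0.2]
```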
no code implementations • 20 Jan 2024 • Xun Tang, Michael Shavlovsky, Holakou Rahmanian, Elisa Tardini, Kiran Koshy Thekumparampil, Tesi Xiao, Lexing Ying
To achieve possibly super-exponential convergence, we present Sinkhorn-Newton-Sparse (SNS), an extension of the Sinkhorn algorithm that introduces early stopping for the matrix-scaling steps followed by a second stage featuring a Newton-type subroutine.
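A rough sketch of this two-stage structure on the entropic OT dual is below: a few early-stopped Sinkhorn scaling steps warm-start the dual potentials, then damped Newton steps finish the job. For simplicity this sketch uses a dense Hessian solve; the sparsification that gives SNS its name, and all parameter names here, are assumptions rather than the paper's exact method.

```python
import numpy as np

def sinkhorn_newton(C, r, c, reg=0.1, warm_iters=20, newton_iters=20,
                    damping=1e-8):
    """Two-stage entropic OT solver: early-stopped Sinkhorn scaling to
    warm-start the dual potentials, then damped Newton on the smooth,
    convex dual objective. Dense Hessian for clarity (not sparsified).
    """
    m, n = C.shape
    alpha, beta = np.zeros(m), np.zeros(n)

    def plan(a, b):
        return np.exp((a[:, None] + b[None, :] - C) / reg)

    # Stage 1: early-stopped Sinkhorn (log-domain scaling steps).
    for _ in range(warm_iters):
        P = plan(alpha, beta)
        alpha += reg * (np.log(r) - np.log(P.sum(axis=1)))
        P = plan(alpha, beta)
        beta += reg * (np.log(c) - np.log(P.sum(axis=0)))

    # Stage 2: damped Newton steps on the dual potentials.
    for _ in range(newton_iters):
        P = plan(alpha, beta)
        grad = np.concatenate([P.sum(axis=1) - r, P.sum(axis=0) - c])
        H = np.block([[np.diag(P.sum(axis=1)), P],
                      [P.T, np.diag(P.sum(axis=0))]]) / reg
        step = np.linalg.solve(H + damping * np.eye(m + n), -grad)
        alpha += step[:m]
        beta += step[m:]
    return plan(alpha, beta)
```

With the warm start from stage 1, full Newton steps are usually safe; a line search would make the sketch more robust.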
no code implementations • 22 Nov 2023 • Yinuo Ren, Tesi Xiao, Tanmay Gangwani, Anshuka Rangi, Holakou Rahmanian, Lexing Ying, Subhajit Sanyal
Multi-objective optimization (MOO) aims to optimize multiple, possibly conflicting, objectives simultaneously and has widespread applications.
no code implementations • 21 Jun 2023 • Xuxing Chen, Tesi Xiao, Krishnakumar Balasubramanian
In this paper, we introduce a novel fully single-loop and Hessian-inversion-free algorithmic framework for stochastic bilevel optimization and present a tighter analysis under standard smoothness assumptions (first-order Lipschitzness of the upper-level (UL) function and second-order Lipschitzness of the lower-level (LL) function).
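One common way to be single-loop and Hessian-inversion-free is to track the solution of the lower-level linear system with an auxiliary variable updated alongside the other iterates. The sketch below illustrates that pattern on a deterministic toy quadratic bilevel problem; the problem instance, step sizes, and variable names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
A = np.eye(d) * 2.0                  # lower-level Hessian (strongly convex)
B = rng.standard_normal((d, d))
y_target = rng.standard_normal(d)

# Upper level:  f(x, y) = 0.5 * ||y - y_target||^2
# Lower level:  g(x, y) = 0.5 * y^T A y - y^T B x,  so y*(x) = A^{-1} B x

x, y, z = np.zeros(d), np.zeros(d), np.zeros(d)
lr_x, lr_y, lr_z = 0.02, 0.1, 0.1     # z tracks [grad_yy g]^{-1} grad_y f

for t in range(3000):
    # One gradient step on the lower-level problem.
    y -= lr_y * (A @ y - B @ x)
    # One step on the linear system A z = grad_y f (no matrix inversion).
    z -= lr_z * (A @ z - (y - y_target))
    # Hypergradient estimate: grad_x f - (grad_xy g)^T z = B^T z here.
    x -= lr_x * (B.T @ z)

print(np.linalg.norm(y - y_target))   # upper-level residual, ~0 at optimum
```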
1 code implementation • 20 Feb 2023 • Tesi Xiao, Xuxing Chen, Krishnakumar Balasubramanian, Saeed Ghadimi
We focus on decentralized stochastic non-convex optimization, where $n$ agents work together to optimize a composite objective function which is a sum of a smooth term and a non-smooth convex term.
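As a toy illustration of this setting, the sketch below runs decentralized proximal-gradient steps (gossip averaging with a doubly stochastic mixing matrix, followed by a local prox step for an $\ell_1$ term) on least-squares local losses. The topology, losses, and constants are illustrative assumptions; the paper's stochastic algorithm is not reproduced.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(2)
n_agents, d, lam, lr = 4, 10, 0.05, 0.01

# Agent i holds a private smooth term f_i(x) = 0.5 * ||A_i x - b_i||^2;
# the shared non-smooth convex term is h(x) = lam * ||x||_1.
A = [rng.standard_normal((20, d)) for _ in range(n_agents)]
b = [rng.standard_normal(20) for _ in range(n_agents)]

# Doubly stochastic mixing matrix for a ring topology.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

X = np.zeros((n_agents, d))           # row i = agent i's local iterate
for t in range(500):
    grads = np.stack([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(n_agents)])
    X = soft_threshold(W @ X - lr * grads, lr * lam)  # gossip + local prox

print(np.abs(X - X.mean(axis=0)).max())  # consensus gap shrinks over time
```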
no code implementations • 17 Aug 2022 • Tesi Xiao, Xia Xiao, Ming Chen, Youlong Chen
However, most existing NAS-based approaches suffer from high computational cost, the curse of dimensionality in the search space, and the discrepancy between the continuous search space and the discrete candidate space.
no code implementations • 9 Feb 2022 • Tesi Xiao, Krishnakumar Balasubramanian, Saeed Ghadimi
We propose a projection-free conditional gradient-type algorithm for smooth stochastic multi-level composition optimization, where the objective function is a nested composition of $T$ functions and the constraint set is a closed convex set.
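For intuition, a conditional-gradient (Frank-Wolfe) method replaces projections with a linear minimization oracle (LMO) over the constraint set. The sketch below shows a single-level stochastic variant with gradient averaging over an $\ell_1$ ball; all names and constants are assumptions, and the paper's handling of $T$-level nested compositions is not reproduced.

```python
import numpy as np

def lmo_l1(grad, radius=1.0):
    """LMO over the l1 ball: argmin_{||s||_1 <= radius} <grad, s>,
    attained at a signed vertex of the ball."""
    i = np.argmax(np.abs(grad))
    s = np.zeros_like(grad)
    s[i] = -radius * np.sign(grad[i])
    return s

rng = np.random.default_rng(3)
d = 20
A = rng.standard_normal((100, d))
b = rng.standard_normal(100)

x = np.zeros(d)                        # x0 lies in the constraint set
g = np.zeros(d)                        # running (averaged) gradient estimate
for t in range(1, 501):
    batch = rng.integers(0, 100, size=10)
    stoch_grad = A[batch].T @ (A[batch] @ x - b[batch]) / len(batch)
    rho = 2.0 / (t + 1)                # averaging weight damps the noise
    g = (1 - rho) * g + rho * stoch_grad
    s = lmo_l1(g, radius=2.0)
    gamma = 2.0 / (t + 2)              # standard Frank-Wolfe step size
    x = (1 - gamma) * x + gamma * s    # convex combination stays feasible
```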
no code implementations • 10 Feb 2021 • Yanhao Jin, Tesi Xiao, Krishnakumar Balasubramanian
Statistical machine learning models trained with stochastic gradient algorithms are increasingly being deployed in critical scientific applications.
no code implementations • 15 Jun 2020 • Tesi Xiao, Krishnakumar Balasubramanian, Saeed Ghadimi
We analyze stochastic conditional gradient methods for constrained optimization problems arising in over-parametrized machine learning.
1 code implementation • 5 Jun 2019 • Xuanqing Liu, Tesi Xiao, Si Si, Qin Cao, Sanjiv Kumar, Cho-Jui Hsieh
In this paper, we propose a new continuous neural network framework called Neural Stochastic Differential Equation (Neural SDE) network, which naturally incorporates various commonly used regularization mechanisms based on random noise injection.
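A minimal PyTorch sketch of the idea, assuming an Euler-Maruyama discretization with a constant diagonal diffusion (one simple noise-injection choice; the paper considers several), follows.

```python
import torch
import torch.nn as nn

class NeuralSDEBlock(nn.Module):
    """Continuous-depth block integrated with Euler-Maruyama:
    dh = f(h) dt + sigma dW_t. The diffusion term injects noise during
    training (acting as a regularizer); at eval time the noise is off,
    giving a deterministic Neural-ODE-style forward pass.
    """
    def __init__(self, dim, sigma=0.1, n_steps=10, t_end=1.0):
        super().__init__()
        self.drift = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                                   nn.Linear(dim, dim))
        self.sigma, self.n_steps, self.dt = sigma, n_steps, t_end / n_steps

    def forward(self, h):
        for _ in range(self.n_steps):
            noise = torch.randn_like(h) if self.training else 0.0
            h = h + self.drift(h) * self.dt \
                  + self.sigma * (self.dt ** 0.5) * noise
        return h

# Usage: plug between an encoder and a classifier head.
block = NeuralSDEBlock(dim=32)
x = torch.randn(8, 32)
print(block(x).shape)  # torch.Size([8, 32])
```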