no code implementations • 5 Dec 2022 • Tengyuan Liang
In particular, we establish two directional convergence results that exhibit distinctive phenomena: (1) a blessing in regression, where the adversarial covariate shifts at an exponential rate toward an optimal experimental design for rapid subsequent learning; (2) a curse in classification, where the adversarial covariate shifts at a fast subquadratic rate toward the hardest experimental design, trapping subsequent learning.
no code implementations • 9 Apr 2022 • Tengyuan Liang, Subhabrata Sen, Pragya Sur
We provide a "path-wise" characterization of the overlap between the output of the Langevin algorithm and the planted signal.
no code implementations • 9 Feb 2022 • Wenxuan Guo, YoonHaeng Hur, Tengyuan Liang, Christopher Ryan
Motivated by robust dynamic resource allocation in operations research, we study the Online Learning to Transport (OLT) problem, where the decision variable is a probability measure, an infinite-dimensional object.
no code implementations • 28 Sep 2021 • YoonHaeng Hur, Wenxuan Guo, Tengyuan Liang
Motivated by the seminal work on distance and isomorphism between metric measure spaces, we propose a new notion called the Reversible Gromov-Monge (RGM) distance and study how RGM can be used to design new transform samplers to perform simulation-based inference.
no code implementations • 31 Mar 2021 • Tengyuan Liang
We propose a computationally efficient method to construct nonparametric, heteroscedastic prediction bands for uncertainty quantification, with or without any user-specified predictive model.
no code implementations • 28 Jan 2021 • Tengyuan Liang, Benjamin Recht
Under the assumption that the data is independently and identically distributed, the mistake bound implies that MNIC generalizes at a rate proportional to the norm of the interpolating solution and inversely proportional to the number of data points.
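Schematically (an illustration of the scaling only, not the paper's exact statement), such a bound reads

$$ \mathbb{E}\big[\mathrm{err}(\hat f)\big] \;\lesssim\; \frac{\|\hat f\|}{n}, $$

where $\|\hat f\|$ is the norm of the interpolating solution and $n$ is the number of data points.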
no code implementations • 28 Oct 2020 • Max H. Farrell, Tengyuan Liang, Sanjog Misra
These functions are the key inputs into the finite-dimensional parameter of inferential interest.
no code implementations • 9 Apr 2020 • Tengyuan Liang, Hai Tran-Bach
We utilize a connection between compositional kernels and branching processes via Mehler's formula to study deep neural networks.
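For reference, one standard form of Mehler's formula, stated for the physicists' Hermite polynomials $H_k$ with $|\rho| < 1$, is

$$ \sum_{k=0}^{\infty} \frac{H_k(x)\, H_k(y)}{2^k\, k!}\, \rho^k \;=\; \frac{1}{\sqrt{1-\rho^2}} \exp\!\left( \frac{2\rho x y - \rho^2 (x^2 + y^2)}{1-\rho^2} \right). $$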
no code implementations • 5 Feb 2020 • Tengyuan Liang, Pragya Sur
This paper establishes a precise high-dimensional asymptotic theory for boosting on separable data, taking statistical and computational perspectives.
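For context, boosting on separable data is classically tied to the maximum $\ell_1$-margin problem, which (schematically, not the paper's precise object) reads

$$ \max_{\beta \neq 0} \; \min_{1 \le i \le n} \; \frac{y_i\, x_i^\top \beta}{\|\beta\|_1}. $$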
no code implementations • 2 Nov 2019 • Tengyuan Liang
Curiously, we show that estimating the integral probability metric (IPM) itself between probability measures is not significantly easier than estimating the probability measures under the IPM.
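Recall the definition of the IPM indexed by a function class $\mathcal{F}$:

$$ d_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{f \in \mathcal{F}} \left| \int f \, d\mu - \int f \, d\nu \right|. $$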
no code implementations • 27 Aug 2019 • Tengyuan Liang, Alexander Rakhlin, Xiyu Zhai
We study the risk of minimum-norm interpolants of data in Reproducing Kernel Hilbert Spaces.
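For concreteness, with kernel matrix $K = [k(x_i, x_j)]_{i,j=1}^n$ assumed invertible and labels $Y = (y_1, \dots, y_n)^\top$, the minimum-norm interpolant has the standard closed form

$$ \hat f \;=\; \arg\min_{f \in \mathcal{H}} \|f\|_{\mathcal{H}} \;\; \text{subject to} \;\; f(x_i) = y_i \;\, \forall i, \qquad \hat f(x) \;=\; K(x, X)\, K^{-1} Y, $$

where $K(x, X) = (k(x, x_1), \dots, k(x, x_n))$.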
no code implementations • 27 Aug 2019 • Tengyuan Liang
We study the minimax optimal rate for estimating the Wasserstein-$1$ metric between two unknown probability measures based on $n$ i.i.d. samples.
no code implementations • 21 Jan 2019 • Xialiang Dou, Tengyuan Liang
The result formalizes the representation and approximation benefits of neural networks.
no code implementations • 7 Nov 2018 • Tengyuan Liang
On the nonparametric end, we derive the optimal minimax rates for distribution estimation under the adversarial framework.
1 code implementation • 26 Sep 2018 • Max H. Farrell, Tengyuan Liang, Sanjog Misra
We establish novel rates of convergence for deep feedforward neural nets.
no code implementations • 1 Aug 2018 • Tengyuan Liang, Alexander Rakhlin
In the absence of explicit regularization, Kernel "Ridgeless" Regression with nonlinear kernels has the potential to fit the training data perfectly.
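A minimal numerical sketch of kernel "ridgeless" regression, i.e. the minimum-norm interpolant, with an RBF kernel (the kernel choice and pseudo-inverse fallback are illustrative assumptions, not specifics from the paper):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of A and rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def ridgeless_fit(X, y, gamma=1.0):
    """Minimum-norm interpolant: alpha solves K alpha = y."""
    K = rbf_kernel(X, X, gamma)
    # pinv handles (near-)singular kernel matrices; the exact interpolant
    # uses K^{-1} when K is invertible.
    return np.linalg.pinv(K) @ y

def ridgeless_predict(X_train, alpha, X_test, gamma=1.0):
    return rbf_kernel(X_test, X_train, gamma) @ alpha

# Tiny usage example: the fit interpolates the training data exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=20)
alpha = ridgeless_fit(X, y)
print(np.allclose(ridgeless_predict(X, alpha, X), y, atol=1e-6))  # True
```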
no code implementations • 18 Feb 2018 • Belinda Tzen, Tengyuan Liang, Maxim Raginsky
For a particular local optimum of the empirical risk, with an arbitrary initialization, we show that, with high probability, at least one of the following two events will occur: (1) the Langevin trajectory ends up somewhere outside the $\varepsilon$-neighborhood of this particular optimum within a short recurrence time; (2) it enters this $\varepsilon$-neighborhood by the recurrence time and stays there until a potentially exponentially long escape time.
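A minimal sketch of the discretized Langevin dynamics such an analysis concerns, on a generic empirical-risk gradient (the function names and the toy quadratic risk are illustrative assumptions):

```python
import numpy as np

def langevin_trajectory(grad_risk, theta0, eta=1e-3, beta=10.0,
                        n_steps=5000, seed=0):
    """Discretized Langevin dynamics:
    theta_{t+1} = theta_t - eta * grad(theta_t) + sqrt(2*eta/beta) * N(0, I).
    """
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    path = [theta.copy()]
    for _ in range(n_steps):
        noise = rng.normal(size=theta.shape)
        theta = theta - eta * grad_risk(theta) + np.sqrt(2 * eta / beta) * noise
        path.append(theta.copy())
    return np.array(path)

# Toy example: quadratic risk with a local optimum at the origin.
path = langevin_trajectory(lambda th: th, theta0=np.ones(2))
print(path[-1])  # hovers near 0, up to temperature-driven fluctuations
```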
no code implementations • 16 Feb 2018 • Tengyuan Liang, James Stokes
Motivated by the pursuit of a systematic computational and algorithmic understanding of Generative Adversarial Networks (GANs), we present a simple yet unified non-asymptotic local convergence theory for smooth two-player games, which subsumes several discrete-time gradient-based saddle point dynamics.
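One example of such a discrete-time saddle-point dynamic is simultaneous gradient descent-ascent on a smooth two-player game $\min_x \max_y f(x,y)$; a minimal sketch on a toy bilinear-plus-quadratic game (the game itself is an assumption for illustration):

```python
def simultaneous_gda(grad_x, grad_y, x0, y0, eta=0.05, n_steps=500):
    """Simultaneous updates: x descends f, y ascends f."""
    x, y = float(x0), float(y0)
    for _ in range(n_steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x, y = x - eta * gx, y + eta * gy
    return x, y

# Toy game f(x, y) = x*y + 0.5*x**2 - 0.5*y**2 with saddle at (0, 0).
x, y = simultaneous_gda(grad_x=lambda x, y: y + x,
                        grad_y=lambda x, y: x - y,
                        x0=1.0, y0=1.0)
print(x, y)  # converges locally toward the saddle point (0, 0)
```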
no code implementations • 21 Dec 2017 • Tengyuan Liang
We study in this paper the rate of convergence for learning densities under the Generative Adversarial Networks (GAN) framework, borrowing insights from nonparametric statistics.
no code implementations • 20 Dec 2017 • Tengyuan Liang, Weijie Su
Modern statistical inference tasks often require iterative optimization methods to compute the solution.
1 code implementation • 5 Nov 2017 • Tengyuan Liang, Tomaso Poggio, Alexander Rakhlin, James Stokes
We study the relationship between geometry and capacity measures for deep neural networks from an invariance viewpoint.
no code implementations • 12 Sep 2017 • T. Tony Cai, Tengyuan Liang, Alexander Rakhlin
We develop an optimally weighted message passing algorithm to reconstruct labels for the stochastic block model (SBM), based on the minimum energy flow and the eigenvectors of a certain Markov transition matrix.
no code implementations • ICML 2017 • Satyen Kale, Zohar Karnin, Tengyuan Liang, Dávid Pál
Online sparse linear regression is an online problem where an algorithm repeatedly chooses a subset of coordinates to observe in an adversarially chosen feature vector, makes a real-valued prediction, receives the true label, and incurs the squared loss.
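A minimal sketch of this interaction protocol (the stochastic environment and the naive fixed-budget learner below are illustrative assumptions, not the algorithms or the adversarial setting analyzed in the paper):

```python
import numpy as np

def online_sparse_regression(T=1000, d=10, k=3, seed=0):
    """Each round: pick k coordinates, observe only those entries of x_t,
    predict, then observe the label and incur squared loss."""
    rng = np.random.default_rng(seed)
    w_true = np.zeros(d); w_true[:k] = 1.0         # hidden sparse target
    w_hat = np.zeros(d)
    total_loss = 0.0
    for t in range(T):
        x = rng.normal(size=d)
        S = np.argsort(-np.abs(w_hat))[:k]         # naive: probe top-k weights
        x_obs = np.zeros(d); x_obs[S] = x[S]       # only chosen coords revealed
        y_hat = w_hat @ x_obs                      # real-valued prediction
        y = w_true @ x + 0.1 * rng.normal()        # true label revealed
        total_loss += (y - y_hat) ** 2
        w_hat[S] += 0.05 * (y - y_hat) * x_obs[S]  # gradient step on observed
    return total_loss / T

print(online_sparse_regression())
```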
no code implementations • 21 Apr 2016 • T. Tony Cai, Tengyuan Liang, Alexander Rakhlin
In this paper, we study detection and fast reconstruction of the celebrated Watts-Strogatz (WS) small-world random graph model (Watts and Strogatz, 1998), which aims to describe real-world complex networks that exhibit both high clustering and short average path length.
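For concreteness, the WS model can be sampled and its two signature properties checked directly, e.g. with networkx (a minimal sketch; the parameter values are arbitrary):

```python
import networkx as nx

# Watts-Strogatz graph: n nodes on a ring, each wired to its k nearest
# neighbors, with each edge rewired independently with probability p.
G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1, seed=0)

print("clustering:", nx.average_clustering(G))          # stays high
print("avg path:", nx.average_shortest_path_length(G))  # stays short
```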
no code implementations • 22 Mar 2016 • T. Tony Cai, Tengyuan Liang, Alexander Rakhlin
We study the community detection and recovery problem in partially-labeled stochastic block models (SBM).
no code implementations • 21 Feb 2015 • Tengyuan Liang, Alexander Rakhlin, Karthik Sridharan
We consider regression with square loss and general classes of functions without the boundedness assumption.
no code implementations • 6 Feb 2015 • T. Tony Cai, Tengyuan Liang, Alexander Rakhlin
The second threshold, $\mathsf{SNR}_s$, captures the statistical boundary, below which no method can succeed with probability going to one in the minimax sense.
no code implementations • 28 Jan 2015 • Alexandre Belloni, Tengyuan Liang, Hariharan Narayanan, Alexander Rakhlin
We consider the problem of optimizing an approximately convex function over a bounded convex set in $\mathbb{R}^n$ using only function evaluations.
no code implementations • 17 Apr 2014 • T. Tony Cai, Tengyuan Liang, Alexander Rakhlin
This paper presents a unified geometric framework for the statistical analysis of a general ill-posed linear inverse model which includes as special cases noisy compressed sensing, sign vector recovery, trace regression, orthogonal matrix estimation, and noisy matrix completion.
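Schematically, each of these special cases fits the common linear inverse form

$$ y \;=\; \mathfrak{X}(\theta^*) + z, $$

with $\mathfrak{X}$ a known linear operator, $\theta^*$ a structured parameter (sparse vector, sign vector, low-rank or orthogonal matrix), and $z$ noise; the notation here is illustrative rather than the paper's.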
no code implementations • 11 Feb 2014 • Tengyuan Liang, Hariharan Narayanan, Alexander Rakhlin
The method is based on a random walk (the \emph{Ball Walk}) on the epigraph of the function.
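A minimal sketch of one step of a generic Metropolis-type ball walk on an abstract membership oracle (the paper's epigraph construction and tolerances are not reproduced here; names and parameters are illustrative):

```python
import numpy as np

def ball_walk(in_set, x0, radius=0.1, n_steps=1000, seed=0):
    """Ball Walk: propose a uniform point in a ball around the current
    state; move there if it stays in the set, else stay put."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        # Uniform proposal in the ball: uniform direction, radius ~ R * U^(1/n).
        d = rng.normal(size=x.shape)
        d *= radius * rng.uniform() ** (1.0 / x.size) / np.linalg.norm(d)
        y = x + d
        if in_set(y):
            x = y
    return x

# Usage: walk inside the unit ball in R^5.
x = ball_walk(lambda v: np.linalg.norm(v) <= 1.0, x0=np.zeros(5))
print(x)
```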