Search Results for author: Tengyuan Liang

Found 30 papers, 2 papers with code

Blessings and Curses of Covariate Shifts: Adversarial Learning Dynamics, Directional Convergence, and Equilibria

no code implementations5 Dec 2022 Tengyuan Liang

In particular, we establish two directional convergence results that exhibit distinctive phenomena: (1) a blessing in regression, where the adversarial covariate shifts converge at an exponential rate to an optimal experimental design for rapid subsequent learning; (2) a curse in classification, where the adversarial covariate shifts converge at a subquadratic rate to the hardest experimental design, trapping subsequent learning.

Experimental Design, regression

High-dimensional Asymptotics of Langevin Dynamics in Spiked Matrix Models

no code implementations9 Apr 2022 Tengyuan Liang, Subhabrata Sen, Pragya Sur

We provide a "path-wise" characterization of the overlap between the output of the Langevin algorithm and the planted signal.

Online Learning to Transport via the Minimal Selection Principle

no code implementations9 Feb 2022 Wenxuan Guo, YoonHaeng Hur, Tengyuan Liang, Christopher Ryan

Motivated by robust dynamic resource allocation in operations research, we study the Online Learning to Transport (OLT) problem, where the decision variable is a probability measure, an infinite-dimensional object.

Reversible Gromov-Monge Sampler for Simulation-Based Inference

no code implementations28 Sep 2021 YoonHaeng Hur, Wenxuan Guo, Tengyuan Liang

Motivated by the seminal work on distance and isomorphism between metric measure spaces, we propose a new notion called the Reversible Gromov-Monge (RGM) distance and study how RGM can be used to design new transform samplers to perform simulation-based inference.

Universal Prediction Band via Semi-Definite Programming

no code implementations31 Mar 2021 Tengyuan Liang

We propose a computationally efficient method to construct nonparametric, heteroscedastic prediction bands for uncertainty quantification, with or without any user-specified predictive model.

Conformal Prediction, Uncertainty Quantification

Interpolating Classifiers Make Few Mistakes

no code implementations28 Jan 2021 Tengyuan Liang, Benjamin Recht

Under the assumption that the data is independently and identically distributed, the mistake bound implies that MNIC (the minimum-norm interpolating classifier) generalizes at a rate proportional to the norm of the interpolating solution and inversely proportional to the number of data points.
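
Read schematically, that rate has the shape (illustrative only; the paper's precise statement fixes the relevant norm and constants):

$$\mathbb{E}\big[\mathrm{err}(\mathrm{MNIC})\big] \;\lesssim\; \frac{\|\hat{f}\|}{n},$$

where $\|\hat{f}\|$ is the norm of the interpolating solution and $n$ is the number of data points.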

Deep Learning for Individual Heterogeneity: An Automatic Inference Framework

no code implementations28 Oct 2020 Max H. Farrell, Tengyuan Liang, Sanjog Misra

The heterogeneity functions learned by the deep neural networks are the key inputs into the finite-dimensional parameter of inferential interest.

Additive models

Mehler's Formula, Branching Process, and Compositional Kernels of Deep Neural Networks

no code implementations9 Apr 2020 Tengyuan Liang, Hai Tran-Bach

We utilize a connection between compositional kernels and branching processes via Mehler's formula to study deep neural networks.

Memorization
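
For reference, the classical Mehler formula underlying this connection can be stated in the probabilists' Hermite basis $He_n$ (quoted here for orientation, not in the paper's exact notation):

$$\sum_{n=0}^{\infty} \frac{\rho^n}{n!}\, He_n(x)\, He_n(y) \;=\; \frac{1}{\sqrt{1-\rho^2}} \exp\!\left( \frac{2\rho x y - \rho^2 (x^2 + y^2)}{2(1-\rho^2)} \right), \qquad |\rho| < 1.$$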

A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-$\ell_1$-Norm Interpolated Classifiers

no code implementations5 Feb 2020 Tengyuan Liang, Pragya Sur

This paper establishes a precise high-dimensional asymptotic theory for boosting on separable data, taking statistical and computational perspectives.

Estimating Certain Integral Probability Metric (IPM) is as Hard as Estimating under the IPM

no code implementations2 Nov 2019 Tengyuan Liang

Curiously, we show that estimating the IPM itself between probability measures is not significantly easier than estimating the probability measures under the IPM.
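
For context, the integral probability metric (IPM) indexed by a function class $\mathcal{F}$ is

$$d_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{f \in \mathcal{F}} \left| \int f \, d\mu - \int f \, d\nu \right|,$$

so the statement compares estimating the scalar $d_{\mathcal{F}}(\mu, \nu)$ with estimating the measures themselves under $d_{\mathcal{F}}$.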

On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels

no code implementations27 Aug 2019 Tengyuan Liang, Alexander Rakhlin, Xiyu Zhai

We study the risk of minimum-norm interpolants of data in Reproducing Kernel Hilbert Spaces.

On the Minimax Optimality of Estimating the Wasserstein Metric

no code implementations27 Aug 2019 Tengyuan Liang

We study the minimax optimal rate for estimating the Wasserstein-$1$ metric between two unknown probability measures based on $n$ i.i.d. samples.
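
For orientation, $W_1$ is the IPM generated by the $1$-Lipschitz functions, via Kantorovich-Rubinstein duality:

$$W_1(\mu, \nu) \;=\; \sup_{\mathrm{Lip}(f) \le 1} \Big\{ \mathbb{E}_{X \sim \mu}[f(X)] - \mathbb{E}_{Y \sim \nu}[f(Y)] \Big\}.$$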

How Well Generative Adversarial Networks Learn Distributions

no code implementations7 Nov 2018 Tengyuan Liang

On the nonparametric end, we derive the optimal minimax rates for distribution estimation under the adversarial framework.

Density Estimation

Just Interpolate: Kernel "Ridgeless" Regression Can Generalize

no code implementations1 Aug 2018 Tengyuan Liang, Alexander Rakhlin

In the absence of explicit regularization, Kernel "Ridgeless" Regression with nonlinear kernels has the potential to fit the training data perfectly.

regression
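
A minimal sketch of the estimator in question: the minimum-norm kernel interpolant $\hat{f}(x) = K(x, X) K(X, X)^{+} y$ with no ridge term (the RBF kernel and toy data below are illustrative assumptions, not choices from the paper):

    import numpy as np

    def rbf_kernel(A, B, bandwidth=1.0):
        """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq_dists / (2 * bandwidth ** 2))

    def ridgeless_fit_predict(X_train, y_train, X_test, bandwidth=1.0):
        """Minimum-norm interpolant: no explicit regularization, pseudo-inverse for stability."""
        alpha = np.linalg.pinv(rbf_kernel(X_train, X_train, bandwidth)) @ y_train
        return rbf_kernel(X_test, X_train, bandwidth) @ alpha

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 5))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
    print(np.abs(ridgeless_fit_predict(X, y, X) - y).max())  # ~0: fits the training data perfectly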

Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability

no code implementations18 Feb 2018 Belinda Tzen, Tengyuan Liang, Maxim Raginsky

For a particular local optimum of the empirical risk, with an arbitrary initialization, we show that, with high probability, at least one of the following two events will occur: (1) the Langevin trajectory ends up somewhere outside the $\varepsilon$-neighborhood of this particular optimum within a short recurrence time; (2) it enters this $\varepsilon$-neighborhood by the recurrence time and stays there until a potentially exponentially long escape time.
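
A minimal sketch of the (unadjusted) Langevin algorithm whose trajectory is analyzed above, run on a toy double-well risk (the risk, step size, and inverse temperature are illustrative assumptions):

    import numpy as np

    def langevin(grad_F, x0, step=1e-3, beta=8.0, n_steps=10_000, seed=0):
        """Langevin update: x <- x - step * grad_F(x) + sqrt(2 * step / beta) * noise."""
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        for _ in range(n_steps):
            x = x - step * grad_F(x) + np.sqrt(2 * step / beta) * rng.normal(size=x.shape)
        return x

    # Double-well risk F(x) = (x^2 - 1)^2 with local minima at +1 and -1; started
    # near +1 with small noise, the iterate stays in that epsilon-neighborhood for
    # a long stretch before escaping -- the metastability the result quantifies.
    grad_F = lambda x: 4 * x * (x ** 2 - 1)
    print(langevin(grad_F, x0=[0.9]))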

Interaction Matters: A Note on Non-asymptotic Local Convergence of Generative Adversarial Networks

no code implementations16 Feb 2018 Tengyuan Liang, James Stokes

Motivated by the pursuit of a systematic computational and algorithmic understanding of Generative Adversarial Networks (GANs), we present a simple yet unified non-asymptotic local convergence theory for smooth two-player games, which subsumes several discrete-time gradient-based saddle point dynamics.
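
A minimal sketch of one discrete-time saddle-point dynamic such a theory covers: simultaneous gradient descent-ascent on the bilinear game $f(x, y) = xy$ (the game and step size are illustrative assumptions):

    import numpy as np

    def simultaneous_gda(x, y, eta=0.1, n_steps=100):
        """x minimizes f(x, y) = x * y, y maximizes it; both update simultaneously."""
        for _ in range(n_steps):
            grad_x, grad_y = y, x                      # df/dx = y, df/dy = x
            x, y = x - eta * grad_x, y + eta * grad_y
        return x, y

    # The distance to the equilibrium (0, 0) grows by a factor (1 + eta^2)^(1/2) per
    # step, so plain simultaneous updates spiral outward -- the kind of
    # non-convergence a local convergence theory for two-player games must explain.
    print(np.linalg.norm(simultaneous_gda(1.0, 1.0)))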

How Well Can Generative Adversarial Networks Learn Densities: A Nonparametric View

no code implementations21 Dec 2017 Tengyuan Liang

We study in this paper the rate of convergence for learning densities under the Generative Adversarial Networks (GAN) framework, borrowing insights from nonparametric statistics.

Generalization Bounds

Fisher-Rao Metric, Geometry, and Complexity of Neural Networks

1 code implementation5 Nov 2017 Tengyuan Liang, Tomaso Poggio, Alexander Rakhlin, James Stokes

We study the relationship between geometry and capacity measures for deep neural networks from an invariance viewpoint.

Weighted Message Passing and Minimum Energy Flow for Heterogeneous Stochastic Block Models with Side Information

no code implementations12 Sep 2017 T. Tony Cai, Tengyuan Liang, Alexander Rakhlin

We develop an optimally weighted message passing algorithm to reconstruct labels for SBM based on the minimum energy flow and the eigenvectors of a certain Markov transition matrix.

Community Detection

Adaptive Feature Selection: Computationally Efficient Online Sparse Linear Regression under RIP

no code implementations ICML 2017 Satyen Kale, Zohar Karnin, Tengyuan Liang, Dávid Pál

Online sparse linear regression is an online problem where an algorithm repeatedly chooses a subset of coordinates to observe in an adversarially chosen feature vector, makes a real-valued prediction, receives the true label, and incurs the squared loss.

feature selection, regression
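
A minimal sketch of this online protocol (the learner below, a gradient step on randomly chosen coordinates, is a placeholder baseline, not the paper's algorithm):

    import numpy as np

    rng = np.random.default_rng(0)
    d, k, T = 20, 3, 500                          # dimension, observation budget, rounds
    theta = np.zeros(d); theta[:k] = 1.0          # sparse ground truth (illustrative)
    w = np.zeros(d)                               # learner's running estimate
    total_loss = 0.0
    for t in range(T):
        x = rng.normal(size=d)                    # feature vector (adversarial in general)
        S = rng.choice(d, size=k, replace=False)  # choose a subset of coordinates to observe
        y_hat = w[S] @ x[S]                       # real-valued prediction from observed entries
        y = theta @ x + 0.1 * rng.normal()        # the true label is then revealed
        total_loss += (y - y_hat) ** 2            # squared loss is incurred
        w[S] += 0.01 * (y - y_hat) * x[S]         # partial-information gradient step
    print(total_loss / T)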

On Detection and Structural Reconstruction of Small-World Random Networks

no code implementations21 Apr 2016 T. Tony Cai, Tengyuan Liang, Alexander Rakhlin

In this paper, we study detection and fast reconstruction of the celebrated Watts-Strogatz (WS) small-world random graph model (Watts & Strogatz, 1998), which aims to describe real-world complex networks that exhibit both high clustering and short average path lengths.

Clustering
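
A minimal sketch of the two defining properties, using networkx (parameter values are illustrative):

    import networkx as nx

    # Watts-Strogatz graph: n nodes on a ring, each wired to its k nearest
    # neighbors, with every edge rewired independently with probability p.
    G = nx.watts_strogatz_graph(n=1000, k=10, p=0.1, seed=0)
    print(nx.average_clustering(G))            # high clustering from the ring structure
    print(nx.average_shortest_path_length(G))  # short paths from the random shortcuts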

Inference via Message Passing on Partially Labeled Stochastic Block Models

no code implementations22 Mar 2016 T. Tony Cai, Tengyuan Liang, Alexander Rakhlin

We study the community detection and recovery problem in partially-labeled stochastic block models (SBM).

Community Detection
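
A minimal sketch of the partially-labeled setting: an SBM with a fraction of revealed labels, followed by one naive majority-vote round (a placeholder for the paper's message passing, with illustrative parameters):

    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(0)
    G = nx.stochastic_block_model([250, 250], [[0.05, 0.01], [0.01, 0.05]], seed=0)
    truth = np.array([0] * 250 + [1] * 250)
    labels = np.full(500, -1)
    revealed = rng.random(500) < 0.1             # 10% of labels are revealed
    labels[revealed] = truth[revealed]

    # One round: each unlabeled node takes the majority label among its
    # labeled neighbors (nodes with no labeled neighbor stay unlabeled).
    new_labels = labels.copy()
    for v in G.nodes:
        if labels[v] == -1:
            nbr = [labels[u] for u in G[v] if labels[u] != -1]
            if nbr:
                new_labels[v] = int(np.mean(nbr) > 0.5)
    print((new_labels[~revealed] == truth[~revealed]).mean())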

Learning with Square Loss: Localization through Offset Rademacher Complexity

no code implementations21 Feb 2015 Tengyuan Liang, Alexander Rakhlin, Karthik Sridharan

We consider regression with square loss and general classes of functions without the boundedness assumption.

regression
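
The localization device behind this, the offset Rademacher complexity, takes the form (stated up to constants; see the paper for the exact definition):

$$\mathcal{R}^{\mathrm{off}}_{n}(\mathcal{F}, c) \;=\; \mathbb{E}_{\epsilon} \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \Big[ \epsilon_i f(x_i) - c\, f(x_i)^2 \Big],$$

where the negative quadratic offset supplies the localization that a boundedness assumption would otherwise provide.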

Computational and Statistical Boundaries for Submatrix Localization in a Large Noisy Matrix

no code implementations6 Feb 2015 T. Tony Cai, Tengyuan Liang, Alexander Rakhlin

The second threshold, $\mathsf{SNR}_s$, captures the statistical boundary, below which no method can succeed with probability going to one in the minimax sense.

Computational Efficiency

Escaping the Local Minima via Simulated Annealing: Optimization of Approximately Convex Functions

no code implementations28 Jan 2015 Alexandre Belloni, Tengyuan Liang, Hariharan Narayanan, Alexander Rakhlin

We consider the problem of optimizing an approximately convex function over a bounded convex set in $\mathbb{R}^n$ using only function evaluations.
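
A minimal sketch of simulated annealing using only function evaluations, with a Metropolis acceptance rule (the proposal, cooling schedule, and test function are illustrative assumptions, not the paper's algorithm):

    import numpy as np

    def simulated_annealing(f, x0, radius=0.1, T0=1.0, decay=0.999, n_steps=20_000, seed=0):
        """Zeroth-order annealing: propose a nearby point and accept it with
        probability exp(-(f(y) - f(x)) / T); only f-values are ever used."""
        rng = np.random.default_rng(seed)
        x, fx, T = np.array(x0, dtype=float), f(x0), T0
        best, fbest = x.copy(), fx
        for _ in range(n_steps):
            y = np.clip(x + radius * rng.normal(size=x.shape), -1.0, 1.0)  # stay in the set
            fy = f(y)
            if fy <= fx or rng.random() < np.exp(-(fy - fx) / T):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x.copy(), fx
            T *= decay                             # cool the temperature
        return best, fbest

    # Approximately convex: a convex quadratic plus a small oscillatory perturbation.
    f = lambda x: float(np.sum(x ** 2) + 0.05 * np.sin(20 * x).sum())
    print(simulated_annealing(f, np.full(3, 0.8)))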

Geometric Inference for General High-Dimensional Linear Inverse Problems

no code implementations17 Apr 2014 T. Tony Cai, Tengyuan Liang, Alexander Rakhlin

This paper presents a unified geometric framework for the statistical analysis of a general ill-posed linear inverse model which includes as special cases noisy compressed sensing, sign vector recovery, trace regression, orthogonal matrix estimation, and noisy matrix completion.

Matrix Completion, regression +2

On Zeroth-Order Stochastic Convex Optimization via Random Walks

no code implementations11 Feb 2014 Tengyuan Liang, Hariharan Narayanan, Alexander Rakhlin

The method is based on a random walk (the Ball Walk) on the epigraph of the function.
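
A minimal sketch of one Ball Walk move on the epigraph $\{(x, t) : x \in K,\ f(x) \le t\}$, here with $K$ the unit ball and a truncation at $t_{\max}$ (the set, radius, and truncation are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    def ball_walk_step(z, f, radius=0.1, t_max=10.0):
        """Propose a uniform point in a small ball around z = (x, t); accept it
        only if it stays inside the (truncated) epigraph, otherwise stay put."""
        d = z.shape[0]
        u = rng.normal(size=d)
        u = u / np.linalg.norm(u) * rng.random() ** (1.0 / d)  # uniform in the unit d-ball
        w = z + radius * u
        x, t = w[:-1], w[-1]
        if np.linalg.norm(x) <= 1.0 and f(x) <= t <= t_max:    # inside the epigraph?
            return w
        return z

    f = lambda x: float(np.sum(x ** 2))
    z = np.array([0.0, 0.0, 1.0])                              # a point (x, t) with f(x) <= t
    for _ in range(1000):
        z = ball_walk_step(z, f)
    print(z)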
