1 code implementation • 12 Oct 2023 • Qingliang Fan, Zijian Guo, Ziwei Mei, Cun-Hui Zhang
In this paper, we propose new estimation and inference procedures for nonparametric treatment effect functions with endogeneity and potentially high-dimensional covariates.
1 code implementation • NeurIPS 2023 • Licong Lin, Mufang Ying, Suvrojit Ghosh, Koulik Khamaru, Cun-Hui Zhang
Even in linear models, the Ordinary Least Squares (OLS) estimator may fail to exhibit asymptotic normality for single coordinate estimation and have inflated error.
1 code implementation • NeurIPS 2023 • Mufang Ying, Koulik Khamaru, Cun-Hui Zhang
Sequential data collection has emerged as a widely adopted technique for enhancing the efficiency of data gathering processes.
no code implementations • NeurIPS 2021 • Zhixin Zhou, Fan Zhou, Ping Li, Cun-Hui Zhang
We show that the performance of estimating the connectivity matrix $M$ depends on the sparsity of the graph.
no code implementations • 10 Aug 2021 • Yuefeng Han, Cun-Hui Zhang
It is designed to improve the alternating least squares estimator and other forms of high-order orthogonal iteration for tensors with low or moderately high CP ranks, and it is guaranteed to converge rapidly when the error of any given initial estimator is bounded by a small constant.
no code implementations • 8 Jul 2021 • Pierre C Bellec, Yiwei Shen, Cun-Hui Zhang
This paper develops asymptotic normality results for individual coordinates of robust M-estimators with convex penalty in high dimensions, where the dimension $p$ is at most of the same order as the sample size $n$, i.e., $p/n\le\gamma$ for some fixed constant $\gamma>0$.
no code implementations • NeurIPS 2019 • Ping Li, Xiaoyun Li, Cun-Hui Zhang
Jaccard similarity is widely used as a distance measure in many machine learning and search applications.
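For context on the similarity measure this line of work studies, here is a minimal, self-contained sketch of exact Jaccard similarity and its classical minwise-hashing estimator (the affine hash family and the prime modulus are illustrative choices, not taken from the paper):

```python
import random

def jaccard(a, b):
    """Exact Jaccard similarity |A ∩ B| / |A ∪ B| of two sets."""
    return len(a & b) / len(a | b)

def minhash_estimate(a, b, k=500, seed=0):
    """Estimate Jaccard similarity with k random hash functions.

    Each hash h_i(x) = (c_i * x + d_i) % P stands in for a random
    permutation; the probability that the minima of A and B under h_i
    coincide equals their Jaccard similarity, so the match rate over
    k hashes is an unbiased estimate.
    """
    P = 2_147_483_647  # large prime (2^31 - 1)
    rng = random.Random(seed)
    coeffs = [(rng.randrange(1, P), rng.randrange(0, P)) for _ in range(k)]
    matches = 0
    for c, d in coeffs:
        min_a = min((c * x + d) % P for x in a)
        min_b = min((c * x + d) % P for x in b)
        matches += (min_a == min_b)
    return matches / k
```

With $k=500$ hashes the standard error of the estimate is about $\sqrt{J(1-J)/k}\approx 0.02$ for moderate $J$.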
no code implementations • 24 Feb 2019 • Pierre C. Bellec, Cun-Hui Zhang
This modification takes the form of a degrees-of-freedom adjustment that accounts for the dimension of the model selected by Lasso.
no code implementations • 14 Nov 2017 • Dong Xia, Ming Yuan, Cun-Hui Zhang
To fill this void, in this article we characterize the fundamental statistical limits of noisy tensor completion by establishing minimax optimal rates of convergence for estimating a $k$th-order low-rank tensor under the general $\ell_p$ ($1\le p\le 2$) norm. These rates suggest significant room for improvement over existing approaches.
no code implementations • 1 Aug 2016 • Ping Li, Cun-Hui Zhang
We prove the theoretical limit of GMM and establish consistency, assuming that the data follow an elliptical distribution, a very general family of distributions that includes the multivariate $t$-distribution as a special case.
no code implementations • 10 Jun 2016 • Ming Yuan, Cun-Hui Zhang
In this paper, we investigate the sample size requirement for a general class of nuclear norm minimization methods for higher order tensor completion.
no code implementations • 11 Aug 2014 • Ping Li, Cun-Hui Zhang
We have developed two estimators: (i) the tie estimator, and (ii) the absolute minimum estimator.
no code implementations • 7 May 2014 • Ming Yuan, Cun-Hui Zhang
To establish our results, we develop a series of algebraic and probabilistic techniques, such as a characterization of the subdifferential of the tensor nuclear norm and concentration inequalities for tensor martingales, which may be of independent interest and could be useful in other tensor-related problems.
no code implementations • 31 Dec 2013 • Ping Li, Cun-Hui Zhang, Tong Zhang
In this paper, we adopt very sparse Compressed Counting for nonnegative signal recovery.
no code implementations • 3 Oct 2013 • Ping Li, Cun-Hui Zhang, Tong Zhang
In particular, when $p\to 0$ the required number of measurements is essentially $M=K\log N$, where $K$ is the number of nonzero coordinates of the signal.
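As a purely illustrative reading of that scaling (the constant factor is a placeholder, not from the paper), the measurement budget grows linearly in the sparsity $K$ and only logarithmically in the ambient dimension $N$:

```python
import math

def measurements_needed(K, N, c=1.0):
    """Illustrative measurement count M = ceil(c * K * log N) suggested by
    the p -> 0 regime; the constant c is a placeholder assumption."""
    return math.ceil(c * K * math.log(N))
```

For example, recovering a $K=10$-sparse nonnegative signal in dimension $N=10^6$ would need on the order of $10\log(10^6)\approx 138$ measurements rather than $10^6$.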
no code implementations • 24 Sep 2013 • Zhao Ren, Tingni Sun, Cun-Hui Zhang, Harrison H. Zhou
This paper considers a fundamental question: When is it possible to estimate low-dimensional parameters at parametric square-root rate in a large Gaussian graphical model?
no code implementations • NeurIPS 2012 • Ping Li, Art Owen, Cun-Hui Zhang
While minwise hashing is promising for large-scale learning in massive binary data, the preprocessing cost is prohibitive, as it requires applying (e.g.) $k=500$ permutations to the data.
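One family of remedies replaces the $k$ passes with a single permutation whose hashed range is split into $k$ bins, keeping the minimum per bin. The sketch below illustrates that general binning idea only; the affine hash, bin layout, and empty-bin handling are illustrative assumptions, not the paper's exact construction:

```python
import random

def one_permutation_sketch(s, k=500, universe=2**20, seed=0):
    """Sketch a set with ONE hash pass split into k equal-width bins.

    A single permutation of the universe is simulated by a random affine
    hash (odd multiplier, so it stays injective on small inputs); each
    hashed value is routed to a bin and the bin minimum is kept.
    Cost: one pass over the data instead of k passes.
    """
    P = 2_147_483_647
    rng = random.Random(seed)
    c, d = rng.randrange(1, P) | 1, rng.randrange(0, P)
    width = universe // k + 1
    bins = [None] * k  # None marks an empty bin
    for x in s:
        h = (c * x + d) % universe  # simulated permutation position
        b = h // width
        if bins[b] is None or h < bins[b]:
            bins[b] = h
    return bins

def estimate_from_sketches(sa, sb):
    """Fraction of jointly non-empty bins whose minima agree."""
    both = [(u, v) for u, v in zip(sa, sb) if u is not None and v is not None]
    if not both:
        return 0.0
    return sum(u == v for u, v in both) / len(both)
```

For dense enough sets, the matched-bin rate tracks the Jaccard similarity while the preprocessing cost drops from $k$ scans of the data to one.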
no code implementations • NeurIPS 2012 • Tingni Sun, Cun-Hui Zhang
This paper concerns the problem of matrix completion, which is to estimate a matrix from observations in a small subset of indices.
no code implementations • NeurIPS 2012 • Ping Li, Cun-Hui Zhang
Methods for efficiently estimating the Shannon entropy of data streams have important applications in learning, data mining, and network anomaly detection (e.g., detecting DDoS attacks).
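As a point of reference for what the streaming estimators approximate, the empirical Shannon entropy of a fully stored stream is straightforward to compute (this naive baseline is not the paper's method; it is exactly the count table that becomes infeasible at stream scale):

```python
import math
from collections import Counter

def shannon_entropy(stream):
    """Empirical Shannon entropy (in bits) of a sequence of symbols,
    H = -sum_i p_i * log2(p_i) over the observed symbol frequencies."""
    counts = Counter(stream)
    n = len(stream)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Sketch-based methods aim to match this quantity in one pass with memory sublinear in the number of distinct symbols.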
no code implementations • 29 Apr 2012 • Jiashun Jin, Cun-Hui Zhang, Qi Zhang
Compared to $m$-variate brute-force screening, which has a computational cost of $p^m$, GS has a screening cost of only $p$ (up to multi-$\log(p)$ factors).
no code implementations • 13 Feb 2012 • Tingni Sun, Cun-Hui Zhang
The penalty level of the scaled Lasso for each column is completely determined by data via convex minimization, without using cross-validation.
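A hedged numerical sketch of the scaled-Lasso idea: alternate a Lasso fit with a noise-level update, so the penalty $\lambda = \lambda_0\hat\sigma$ is set by the data alone. The ISTA inner solver and the universal level $\lambda_0=\sqrt{2\log p/n}$ below are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize (1/2n)||y - Xb||^2 + lam*||b||_1 by proximal gradient."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n  # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = -X.T @ (y - X @ b) / n
        b = soft_threshold(b - grad / L, lam / L)
    return b

def scaled_lasso(X, y, n_outer=10):
    """Jointly estimate the coefficients and the noise level sigma:
    each outer step fits a Lasso at penalty lam0 * sigma, then
    re-estimates sigma from the residuals. No cross-validation."""
    n, p = X.shape
    lam0 = np.sqrt(2.0 * np.log(p) / n)
    sigma = np.linalg.norm(y) / np.sqrt(n)  # crude initial noise level
    b = np.zeros(p)
    for _ in range(n_outer):
        b = lasso_ista(X, y, lam0 * sigma)
        sigma = np.linalg.norm(y - X @ b) / np.sqrt(n)
    return b, sigma
```

The fixed point of this alternation is the joint minimizer over $(b,\sigma)$, which is what makes the penalty level data-driven.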
2 code implementations • 25 Feb 2010 • Cun-Hui Zhang
We propose MC+, a fast, continuous, nearly unbiased and accurate method of penalized variable selection in high-dimensional linear regression.
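For intuition about how a penalty can be both continuous and nearly unbiased, here is a sketch of a minimax concave penalty (MCP) and the resulting "firm" thresholding rule in the one-dimensional orthonormal-design case; the closed form below is the standard textbook solution, and its exact normalization should be treated as an assumption rather than the paper's definition:

```python
import math

def mcp_penalty(t, lam, gamma):
    """Minimax concave penalty: behaves like the l1 penalty near zero,
    then tapers off to the constant gamma*lam^2/2, so large
    coefficients incur no additional shrinkage."""
    a = abs(t)
    if a <= gamma * lam:
        return lam * a - t * t / (2.0 * gamma)
    return 0.5 * gamma * lam * lam

def firm_threshold(z, lam, gamma):
    """One-dimensional MCP solution (orthonormal design, gamma > 1):
    rescaled soft-thresholding below gamma*lam, the identity above it.
    The two pieces meet at |z| = gamma*lam, so the rule is continuous,
    and it is exactly unbiased for |z| > gamma*lam."""
    if abs(z) <= gamma * lam:
        return math.copysign(max(abs(z) - lam, 0.0), z) * gamma / (gamma - 1.0)
    return z
```

As $\gamma\to\infty$ the rule approaches soft thresholding (Lasso); small $\gamma$ pushes it toward hard thresholding, which is one way to see the bias/continuity trade-off MC+ navigates.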
Statistics Theory 62J05, 62J07 (Primary), 62H12, 62H25 (Secondary)