no code implementations • 23 Feb 2024 • Zhuojun Quan, Yuanyuan Lin, Kani Chen, Wen Yu
We find that, with unlabeled data available, the intercept parameter can be identified in the semi-supervised learning setting.
no code implementations • 27 Jun 2023 • Shanshan Song, Tong Wang, Guohao Shen, Yuanyuan Lin, Jian Huang
Our approach simultaneously estimates a regression function and a conditional generator using a generative learning framework, where the conditional generator is a function that generates samples from a conditional distribution.
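To illustrate the idea of a conditional generator, here is a toy sketch (not the authors' estimator): when the conditional law of Y given X = x is known to be N(2x, 1), the map G(x, eta) = 2x + eta with standard normal noise eta samples exactly from it, and averaging generated draws recovers the regression function E[Y | X = x].

```python
import numpy as np

# Toy illustration (not the authors' estimator): a conditional generator
# G(x, eta) maps a covariate x and noise eta to a draw from the
# conditional law of Y given X = x. Here the true law is Y|X=x ~ N(2x, 1),
# so G(x, eta) = 2x + eta with eta ~ N(0, 1) generates exactly from it.
def conditional_generator(x, eta):
    return 2.0 * x + eta

def regression_via_generator(x, n_draws=100_000, seed=0):
    # Monte Carlo estimate of the regression function m(x) = E[Y | X = x]
    # obtained by averaging draws from the conditional generator.
    rng = np.random.default_rng(seed)
    eta = rng.standard_normal(n_draws)
    return conditional_generator(x, eta).mean()
```

In the paper both the generator and the regression function are learned jointly from data; this sketch only shows how a fitted generator would be used once available.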
no code implementations • 1 May 2023 • Guohao Shen, Yuling Jiao, Yuanyuan Lin, Jian Huang
We establish error bounds for simultaneously approximating $C^s$ smooth functions and their derivatives using RePU-activated deep neural networks.
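For concreteness, the rectified power unit (RePU) of degree p is sigma_p(x) = max(0, x)^p; for p >= 2 it is smoother than ReLU, which is what makes simultaneous approximation of a function and its derivatives possible. A minimal sketch of the activation and its exact derivative:

```python
import numpy as np

# Rectified power unit (RePU) of degree p: sigma_p(x) = max(0, x)**p.
# For p >= 2 the unit is continuously differentiable (p = 2 is the ReQU
# unit), unlike ReLU, whose derivative jumps at 0.
def repu(x, p=2):
    return np.maximum(x, 0.0) ** p

def repu_derivative(x, p=2):
    # Exact derivative: p * max(0, x)**(p - 1), valid for p >= 2.
    return p * np.maximum(x, 0.0) ** (p - 1)
```

A quick check: for p = 2 and x > 0 the central finite difference of `repu` matches `repu_derivative` exactly, since the function is locally quadratic.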
no code implementations • 18 Oct 2022 • Wenlu Tang, Guohao Shen, Yuanyuan Lin, Jian Huang
We also derive non-asymptotic upper bounds on the difference in length between the proposed non-crossing conformal prediction interval and the theoretical oracle prediction interval.
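The split-conformal step underlying such intervals can be sketched as follows (a generic conformalized-quantile-regression recipe, not the authors' exact procedure): given fitted lower and upper quantile functions and a held-out calibration set, a single conformity score widens the band just enough for finite-sample coverage.

```python
import numpy as np

# A minimal split-conformal sketch in the spirit of conformalized quantile
# regression (not the authors' exact procedure). q_lo and q_hi are fitted
# lower/upper conditional quantile functions; the conformity score
# max(q_lo(x) - y, y - q_hi(x)) measures how far y falls outside the band.
def conformal_interval(q_lo, q_hi, x_cal, y_cal, x_new, alpha=0.1):
    scores = np.maximum(q_lo(x_cal) - y_cal, y_cal - q_hi(x_cal))
    n = len(y_cal)
    # Finite-sample-corrected empirical quantile of the calibration scores.
    k = int(np.ceil((n + 1) * (1 - alpha)))
    q_hat = np.sort(scores)[min(k, n) - 1]
    # Widen (or shrink, if q_hat < 0) the fitted band by q_hat.
    return q_lo(x_new) - q_hat, q_hi(x_new) + q_hat
```

With exchangeable calibration and test points, the returned interval covers a new response with probability at least 1 - alpha, whatever the quality of the initial quantile fits.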
no code implementations • 21 Jul 2022 • Siming Zheng, Yuanyuan Lin, Jian Huang
We propose a mutual information-based sufficient representation learning (MSRL) approach, which uses the variational formulation of the mutual information and leverages the approximation power of deep neural networks.
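One standard variational formulation of mutual information is the Donsker-Varadhan lower bound, I(X; Z) >= E_{p(x,z)}[T(x,z)] - log E_{p(x)p(z)}[exp(T(x,z))], maximized over critics T. The sketch below evaluates this bound with a fixed hand-picked critic rather than a trained deep network, so it illustrates the objective, not the authors' MSRL implementation:

```python
import numpy as np

# Donsker-Varadhan variational lower bound on mutual information:
#   I(X; Z) >= E_{p(x,z)}[T(x,z)] - log E_{p(x)p(z)}[exp(T(x,z))].
# In MSRL-style methods the critic T would be a deep network trained to
# tighten this bound; here T is fixed, so the value is only a lower bound.
def dv_lower_bound(T, x, z, rng):
    joint = T(x, z).mean()
    z_shuffled = rng.permutation(z)  # shuffling breaks the dependence
    marginal = np.log(np.exp(T(x, z_shuffled)).mean())
    return joint - marginal
```

On dependent data the bound is strictly positive even for a crude critic, whereas for independent x and z it concentrates near zero.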
no code implementations • 21 Jul 2022 • Guohao Shen, Yuling Jiao, Yuanyuan Lin, Joel L. Horowitz, Jian Huang
We propose a penalized nonparametric approach to estimating the quantile regression process (QRP) in a nonseparable model using rectifier quadratic unit (ReQU) activated deep neural networks and introduce a novel penalty function to enforce non-crossing of quantile regression curves.
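A non-crossing penalty of the kind described can be sketched as follows (a generic hinge-style formulation, not necessarily the paper's exact penalty): evaluate the fitted quantile curves on a grid and charge every point where a lower-level curve exceeds a higher-level one.

```python
import numpy as np

# A hedged sketch of a non-crossing penalty (not necessarily the paper's
# exact form). q_curves has shape (n_taus, n_points): row i holds the
# estimated quantile curve at level tau_i, with taus in increasing order.
# Adjacent rows should satisfy q_{tau_i}(x) <= q_{tau_{i+1}}(x); any
# violation contributes its (positive) size to the penalty.
def crossing_penalty(q_curves):
    violations = np.maximum(q_curves[:-1] - q_curves[1:], 0.0)
    return violations.sum()
```

Adding such a term to the quantile-regression loss drives the estimated curves toward monotonicity in the quantile level; the penalty vanishes exactly when no curves cross on the grid.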
no code implementations • 1 May 2021 • Guohao Shen, Yuling Jiao, Yuanyuan Lin, Jian Huang
To establish these results, we derive an upper bound on the covering number of the class of general convolutional neural networks with a bias term in each convolutional layer, and derive new results on the approximation power of CNNs for uniformly continuous target functions.