no code implementations • 31 Mar 2024 • Yuan Gao, Jian Huang, Yuling Jiao, Shurong Zheng
We establish non-asymptotic error bounds for the distribution estimator based on CNFs, in terms of the Wasserstein-2 distance.
no code implementations • 26 Feb 2024 • Tong Wang, Jian Huang, Shuangge Ma
Deep networks are increasingly applied to a wide variety of data, including data with high-dimensional predictors.
1 code implementation • 14 Feb 2024 • Huizhi Zhu, Wenxia Xu, Jian Huang, Jiaxin Li
Executed on a GPU, our two-stage method meets real-time computation requirements.
no code implementations • 9 Dec 2023 • Ding Huang, Jian Huang, Ting Li, Guohao Shen
We propose a conditional stochastic interpolation (CSI) approach to learning conditional distributions.
no code implementations • 20 Nov 2023 • Yuan Gao, Jian Huang, Yuling Jiao
Gaussian denoising has emerged as a powerful principle for constructing simulation-free continuous normalizing flows for generative modeling.
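The Gaussian-denoising principle behind such flows can be illustrated on a toy empirical distribution: the score of the Gaussian-smoothed density has a closed form via the denoising (Tweedie) identity. A minimal numpy sketch, with a made-up two-point data distribution and smoothing level (the paper builds simulation-free continuous normalizing flows from this principle; none of the names below come from it):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-dimensional data distribution: an even mixture of two points.
data = rng.choice([-2.0, 2.0], size=5000)
sigma = 0.5  # Gaussian smoothing level

def smoothed_score(x, data, sigma):
    """Score of the sigma-smoothed empirical density via the denoising
    (Tweedie) identity: grad log p_sigma(x) = (E[x0 | x] - x) / sigma**2,
    where the posterior mean E[x0 | x] is a softmax-weighted average of
    the data points."""
    logw = -(x - data) ** 2 / (2.0 * sigma ** 2)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return (np.sum(w * data) - x) / sigma ** 2

s_at_mode = smoothed_score(2.0, data, sigma)   # near a mode: roughly zero
s_between = smoothed_score(1.0, data, sigma)   # points toward the mode at 2
```

The score field vanishes at the modes and pushes intermediate points toward them, which is exactly the vector-field behavior a denoising-based flow exploits.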
1 code implementation • 1 Nov 2023 • Xingru Huang, Yihao Guo, Jian Huang, Zhi Li, Tianyun Zhang, Kunyan Cai, Gaopeng Huang, WenHao Chen, Zhaoyang Xu, Liangqiong Qu, Ji Hu, Tinyu Wang, Shaowei Jiang, Chenggang Yan, Yaoqi Sun, Xin Ye, Yaqi Wang
Macular hole diagnosis and treatment rely heavily on spatial and quantitative data, yet the scarcity of such data has impeded the progress of deep learning techniques for effective segmentation and real-time 3D reconstruction.
1 code implementation • 13 Oct 2023 • Haoyang Zhang, Yirui Eric Zhou, Yuqi Xue, Yiqi Liu, Jian Huang
Based on this unified GPU memory and storage architecture, G10 utilizes compiler techniques to characterize the tensor behaviors in deep learning workloads.
no code implementations • 2 Sep 2023 • Changyu Liu, Yuling Jiao, Junhui Wang, Jian Huang
For the quadratic loss in nonparametric regression, we show that the adversarial excess risk bound can be improved over those for a general loss.
no code implementations • 27 Jun 2023 • Shanshan Song, Tong Wang, Guohao Shen, Yuanyuan Lin, Jian Huang
Our approach simultaneously estimates a regression function and a conditional generator using a generative learning framework, where a conditional generator is a function that can generate samples from a conditional distribution.
no code implementations • 1 May 2023 • Guohao Shen, Yuling Jiao, Yuanyuan Lin, Jian Huang
We establish error bounds for simultaneously approximating $C^s$ smooth functions and their derivatives using RePU-activated deep neural networks.
no code implementations • 18 Oct 2022 • Wenlu Tang, Guohao Shen, Yuanyuan Lin, Jian Huang
We also derive non-asymptotic upper bounds for the difference of the lengths between the proposed non-crossing conformal prediction interval and the theoretically oracle prediction interval.
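The conformal step on top of two quantile curves can be sketched in a few lines. This is a generic split-conformal construction with hypothetical plug-in quantile curves, not the paper's deep non-crossing estimates; the calibration quantile `qhat` widens the band just enough for finite-sample coverage:

```python
import numpy as np

rng = np.random.default_rng(9)

n = 2000
x = rng.uniform(0.0, 1.0, n)
y = x + 0.1 * rng.standard_normal(n)

# Hypothetical plug-in quantile curves (the paper instead uses deep
# non-crossing quantile regression estimates).
def q_lo(t): return t - 0.15
def q_hi(t): return t + 0.15

# Conformity score: signed distance of y outside [q_lo, q_hi].
score = np.maximum(q_lo(x) - y, y - q_hi(x))

alpha = 0.1
k = int(np.ceil((n + 1) * (1.0 - alpha)))
qhat = np.sort(score)[k - 1]            # calibration quantile

# Conformalized interval at any point t: [q_lo(t) - qhat, q_hi(t) + qhat].
covers = (q_lo(x) - qhat <= y) & (y <= q_hi(x) + qhat)
```

For clarity the coverage check above reuses the calibration data; the finite-sample guarantee applies to fresh test points drawn exchangeably with the calibration set.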
no code implementations • 15 Aug 2022 • Zijun Guo, Wenwen Meng, Dongfeng Shi, Linbin Zha, Wei Yang, Jian Huang, Yafeng Chen, Yingjian Wang
When imaging moving objects, single-pixel imaging produces motion blur.
no code implementations • 21 Jul 2022 • Siming Zheng, Yuanyuan Lin, Jian Huang
We propose a mutual information-based sufficient representation learning (MSRL) approach, which uses the variational formulation of the mutual information and leverages the approximation power of deep neural networks.
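The variational formulation referenced here can be illustrated with the Donsker-Varadhan lower bound on mutual information, evaluated with a fixed hand-picked critic on a bivariate Gaussian whose true MI is known. This is a sketch of the bound only; in MSRL the critic is a deep network trained to maximize it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
rho = 0.8
# Jointly Gaussian (X, Y) with correlation rho; true MI = -0.5*log(1 - rho^2).
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)

def dv_bound(critic, x, y, shuffle_rng):
    """Donsker-Varadhan variational lower bound on mutual information:
    I(X;Y) >= E_joint[T(X,Y)] - log E_{p(X)p(Y)}[exp(T(X,Y))].
    Samples from the product of marginals are obtained by shuffling y."""
    joint = critic(x, y).mean()
    marg = critic(x, shuffle_rng.permutation(y))
    return joint - np.log(np.mean(np.exp(marg)))

# A fixed, hand-picked critic T(x, y) = 0.3 * x * y (an assumption for
# illustration); a trained neural critic would tighten the bound.
bound = dv_bound(lambda a, b: 0.3 * a * b, x, y, np.random.default_rng(2))
true_mi = -0.5 * np.log(1.0 - rho ** 2)   # about 0.511 nats
```

Any critic yields a valid lower bound; maximizing over a neural critic class is what turns this into a representation-learning objective.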
no code implementations • 21 Jul 2022 • Guohao Shen, Yuling Jiao, Yuanyuan Lin, Joel L. Horowitz, Jian Huang
We propose a penalized nonparametric approach to estimating the quantile regression process (QRP) in a nonseparable model using rectifier quadratic unit (ReQU) activated deep neural networks and introduce a novel penalty function to enforce non-crossing of quantile regression curves.
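The shape of such a non-crossing penalty can be sketched with two quantile levels: pinball losses at each level plus a hinge term that is zero exactly when the upper curve stays above the lower one. A minimal numpy sketch with made-up constants; the paper's penalty acts on ReQU-network quantile curves over a whole process of levels:

```python
import numpy as np

def pinball(u, tau):
    """Quantile (check) loss at level tau."""
    return np.mean(np.maximum(tau * u, (tau - 1.0) * u))

def qrp_objective(q_lo, q_hi, y, tau_lo=0.1, tau_hi=0.9, lam=10.0):
    """Pinball losses at two levels plus a hinge penalty that vanishes
    iff the higher-level curve lies above the lower one (non-crossing).
    A sketch of the idea only, not the paper's exact penalty."""
    loss = pinball(y - q_lo, tau_lo) + pinball(y - q_hi, tau_hi)
    crossing = np.mean(np.maximum(q_lo - q_hi, 0.0))
    return loss + lam * crossing

y = np.linspace(-1.0, 1.0, 101)
good = qrp_objective(np.full_like(y, -0.8), np.full_like(y, 0.8), y)
bad = qrp_objective(np.full_like(y, 0.8), np.full_like(y, -0.8), y)  # crossed
```

Because the penalty is zero on any non-crossing pair, it changes the optimum only when the unconstrained quantile fits would cross.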
no code implementations • CVPR 2022 • Xi Guo, Wei Wu, Dongliang Wang, Jing Su, Haisheng Su, Weihao Gan, Jian Huang, Qin Yang
In this paper, we take an early step towards video representation learning of human actions with the help of large-scale synthetic videos, particularly for human motion representation enhancement.
1 code implementation • 19 Dec 2021 • Shiao Liu, Xingyu Zhou, Yuling Jiao, Jian Huang
The proposed approach uses a conditional generator to transform a known distribution to the target conditional distribution.
no code implementations • NeurIPS 2021 • Shiao Liu, Yunfei Yang, Jian Huang, Yuling Jiao, Yang Wang
Our results are also applicable to the Wasserstein bidirectional GAN if the target distribution is assumed to have bounded support.
no code implementations • 17 Oct 2021 • Daixuan Li, Jian Huang
Thanks to mature manufacturing techniques, solid-state drives (SSDs) are highly customizable for today's applications, which brings opportunities to further improve their storage performance and resource utilization.
no code implementations • 6 Oct 2021 • Xingdong Feng, Yuan Gao, Jian Huang, Yuling Jiao, Xu Liu
We propose a relative entropy gradient sampler (REGS) for sampling from unnormalized distributions.
no code implementations • 27 May 2021 • Jian Huang, Yuling Jiao, Zhen Li, Shiao Liu, Yang Wang, Yunfei Yang
This paper studies how well generative adversarial networks (GANs) learn probability distributions from finite samples.
no code implementations • 1 May 2021 • Guohao Shen, Yuling Jiao, Yuanyuan Lin, Jian Huang
To establish these results, we derive an upper bound for the covering number for the class of general convolutional neural networks with a bias term in each convolutional layer, and derive new results on the approximation power of CNNs for any uniformly-continuous target functions.
no code implementations • 5 Jan 2021 • Ying Hu, Jian Huang, Jin-Feng Huang, Qiong-Tao Xie, Jie-Qiao Liao
We study the dynamic sensitivity of the quantum Rabi model, which exhibits quantum criticality in the finite-component-system case.
Quantum Physics
no code implementations • 1 Jan 2021 • Xu Liao, Jin Liu, Tianwen Wen, Yuling Jiao, Jian Huang
At the population level, we formulate the ideal representation learning task as that of finding a nonlinear map that minimizes the sum of losses characterizing conditional independence (with RKHS) and disentanglement (with GAN).
Ranked #4 on Image Classification on Kuzushiji-MNIST
no code implementations • 1 Jan 2021 • Jian Huang, Yuling Jiao, Xu Liao, Jin Liu, Zhou Yu
We provide strong statistical guarantees for the learned representation by establishing an upper bound on the excess error of the objective function and show that it reaches the nonparametric minimax rate under mild conditions.
no code implementations • 17 Dec 2020 • Jin Liu, Yue-Hui Zhou, Jian Huang, Jin-Feng Huang, Jie-Qiao Liao
The realization of multimode optomechanical interactions in the single-photon strong-coupling regime is a desired task in cavity optomechanics, but it remains a challenge in realistic physical systems.
Quantum Physics
no code implementations • 11 Dec 2020 • Yuan Gao, Jian Huang, Yuling Jiao, Jin Liu, Xiliang Lu, Zhijian Yang
The key task in training is the estimation of the density ratios or differences that determine the residual maps.
no code implementations • 30 Oct 2020 • Lubin Meng, Jian Huang, Zhigang Zeng, Xue Jiang, Shan Yu, Tzyy-Ping Jung, Chin-Teng Lin, Ricardo Chavarriaga, Dongrui Wu
Test samples with the backdoor key will then be classified into the target class specified by the attacker.
no code implementations • Interspeech 2020 • Zheng Lian, JianHua Tao, Bin Liu, Jian Huang, Zhanlei Yang, Rongjun Li
Emotion recognition remains a complex task due to speaker variations and low-resource training samples.
Ranked #1 on Speech Emotion Recognition on IEMOCAP (using extra training data)
1 code implementation • 3 Jul 2020 • Dongrui Wu, Xue Jiang, Ruimin Peng, Wanzeng Kong, Jian Huang, Zhigang Zeng
Transfer learning (TL) has been widely used in motor imagery (MI) based brain-computer interfaces (BCIs) to reduce the calibration effort for a new subject, and demonstrated promising performance.
1 code implementation • 10 Jun 2020 • Jian Huang, Yuling Jiao, Xu Liao, Jin Liu, Zhou Yu
We propose a deep dimension reduction approach to learning representations with these characteristics.
no code implementations • 26 May 2020 • Hao Chen, Chang Wang, Jian Huang, Jianxing Gong
In addition, by taking advantage of the condition representation and matching mechanism of XCS, the heuristic policies and the opponent model can provide guidance for situations with similar feature representations.
1 code implementation • 21 Mar 2020 • Changming Zhao, Dongrui Wu, Jian Huang, Ye Yuan, Hai-Tao Zhang, Ruimin Peng, Zhenhua Shi
Bootstrap aggregating (Bagging) and boosting are two popular ensemble learning approaches, which combine multiple base learners to generate a composite model for more accurate and more reliable performance.
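The bagging half of that pairing can be sketched with the simplest base learner, a regression stump fit on bootstrap resamples and averaged. Data, base learner, and ensemble size here are illustrative assumptions; the paper's models combine bagging and boosting with stronger base learners:

```python
import numpy as np

rng = np.random.default_rng(10)

def stump_fit(x, y):
    """Fit the best single-split regression stump (the base learner here)."""
    best_sse, best = np.inf, None
    for t in np.quantile(x, np.linspace(0.1, 0.9, 9)):
        left, right = y[x <= t], y[x > t]
        if left.size == 0 or right.size == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best = sse, (t, left.mean(), right.mean())
    return best

def stump_predict(model, x):
    t, lo, hi = model
    return np.where(x <= t, lo, hi)

x = rng.uniform(-1.0, 1.0, 300)
y = x + 0.3 * rng.standard_normal(300)

# Bagging: fit each base learner on a bootstrap resample, then average.
models = []
for _ in range(50):
    idx = rng.integers(0, x.size, x.size)
    models.append(stump_fit(x[idx], y[idx]))
bagged = np.mean([stump_predict(m, x) for m in models], axis=0)
```

Averaging stumps whose split points vary across resamples smooths the single stump's hard step, which is the variance-reduction effect bagging is designed for.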
no code implementations • 7 Feb 2020 • Yuan Gao, Jian Huang, Yuling Jiao, Jin Liu
We then solve the McKean-Vlasov equation numerically using the forward Euler iteration, where the forward Euler map depends on the density ratio (density difference) between the distribution at current iteration and the underlying target distribution.
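A stripped-down instance of that forward Euler iteration: for a Gaussian particle cloud and a standard-normal target, the density ratio (and hence the velocity field) is available in closed form, so the particle update can be written directly. All constants are illustrative, and the analytic ratio stands in for the neural density-ratio estimate the paper actually uses:

```python
import numpy as np

rng = np.random.default_rng(4)

# Particles from a wrong Gaussian N(0, 3^2); target is N(0, 1).
x = 3.0 * rng.standard_normal(5000)
eta = 0.1  # forward Euler step size

for _ in range(200):
    s2 = x.var()  # current particle variance, used for an analytic ratio
    # Velocity field -grad log(p_t / pi) for Gaussian p_t = N(0, s2) and
    # target pi = N(0, 1): equals x * (1/s2 - 1).
    x = x + eta * x * (1.0 / s2 - 1.0)
```

The variance of the cloud contracts toward 1, the fixed point of the Euler map, mirroring how the estimated ratio drives the distribution at the current iteration toward the target.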
no code implementations • 27 Jan 2020 • Jian Huang, Yuling Jiao, Lican Kang, Jin Liu, Yanyan Liu, Xiliang Lu, Yuanyuan Yang
Based on this KKT system, a built-in working set of relatively small size is first determined using the sum of the primal and dual variables generated in the previous iteration; the primal variable is then updated by solving a least-squares problem on the working set, and the dual variable is updated via a closed-form expression.
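The iteration just described can be sketched as a simplified primal-dual active-set loop for the LASSO on made-up data. This is an illustration of the working-set idea only, with hypothetical problem sizes, and omits the paper's safeguards and path/continuation strategy:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 1000, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:2] = [3.0, -2.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lam = 0.5
beta = np.zeros(p)
d = X.T @ (y - X @ beta) / n            # dual variable (scaled correlations)

for _ in range(20):
    # Working set from the KKT system: coordinates where the sum of the
    # primal and dual variables exceeds the threshold lam.
    A = np.abs(beta + d) > lam
    beta_new = np.zeros(p)
    if A.any():
        s = np.sign((beta + d)[A])      # signs fixed by the KKT conditions
        G = X[:, A].T @ X[:, A] / n
        # Primal update: least squares on the working set, sign-shifted.
        beta_new[A] = np.linalg.solve(G, X[:, A].T @ y / n - lam * s)
    d_new = X.T @ (y - X @ beta_new) / n   # dual update, in closed form
    if np.array_equal(beta_new != 0, beta != 0):
        beta, d = beta_new, d_new
        break
    beta, d = beta_new, d_new
```

With a clear signal, the working set stabilizes on the true support after one or two passes, which is why the per-iteration least-squares solve stays small.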
no code implementations • 16 Jan 2020 • Jian Huang, Yuling Jiao, Lican Kang, Jin Liu, Yanyan Liu, Xiliang Lu
Feature selection is important for modeling high-dimensional data, where the number of variables can be much larger than the sample size.
1 code implementation • 9 Jan 2020 • Zhenhua Shi, Dongrui Wu, Jian Huang, Yu-Kai Wang, Chin-Teng Lin
Approaches that preserve only the local data structure, such as locality preserving projections, are usually unsupervised (and hence cannot use label information) and use a fixed similarity graph.
no code implementations • 24 Oct 2019 • Zheng Lian, Jian-Hua Tao, Bin Liu, Jian Huang
The secondary task is to learn a common representation where speaker identities can not be distinguished.
no code implementations • 24 Oct 2019 • Zheng Lian, Jian-Hua Tao, Bin Liu, Jian Huang
Prior works on speech emotion recognition utilize various unsupervised learning approaches to deal with low-resource samples.
no code implementations • 24 Oct 2019 • Zheng Lian, Jian-Hua Tao, Bin Liu, Jian Huang
Different from the emotion recognition in individual utterances, we propose a multimodal learning framework using relation and dependencies among the utterances for conversational emotion analysis.
no code implementations • 23 Oct 2019 • Zheng Lian, Ya Li, Jian-Hua Tao, Jian Huang, Ming-Yue Niu
To sum up, the contributions of this paper lie in two areas: 1) We visualize concerned areas of human faces in emotion recognition; 2) We analyze the contribution of different face areas to different emotions in real-world conditions through experimental analysis.
no code implementations • 23 Oct 2019 • Zheng Lian, Ya Li, Jian-Hua Tao, Jian Huang
It outperforms the baseline system optimized without the contrastive loss function by 1.14% in weighted accuracy and 2.55% in unweighted accuracy.
1 code implementation • 1 Aug 2019 • Yuqi Cui, Jian Huang, Dongrui Wu
Takagi-Sugeno-Kang (TSK) fuzzy systems are flexible and interpretable machine learning models; however, they may not be easily optimized when the data size is large, and/or the data dimensionality is high.
no code implementations • 25 Mar 2019 • Dongrui Wu, Chin-Teng Lin, Jian Huang, Zhigang Zeng
Fuzzy systems have achieved great success in numerous applications.
no code implementations • 11 Nov 2018 • Zheng Lian, Ya Li, Jian-Hua Tao, Jian Huang
I have submitted a new version to arXiv:1910.13806.
no code implementations • 9 Oct 2018 • Jian Huang, Yuling Jiao, Xiliang Lu, Yueyong Shi, Qinglong Yang
We propose a semismooth Newton algorithm for pathwise optimization (SNAP) for the LASSO and Enet in sparse, high-dimensional linear regression.
1 code implementation • 13 Sep 2018 • Zheng Lian, Ya Li, Jian-Hua Tao, Jian Huang
We evaluate our method on the EmotiW 2018 challenge and obtain promising results.
1 code implementation • 8 Aug 2018 • Dongrui Wu, Chin-Teng Lin, Jian Huang
Active learning for regression (ALR) is a methodology to reduce the number of labeled samples, by selecting the most beneficial ones to label, instead of random selection.
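One simple diversity-only selection criterion of the kind ALR methods build on can be sketched in a few lines: start near the pool centroid, then repeatedly pick the point farthest from everything already selected. The pool and budget below are illustrative assumptions, and this is only one of several criteria such methods compare:

```python
import numpy as np

def greedy_sample(X_pool, k):
    """Pick k diverse points to label: start with the point closest to the
    pool centroid, then greedily add the point whose distance to its
    nearest already-selected point is largest (diversity-only selection)."""
    idx = [int(np.argmin(np.linalg.norm(X_pool - X_pool.mean(axis=0), axis=1)))]
    for _ in range(k - 1):
        # Distance from every pool point to its nearest selected point.
        d = np.min(
            np.linalg.norm(X_pool[:, None, :] - X_pool[idx][None, :, :], axis=2),
            axis=1)
        idx.append(int(np.argmax(d)))
    return idx

rng = np.random.default_rng(7)
X_pool = rng.uniform(-1.0, 1.0, size=(200, 2))
chosen = greedy_sample(X_pool, 5)
```

Because already-selected points have zero distance to themselves, the argmax always returns a new point, so the k labels are spent on well-spread samples instead of random ones.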
no code implementations • 8 Aug 2018 • Dongrui Wu, Jian Huang
Acquisition of labeled training samples for affective computing is usually costly and time-consuming, as affects are intrinsically subjective, subtle and uncertain, and hence multiple human assessors are needed to evaluate each affective sample.
no code implementations • 18 Oct 2017 • Mohammad Raji, Alok Hota, Robert Sisneros, Peter Messmer, Jian Huang
In this work, we pose the question of whether, by considering qualitative information such as a sample target image as input, one can produce a rendered image of scientific data that is similar to the target.
no code implementations • 9 Sep 2015 • Congrui Yi, Jian Huang
We propose an algorithm, semismooth Newton coordinate descent (SNCD), for the elastic-net penalized Huber loss regression and quantile regression in high dimensional settings.
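For contrast with SNCD, here is plain coordinate descent for the elastic net with squared loss on made-up data; SNCD pairs each coordinate update with a dual variable and a semismooth Newton step so that nonsmooth losses (Huber, quantile) can be handled in the same cyclic fashion. A sketch only, not the paper's algorithm:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def enet_cd(X, y, lam, alpha=0.5, n_iter=200):
    """Cyclic coordinate descent for
    (1/2n)||y - X b||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||^2)."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                        # running residual y - X @ beta
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual correlation for coordinate j.
            rho = X[:, j] @ r / n + col_sq[j] * beta[j]
            new = soft_threshold(rho, lam * alpha) / (col_sq[j] + lam * (1 - alpha))
            r += X[:, j] * (beta[j] - new)
            beta[j] = new
    return beta

rng = np.random.default_rng(6)
X = rng.standard_normal((500, 8))
beta_true = np.array([2.0, 0, 0, -1.5, 0, 0, 0, 0])
y = X @ beta_true + 0.1 * rng.standard_normal(500)
beta_hat = enet_cd(X, y, lam=0.1)
```

Each one-dimensional subproblem has the closed-form soft-threshold solution shown; replacing the squared loss with Huber or pinball loss removes that closed form, which is the gap the semismooth Newton coordinate step fills.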
no code implementations • 4 Oct 2013 • Jian Huang, Yuling Jiao, Bangti Jin, Jin Liu, Xiliang Lu, Can Yang
In this paper, we consider the problem of recovering a sparse signal based on penalized least squares formulations.
no code implementations • 10 Sep 2012 • Patrick Breheny, Jian Huang
Penalized regression is an attractive framework for variable selection problems.
no code implementations • 31 Mar 2009 • Huiliang Xie, Jian Huang
We consider the problem of simultaneous variable selection and estimation in partially linear models with a divergent number of covariates in the linear part, under the assumption that the vector of regression coefficients is sparse.
Statistics Theory 62J05, 62G08 (Primary) 62E20 (Secondary)