Search Results for author: Qi Long

Found 17 papers, 6 papers with code

MISNN: Multiple Imputation via Semi-parametric Neural Networks

no code implementations 2 May 2023 Zhiqi Bu, Zongyu Dai, Yiliang Zhang, Qi Long

Multiple imputation (MI) has been widely applied to missing value problems in biomedical, social and econometric research, in order to avoid improper inference in the downstream data analysis.

feature selection, Imputation, +1

Multiple Imputation with Neural Network Gaussian Process for High-dimensional Incomplete Data

1 code implementation 23 Nov 2022 Zongyu Dai, Zhiqi Bu, Qi Long

Single imputation methods such as matrix completion methods do not adequately account for imputation uncertainty and hence would yield improper statistical inference.
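
The contrast with multiple imputation can be made concrete: MI produces m completed datasets, analyzes each, and pools the results with Rubin's rules, so the between-imputation spread enters the reported variance. A minimal sketch of the pooling step (plain NumPy; the per-imputation estimates and variances are hypothetical inputs, not the paper's NNGP machinery):

```python
import numpy as np

def rubins_rules(est, var):
    """Pool m point estimates and their variances from m imputed datasets.

    est, var: length-m sequences of per-imputation estimates and variances.
    Returns the pooled estimate and total variance; the between-imputation
    term is exactly what single imputation ignores.
    """
    est, var = np.asarray(est), np.asarray(var)
    m = len(est)
    q_bar = est.mean()              # pooled point estimate
    w_bar = var.mean()              # within-imputation variance
    b = est.var(ddof=1)             # between-imputation variance
    t = w_bar + (1 + 1 / m) * b     # total variance (Rubin's rules)
    return q_bar, t

# Example: estimates from m = 5 imputed datasets
print(rubins_rules([1.02, 0.97, 1.10, 0.95, 1.04],
                   [0.04, 0.05, 0.04, 0.06, 0.05]))
```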

Imputation, Matrix Completion

CEDAR: Communication Efficient Distributed Analysis for Regressions

no code implementations 1 Jul 2022 Changgee Chang, Zhiqi Bu, Qi Long

We provide a theoretical investigation of the asymptotic properties of the proposed method for statistical inference as well as differential privacy, and we evaluate its performance in simulations and real data analyses in comparison with several recently developed methods.

Covariate-Balancing-Aware Interpretable Deep Learning models for Treatment Effect Estimation

no code implementations 7 Mar 2022 Kan Chen, Qishuo Yin, Qi Long

Motivated by the theoretical analysis, we propose a novel objective function for estimating the ATE that uses the energy distance balancing score and hence does not require correct specification of the propensity score model.
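
The energy distance itself is standard and easy to state: for covariate samples X (treated) and Y (control), D^2 = 2 E||X − Y|| − E||X − X'|| − E||Y − Y'||, which vanishes exactly when the two distributions coincide. A minimal sketch of this balance score (plain NumPy; this is the generic statistic, not the paper's full objective):

```python
import numpy as np

def energy_distance(x, y):
    """Energy distance between covariate samples x of shape (n, d)
    and y of shape (m, d), averaging over all pairs of points.

    Zero iff the two distributions match, which is why it can serve
    as a covariate-balance score for treatment effect estimation.
    """
    dxy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1).mean()
    dxx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1).mean()
    dyy = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1).mean()
    return 2 * dxy - dxx - dyy

rng = np.random.default_rng(0)
treated = rng.normal(0.0, 1.0, (50, 3))
control = rng.normal(0.5, 1.0, (60, 3))
print(energy_distance(treated, control))  # > 0: covariates are imbalanced
```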

Additive models, Causal Inference

Multiple Imputation via Generative Adversarial Network for High-dimensional Blockwise Missing Value Problems

no code implementations 21 Dec 2021 Zongyu Dai, Zhiqi Bu, Qi Long

Missing data are present in most real-world problems and need careful handling to preserve prediction accuracy and statistical consistency in the downstream analysis.

Generative Adversarial Network, Imputation

Assessing Fairness in the Presence of Missing Data

no code implementations NeurIPS 2021 Yiliang Zhang, Qi Long

When the goal is to develop a fair algorithm for the complete data domain, where there are no missing values, an algorithm that is fair on the complete cases alone may still show disproportionate bias towards some marginalized groups once deployed in the complete data domain.

Fairness

Fairness in Missing Data Imputation

no code implementations 22 Oct 2021 Yiliang Zhang, Qi Long

Missing data are ubiquitous in the era of big data and, if inadequately handled, are known to lead to biased findings and to have a deleterious impact on data-driven decision making.

Fairness, Imputation

Differentially Private Bayesian Neural Networks on Accuracy, Privacy and Reliability

no code implementations 18 Jul 2021 Qiyiwen Zhang, Zhiqi Bu, Kan Chen, Qi Long

Interestingly, we show a new equivalence between DP-SGD and DP-SGLD, implying that some non-Bayesian DP training naturally allows for uncertainty quantification.
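
For reference, the non-Bayesian side of that equivalence is the familiar DP-SGD update: clip each per-example gradient, aggregate, and add calibrated Gaussian noise. A minimal sketch of one step (plain NumPy; the per-example gradient matrix is a hypothetical input, and the noise calibration shown is the textbook one, not the paper's DP-SGLD analysis):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    """One DP-SGD update: clip each per-example gradient to norm `clip`,
    sum, add Gaussian noise of scale sigma * clip, then average."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = per_example_grads.shape
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip)   # clip to C
    noisy_sum = clipped.sum(axis=0) + sigma * clip * rng.normal(size=d)
    return params - lr * noisy_sum / n

rng = np.random.default_rng(0)
print(dp_sgd_step(np.zeros(4), rng.normal(size=(32, 4)), rng=rng))
```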

Uncertainty Quantification

On the Convergence and Calibration of Deep Learning with Differential Privacy

1 code implementation 15 Jun 2021 Zhiqi Bu, Hua Wang, Zongyu Dai, Qi Long

Differentially private (DP) training preserves data privacy, usually at the cost of slower convergence (and thus lower accuracy) and more severe mis-calibration than its non-private counterpart.

A Theorem of the Alternative for Personalized Federated Learning

no code implementations 2 Mar 2021 Shuxiao Chen, Qinqing Zheng, Qi Long, Weijie J. Su

A widely recognized difficulty in federated learning arises from the statistical heterogeneity among clients: local datasets often come from different but not entirely unrelated distributions, and personalization is, therefore, necessary to achieve optimal results from each individual's perspective.

Personalized Federated Learning

Federated $f$-Differential Privacy

1 code implementation 22 Feb 2021 Qinqing Zheng, Shuxiao Chen, Qi Long, Weijie J. Su

Federated learning (FL) is a training paradigm where clients collaboratively learn models by repeatedly sharing information, without unduly compromising the privacy of their local sensitive data.
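
In its simplest form this sharing pattern is federated averaging: each client trains locally and only parameter updates travel to the server. A minimal sketch of one round (the local_update callable is a hypothetical stand-in for a client's training step; the paper's federated f-DP analysis would additionally inject privatizing noise, which is omitted here):

```python
import numpy as np

def fedavg_round(global_params, client_datasets, local_update):
    """One FedAvg round: clients train locally, the server averages the
    returned parameter vectors, weighted by client dataset size.

    Only parameters are exchanged; raw client data never leaves the client,
    which is the privacy-motivated design FL starts from.
    """
    updates = [local_update(global_params.copy(), data)
               for data in client_datasets]
    weights = np.array([len(d) for d in client_datasets], dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Toy usage: "training" just nudges parameters toward the client data mean.
rng = np.random.default_rng(0)
clients = [rng.normal(i, 1.0, size=(20, 3)) for i in range(3)]
step = lambda params, data: params + 0.1 * (data.mean(axis=0) - params)
print(fedavg_round(np.zeros(3), clients, step))
```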

Federated Learning

Exploring Deep Neural Networks via Layer-Peeled Model: Minority Collapse in Imbalanced Training

1 code implementation 29 Jan 2021 Cong Fang, Hangfeng He, Qi Long, Weijie J. Su

More importantly, when moving to the imbalanced case, our analysis of the Layer-Peeled Model reveals a hitherto unknown phenomenon that we term Minority Collapse, which fundamentally limits the performance of deep learning models on the minority classes.

Fairness guarantee in analysis of incomplete data

no code implementations 1 Jan 2021 Yiliang Zhang, Qi Long

While there is a growing body of literature on fairness in the analysis of fully observed data, there has been little work investigating fairness in the analysis of incomplete data when the goal is to develop a fair algorithm for the complete data domain, where there are no missing values.

Fairness

Grouping effects of sparse CCA models in variable selection

no code implementations 7 Aug 2020 Kefei Liu, Qi Long, Li Shen

Sparse canonical correlation analysis (SCCA) is a bi-multivariate association model that finds sparse linear combinations of two sets of variables that are maximally correlated with each other.
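
Concretely, one widely used formulation (the penalized matrix decomposition form of Witten et al., on which many SCCA variants build; stated here as background rather than as the exact model analyzed in this paper) is, for centered data matrices $X$ and $Y$:

```latex
\max_{u,\, v} \; u^\top X^\top Y v
\quad \text{subject to} \quad
\|u\|_2^2 \le 1, \;\; \|v\|_2^2 \le 1, \;\;
\|u\|_1 \le c_1, \;\; \|v\|_1 \le c_2,
```

where the $\ell_1$ constraints drive entries of the canonical weight vectors $u$ and $v$ to zero; that sparsity is what makes variable selection, and hence the grouping behavior studied here, possible.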

Variable Selection

Sharp Composition Bounds for Gaussian Differential Privacy via Edgeworth Expansion

1 code implementation ICML 2020 Qinqing Zheng, Jinshuo Dong, Qi Long, Weijie J. Su

To address this question, we introduce a family of analytical and sharp privacy bounds under composition using the Edgeworth expansion in the framework of the recently proposed f-differential privacy.
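
For context, a mu-GDP guarantee converts to a family of (epsilon, delta) guarantees through a known duality, and composing Gaussian mechanisms with parameters mu_1, ..., mu_n yields sqrt(mu_1^2 + ... + mu_n^2)-GDP; the Edgeworth expansion is about computing such composed trade-offs sharply for general f-DP mechanisms, which this snippet does not attempt. A minimal sketch of the conversion alone:

```python
from math import exp
from statistics import NormalDist

def gdp_to_dp_delta(mu, eps):
    """delta(eps) for a mu-GDP mechanism, via the duality
    delta = Phi(-eps/mu + mu/2) - e^eps * Phi(-eps/mu - mu/2)
    from the f-differential privacy framework."""
    phi = NormalDist().cdf
    return phi(-eps / mu + mu / 2) - exp(eps) * phi(-eps / mu - mu / 2)

print(gdp_to_dp_delta(mu=1.0, eps=1.0))  # delta achieved at eps = 1 under 1-GDP
```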

Deep Learning with Gaussian Differential Privacy

3 code implementations 26 Nov 2019 Zhiqi Bu, Jinshuo Dong, Qi Long, Weijie J. Su

Leveraging the appealing properties of $f$-differential privacy in handling composition and subsampling, this paper derives analytically tractable expressions for the privacy guarantees of both stochastic gradient descent and Adam used in training deep neural networks, without the need to develop sophisticated techniques as [3] did.
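
A minimal sketch of the kind of closed-form expression this enables, assuming the CLT-based reading that DP-SGD with subsampling probability p, T iterations, and noise multiplier sigma is approximately mu-GDP with mu = p * sqrt(T * (e^{1/sigma^2} - 1)) (a hedged paraphrase of the privacy central limit theorem, not a verbatim quote of the paper):

```python
from math import exp, sqrt

def dp_sgd_mu(p, T, sigma):
    """Approximate GDP parameter of DP-SGD under the privacy CLT.

    p: subsampling probability per step, T: number of steps,
    sigma: noise multiplier. Assumed form:
    mu = p * sqrt(T * (exp(1 / sigma**2) - 1)).
    """
    return p * sqrt(T * (exp(1.0 / sigma**2) - 1.0))

# e.g. batches of 256 from 50,000 examples, 10,000 steps, sigma = 1.3
print(dp_sgd_mu(p=256 / 50_000, T=10_000, sigma=1.3))
```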

General Classification, Image Classification, +2
