no code implementations • 11 Oct 2023 • Liyang Zhu, Meng Ding, Vaneet Aggarwal, Jinhui Xu, Di Wang
To address these issues, we first consider the problem in the $\epsilon$ non-interactive LDP model and provide a lower bound of $\Omega(\frac{\sqrt{dk\log d}}{\sqrt{n}\epsilon})$ on the $\ell_2$-norm estimation error for sub-Gaussian data, where $n$ is the sample size and $d$ is the dimension of the space.
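As a rough illustration of how this minimax lower bound scales, one can evaluate $\sqrt{dk\log d}/(\sqrt{n}\,\epsilon)$ numerically (constants are omitted, as in the $\Omega(\cdot)$ statement, and the role of $k$ here follows the paper's parameterization; this is a sketch, not the paper's code):

```python
import math

def ldp_lower_bound(n, d, k, eps):
    """Rate sqrt(d * k * log d) / (sqrt(n) * eps) from the Omega(...) bound
    (absolute constants omitted, as in the statement)."""
    return math.sqrt(d * k * math.log(d)) / (math.sqrt(n) * eps)

# The error floor shrinks only as 1/sqrt(n) and grows with the dimension d.
r1 = ldp_lower_bound(n=10_000, d=100, k=5, eps=1.0)
r2 = ldp_lower_bound(n=40_000, d=100, k=5, eps=1.0)  # 4x samples -> half the bound
print(r1, r2)
```

Quadrupling the sample size only halves the unavoidable estimation error, and tightening the privacy budget $\epsilon$ inflates it proportionally.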
1 code implementation • 23 May 2023 • Yufan Zhou, Ruiyi Zhang, Tong Sun, Jinhui Xu
However, generating images of a novel concept provided by a user's input image is still a challenging task.
1 code implementation • CVPR 2023 • Yufan Zhou, Bingchen Liu, Yizhe Zhu, Xiao Yang, Changyou Chen, Jinhui Xu
Unlike the baseline diffusion model used in DALL-E 2, our method seamlessly encodes prior knowledge of the pre-trained CLIP model in its diffusion process by designing a new initialization distribution and a new transition step of the diffusion process.
Ranked #3 on Text-to-Image Generation on Multi-Modal-CelebA-HQ
no code implementations • 25 Oct 2022 • Yufan Zhou, Chunyuan Li, Changyou Chen, Jianfeng Gao, Jinhui Xu
The low requirements of the proposed method yield high flexibility and usability: it can benefit a wide range of settings, including few-shot, semi-supervised, and fully-supervised learning, and it can be applied to different models, including generative adversarial networks (GANs) and diffusion models.
no code implementations • 3 Oct 2022 • Meng Ding, Mingxi Lei, Yunwen Lei, Di Wang, Jinhui Xu
In this paper, we conduct a thorough analysis on the generalization of first-order (gradient-based) methods for the bilevel optimization problem.
no code implementations • 17 Sep 2022 • Jinyan Su, Jinhui Xu, Di Wang
In this paper, we study the problem of PAC learning halfspaces in the non-interactive local differential privacy model (NLDP).
no code implementations • 28 Jul 2022 • Chunwei Ma, Zhanghexuan Ji, Ziyun Huang, Yan Shen, Mingchen Gao, Jinhui Xu
Exemplar-free Class-incremental Learning (CIL) is a challenging problem because rehearsing data from previous phases is strictly prohibited, causing catastrophic forgetting of Deep Neural Networks (DNNs).
1 code implementation • 5 Feb 2022 • Chunwei Ma, Ziyun Huang, Mingchen Gao, Jinhui Xu
One observation is that the widely embraced ProtoNet model is essentially a Voronoi Diagram (VD) in the feature space.
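The observation is concrete: ProtoNet classifies a query by its nearest class prototype, so the decision regions are exactly the Voronoi cells of the prototypes. A minimal sketch of that decision rule (toy 2-D features and hypothetical prototypes, for illustration only):

```python
def nearest_prototype(query, prototypes):
    """ProtoNet's decision rule: the predicted class is the index of the
    closest prototype, i.e. the Voronoi cell containing the query."""
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(range(len(prototypes)), key=lambda i: sq_dist(query, prototypes[i]))

protos = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]  # one prototype per class
print(nearest_prototype((1.0, 1.0), protos))   # lands in the cell of prototype 0
```

Viewing the classifier this way lets the geometric machinery of Voronoi diagrams (cell boundaries, perturbations of the sites) be applied to few-shot classification.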
no code implementations • 10 Jan 2022 • Di Wang, Jinhui Xu
Firstly, we study the case where the $\ell_2$ norm of the data has a bounded second-order moment.
no code implementations • CVPR 2022 • Yufan Zhou, Ruiyi Zhang, Changyou Chen, Chunyuan Li, Chris Tensmeyer, Tong Yu, Jiuxiang Gu, Jinhui Xu, Tong Sun
One of the major challenges in training text-to-image generation models is the need for a large number of high-quality text-image pairs.
no code implementations • 7 Dec 2021 • Yufan Zhou, Chunyuan Li, Changyou Chen, Jinhui Xu
With rapidly growing model complexity and data volume, training deep generative models (DGMs) for better performance has become an increasingly important challenge.
2 code implementations • 27 Nov 2021 • Yufan Zhou, Ruiyi Zhang, Changyou Chen, Chunyuan Li, Chris Tensmeyer, Tong Yu, Jiuxiang Gu, Jinhui Xu, Tong Sun
One of the major challenges in training text-to-image generation models is the need for a large number of high-quality image-text pairs.
Ranked #2 on Text-to-Image Generation on Multi-Modal-CelebA-HQ
no code implementations • ICLR 2022 • Chunwei Ma, Ziyun Huang, Mingchen Gao, Jinhui Xu
One observation is that the widely embraced ProtoNet model is essentially a Dirichlet Tessellation (Voronoi Diagram) in the feature space.
1 code implementation • 25 Jun 2021 • Chunwei Ma, Ziyun Huang, Jiayi Xian, Mingchen Gao, Jinhui Xu
Deep Neural Networks (DNNs), despite their tremendous success in recent years, can still cast doubt on their predictions due to the intrinsic uncertainty associated with their learning process.
no code implementations • 10 May 2021 • Yufan Zhou, Changyou Chen, Jinhui Xu
Learning high-dimensional distributions is an important yet challenging problem in machine learning with applications in various domains.
no code implementations • 7 Feb 2021 • Yufan Zhou, Zhenyi Wang, Jiayi Xian, Changyou Chen, Jinhui Xu
We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS, and 2) solving the adaptation analytically based on NTK theory.
no code implementations • ICLR 2021 • Yufan Zhou, Zhenyi Wang, Jiayi Xian, Changyou Chen, Jinhui Xu
Within this paradigm, we introduce two meta learning algorithms in RKHS, which no longer need an explicit inner-loop adaptation as in the MAML framework.
no code implementations • 11 Nov 2020 • Di Wang, Marco Gaboardi, Adam Smith, Jinhui Xu
In our second attempt, we show that for any $1$-Lipschitz generalized linear convex loss function, there is an $(\epsilon, \delta)$-LDP algorithm whose sample complexity for achieving error $\alpha$ is only linear in the dimensionality $p$.
no code implementations • 22 Oct 2020 • Di Wang, Jiahao Ding, Lijie Hu, Zejun Xie, Miao Pan, Jinhui Xu
To address this issue, we propose in this paper the first DP version of (Gradient) EM algorithm with statistical guarantees.
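The generic recipe for privatizing a gradient-based iteration is per-step clipping plus calibrated Gaussian noise; a minimal sketch of one such step (the paper's exact noise calibration and Gradient-EM update may differ — the parameter names and defaults below are illustrative assumptions):

```python
import random

def dp_gradient_step(theta, grad, clip_norm=1.0, sigma=0.5, lr=0.1, rng=random):
    """One noisy gradient step: clip the gradient to bound its sensitivity,
    then add Gaussian noise scaled to the clipping norm."""
    norm = sum(g * g for g in grad) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    noisy = [g + rng.gauss(0.0, sigma * clip_norm) for g in clipped]
    return [t - lr * g for t, g in zip(theta, noisy)]
```

In a Gradient-EM loop, a step like this would replace the exact M-step gradient update; the statistical guarantees hinge on how `sigma` is calibrated to the $(\epsilon, \delta)$ budget across iterations.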
no code implementations • ICML 2020 • Di Wang, Hanshen Xiao, Srini Devadas, Jinhui Xu
For this case, we propose a method based on the sample-and-aggregate framework, which has an excess population risk of $\tilde{O}(\frac{d^3}{n\epsilon^4})$ (after omitting other factors), where $n$ is the sample size and $d$ is the dimensionality of the data.
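The sample-and-aggregate framework itself is simple to sketch: split the data into disjoint blocks, solve the (non-private) problem on each block, and privately aggregate the block solutions. A toy version for a 1-D mean (the aggregation rule and noise scale here are illustrative assumptions, not the paper's construction):

```python
import random

def sample_and_aggregate(data, num_blocks, sigma, rng=random):
    """Partition the data, compute a per-block estimate, aggregate with noise."""
    blocks = [data[i::num_blocks] for i in range(num_blocks)]
    estimates = [sum(b) / len(b) for b in blocks]      # non-private sub-estimates
    agg = sum(estimates) / num_blocks                  # aggregate (mean of means)
    return agg + rng.gauss(0.0, sigma)                 # privacy noise on the output

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(sample_and_aggregate(data, num_blocks=3, sigma=0.0))
```

Because each individual influences only one block, the aggregate has low sensitivity, so a single noise injection at the end suffices for privacy.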
no code implementations • 19 Oct 2020 • Di Wang, Xiangyu Guo, Shi Li, Jinhui Xu
In this paper, we study the problem of estimating latent variable models with arbitrarily corrupted samples in high-dimensional space ({\em i.e.,} $d\gg n$) where the underlying parameter is assumed to be sparse.
no code implementations • 19 Oct 2020 • Di Wang, Xiangyu Guo, Chaowen Guan, Shi Li, Jinhui Xu
To the best of our knowledge, this is the first work that studies and provides theoretical guarantees for the stochastic linear combination of non-linear regressions model.
no code implementations • NeurIPS 2020 • Yufan Zhou, Changyou Chen, Jinhui Xu
Manifold learning is a fundamental problem in machine learning with numerous applications.
no code implementations • 16 May 2020 • Yufan Zhou, Jiayi Xian, Changyou Chen, Jinhui Xu
We then propose feature aggregation as the composition of the original neighbor-based kernel and a learnable kernel to encode feature similarities in a feature space.
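A minimal sketch of composing a neighbor-based kernel with a learnable feature-similarity kernel (the function names and the RBF choice below are assumptions for illustration, not the paper's exact parameterization):

```python
import math

def neighbor_kernel(i, j, adjacency):
    """1 if node j is a graph neighbor of i (or i itself), else 0."""
    return 1.0 if (i == j or j in adjacency[i]) else 0.0

def feature_kernel(x, y, gamma=1.0):
    """RBF similarity between feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def composed_kernel(i, j, adjacency, feats, gamma=1.0):
    """Aggregation weight: structural neighborhood gated by feature similarity."""
    return neighbor_kernel(i, j, adjacency) * feature_kernel(feats[i], feats[j], gamma)
```

Feature aggregation then becomes a weighted sum over neighbors, where the weight reflects both graph structure and closeness in feature space.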
no code implementations • 15 May 2020 • Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu
Based on our framework, we assess the Gaussian and Exponential mechanisms by comparing the magnitude of additive noise required by these mechanisms against the lower bounds (criteria).
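For reference, the classical Gaussian mechanism calibrates its noise as $\sigma = \sqrt{2\ln(1.25/\delta)}\,\Delta_2/\epsilon$ for $(\epsilon,\delta)$-DP (valid for $\epsilon < 1$); this is the kind of additive-noise magnitude being compared against the lower bounds. A quick sketch:

```python
import math

def gaussian_sigma(l2_sensitivity, eps, delta):
    """Classical (eps, delta)-DP Gaussian-mechanism calibration (for eps < 1):
    sigma = sqrt(2 * ln(1.25 / delta)) * Delta_2 / eps."""
    return math.sqrt(2 * math.log(1.25 / delta)) * l2_sensitivity / eps

# A tighter privacy budget (smaller eps) forces proportionally more noise.
print(gaussian_sigma(1.0, 0.5, 1e-5))
print(gaussian_sigma(1.0, 0.1, 1e-5))
```

Comparing such a $\sigma$ to a lower bound on the noise any mechanism must add is exactly the kind of optimality check the framework performs.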
no code implementations • 2 Dec 2019 • Yufan Zhou, Changyou Chen, Jinhui Xu
Learning with kernels is an important concept in machine learning.
no code implementations • 1 Oct 2019 • Di Wang, Lijie Hu, Huanyu Zhang, Marco Gaboardi, Jinhui Xu
In the second part of the paper, we extend our idea to the problem of estimating non-linear regressions and show similar results as in GLMs for both multivariate Gaussian and sub-Gaussian cases.
no code implementations • 25 Sep 2019 • Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu
We answer the above two questions by first demonstrating that the Gaussian mechanism and the Exponential mechanism are the (near) optimal options to certify $\ell_2$- and $\ell_\infty$-normed robustness, respectively.
no code implementations • 18 Jan 2019 • Di Wang, Jinhui Xu
In this paper, we study the problem of estimating the covariance matrix under differential privacy, where the underlying covariance matrix is assumed to be sparse and of high dimensions.
no code implementations • 17 Dec 2018 • Di Wang, Adam Smith, Jinhui Xu
For the case of \emph{generalized linear losses} (such as hinge and logistic losses), we give an LDP algorithm whose sample complexity is only linear in the dimensionality $p$ and quasipolynomial in other terms (the privacy parameters $\epsilon$ and $\delta$, and the desired excess risk $\alpha$).
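Both losses named here are 1-Lipschitz in the margin, which is the property such sample-complexity bounds exploit; a quick sketch of the two losses:

```python
import math

def hinge_loss(margin):
    """Hinge loss max(0, 1 - y * <w, x>); 1-Lipschitz in the margin."""
    return max(0.0, 1.0 - margin)

def logistic_loss(margin):
    """Logistic loss log(1 + exp(-margin)); also 1-Lipschitz in the margin."""
    return math.log(1.0 + math.exp(-margin))

print(hinge_loss(2.0))     # margin >= 1: correctly classified, zero loss
print(logistic_loss(0.0))  # log 2 at the decision boundary
```

Because the loss depends on the data only through the scalar margin $\langle w, x\rangle y$, these are generalized linear losses in the sense used above.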
no code implementations • NeurIPS 2018 • Di Wang, Marco Gaboardi, Jinhui Xu
In this paper, we revisit the Empirical Risk Minimization problem in the non-interactive local model of differential privacy.
no code implementations • 2 Oct 2018 • Hu Ding, Jinhui Xu
To overcome the difficulty caused by the loss of locality, we present in this paper a unified framework, called {\em Peeling-and-Enclosing (PnE)}, to iteratively solve two variants of the constrained clustering problems, {\em constrained $k$-means clustering} ($k$-CMeans) and {\em constrained $k$-median clustering} ($k$-CMedian).
no code implementations • NeurIPS 2017 • Di Wang, Minwei Ye, Jinhui Xu
In this paper we study the differentially private Empirical Risk Minimization (ERM) problem in different settings.
no code implementations • NeurIPS 2018 • Di Wang, Marco Gaboardi, Jinhui Xu
In the case of constant or low dimensionality ($p\ll n$), we first show that if the ERM loss function is $(\infty, T)$-smooth, then we can avoid a dependence of the sample complexity (to achieve error $\alpha$) on the exponential of the dimensionality $p$ with base $1/\alpha$ (i.e., $\alpha^{-p}$), which answers a question in [smith 2017 interaction].
1 code implementation • 9 Feb 2018 • Di Wang, Jinhui Xu
In this paper, we revisit the large-scale constrained linear regression problem and propose faster methods based on some recent developments in sketching and optimization.
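The core idea of sketching is to compress the regression to a much smaller random projection and solve the least-squares problem there; a toy one-parameter Gaussian-sketch example (sketch size and data are illustrative assumptions, and the paper's methods combine sketching with optimization beyond this sketch):

```python
import random

def sketched_slope(xs, ys, sketch_rows, rng):
    """Estimate a in y ~ a * x from the sketched system (S x, S y),
    where S has i.i.d. Gaussian entries and far fewer rows than len(xs)."""
    sx = [0.0] * sketch_rows
    sy = [0.0] * sketch_rows
    for i in range(sketch_rows):
        row = [rng.gauss(0.0, 1.0) for _ in xs]
        sx[i] = sum(r * x for r, x in zip(row, xs))
        sy[i] = sum(r * y for r, y in zip(row, ys))
    # Closed-form least squares on the sketched one-parameter problem.
    return sum(a * b for a, b in zip(sx, sy)) / sum(a * a for a in sx)

xs = [float(x) for x in range(1, 1001)]
ys = [2.0 * x for x in xs]   # exactly linear, so the sketch recovers a = 2
print(sketched_slope(xs, ys, 20, random.Random(0)))
```

The sketched system has 20 rows instead of 1000, yet (on noiseless data) it preserves the exact least-squares solution; on noisy data, the solution is preserved approximately with high probability.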
no code implementations • 24 Mar 2016 • Chao Ma, Tianchenghou, Bin Lan, Jinhui Xu, Zhenhua Zhang
Experimental data shows that DEFE is able to train an ensemble of discriminative feature learners that boosts the performance of the final prediction.
no code implementations • NeurIPS 2013 • Hu Ding, Ronald Berezney, Jinhui Xu
In this paper, we study the following new variant of prototype learning, called {\em $k$-prototype learning problem for 3D rigid structures}: Given a set of 3D rigid structures, find a set of $k$ rigid structures so that each of them is a prototype for a cluster of the given rigid structures and the total cost (or dissimilarity) is minimized.
no code implementations • CVPR 2013 • Hu Ding, Branislav Stojkovic, Ronald Berezney, Jinhui Xu
In this paper, we introduce a novel algorithmic tool for investigating association patterns of chromosome territories in a population of cells.