Search Results for author: Jinhui Xu

Found 38 papers, 6 papers with code

Improved Analysis of Sparse Linear Regression in Local Differential Privacy Model

no code implementations11 Oct 2023 Liyang Zhu, Meng Ding, Vaneet Aggarwal, Jinhui Xu, Di Wang

To address these issues, we first consider the problem in the $\epsilon$ non-interactive LDP model and provide a lower bound of $\Omega(\frac{\sqrt{dk\log d}}{\sqrt{n}\epsilon})$ on the $\ell_2$-norm estimation error for sub-Gaussian data, where $n$ is the sample size and $d$ is the dimension of the space.

regression
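
For readability, the stated bound can be written in display form (here $k$ is presumably the sparsity level of the underlying regression vector, an assumption since the snippet does not define it):

```latex
% Schematic restatement of the lower bound from the abstract above.
% n: sample size, d: dimension, k: (assumed) sparsity level.
\inf_{\text{$\epsilon$ non-interactive LDP protocols}}
\ \sup_{\|\beta^*\|_0 \le k}
\ \mathbb{E}\,\bigl\|\hat{\beta} - \beta^*\bigr\|_2
\;\ge\;
\Omega\!\left(\frac{\sqrt{d k \log d}}{\sqrt{n}\,\epsilon}\right)
```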

Enhancing Detail Preservation for Customized Text-to-Image Generation: A Regularization-Free Approach

1 code implementation23 May 2023 Yufan Zhou, Ruiyi Zhang, Tong Sun, Jinhui Xu

However, generating images of a novel concept provided by the user's input image is still a challenging task.

Text-to-Image Generation

Shifted Diffusion for Text-to-image Generation

1 code implementation CVPR 2023 Yufan Zhou, Bingchen Liu, Yizhe Zhu, Xiao Yang, Changyou Chen, Jinhui Xu

Unlike the baseline diffusion model used in DALL-E 2, our method seamlessly encodes prior knowledge of the pre-trained CLIP model in its diffusion process by designing a new initialization distribution and a new transition step of the diffusion.

Zero-Shot Text-to-Image Generation
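
A minimal numerical sketch of the "shifted" forward process idea described above: instead of diffusing CLIP image embeddings toward a standard Gaussian, each noising step is biased toward a non-zero mean estimated from CLIP embeddings. The shift schedule `s_t`, the noise schedule, and all variable names are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def shifted_forward_step(x0, t, alpha_bar, clip_mean):
    """One forward (noising) step of a shifted diffusion process.

    Standard DDPM forward marginal:  x_t ~ N(sqrt(a_t) * x0, (1 - a_t) I)
    Shifted variant (sketch):        x_t ~ N(sqrt(a_t) * x0 + s_t, (1 - a_t) I)
    where s_t moves the terminal distribution toward a CLIP-embedding mean
    instead of the origin.
    """
    a_t = alpha_bar[t]
    s_t = (1.0 - np.sqrt(a_t)) * clip_mean          # assumed shift schedule
    noise = np.random.randn(*x0.shape)
    return np.sqrt(a_t) * x0 + s_t + np.sqrt(1.0 - a_t) * noise

# Toy usage: 512-d "CLIP" embeddings with a non-zero empirical mean.
np.random.seed(0)
clip_mean = np.random.normal(0.1, 0.05, size=512)
x0 = clip_mean + 0.02 * np.random.randn(512)
alpha_bar = np.linspace(0.999, 0.01, 1000)          # cumulative noise schedule
x_T = shifted_forward_step(x0, t=999, alpha_bar=alpha_bar, clip_mean=clip_mean)
```

At t = T the sample is centered near `clip_mean` rather than the origin, which is the sense in which the CLIP prior is encoded into the diffusion process.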

Lafite2: Few-shot Text-to-Image Generation

no code implementations25 Oct 2022 Yufan Zhou, Chunyuan Li, Changyou Chen, Jianfeng Gao, Jinhui Xu

The low requirements of the proposed method yield high flexibility and usability: it can benefit a wide range of settings, including few-shot, semi-supervised, and fully-supervised learning, and it can be applied to different models, including generative adversarial networks (GANs) and diffusion models.

Retrieval, Text-to-Image Generation

On Stability and Generalization of Bilevel Optimization Problem

no code implementations3 Oct 2022 Meng Ding, Mingxi Lei, Yunwen Lei, Di Wang, Jinhui Xu

In this paper, we conduct a thorough analysis on the generalization of first-order (gradient-based) methods for the bilevel optimization problem.

Bilevel Optimization, Meta-Learning

On PAC Learning Halfspaces in Non-interactive Local Privacy Model with Public Unlabeled Data

no code implementations17 Sep 2022 Jinyan Su, Jinhui Xu, Di Wang

In this paper, we study the problem of PAC learning halfspaces in the non-interactive local differential privacy model (NLDP).

PAC learning, Self-Supervised Learning

Progressive Voronoi Diagram Subdivision: Towards A Holistic Geometric Framework for Exemplar-free Class-Incremental Learning

no code implementations28 Jul 2022 Chunwei Ma, Zhanghexuan Ji, Ziyun Huang, Yan Shen, Mingchen Gao, Jinhui Xu

Exemplar-free Class-incremental Learning (CIL) is a challenging problem because rehearsing data from previous phases is strictly prohibited, causing catastrophic forgetting of Deep Neural Networks (DNNs).

Class Incremental Learning, Incremental Learning +1

Few-shot Learning as Cluster-induced Voronoi Diagrams: A Geometric Approach

1 code implementation5 Feb 2022 Chunwei Ma, Ziyun Huang, Mingchen Gao, Jinhui Xu

One observation is that the widely embraced ProtoNet model is essentially a Voronoi Diagram (VD) in the feature space.

Few-Shot Learning
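
The geometric observation above, that ProtoNet's nearest-prototype rule partitions the feature space into Voronoi cells, can be reproduced in a few lines (a generic ProtoNet sketch, not the paper's cluster-induced extension):

```python
import numpy as np

def protonet_predict(query_feats, prototypes):
    """Assign each query feature to its nearest class prototype.

    The decision regions {x : ||x - p_c|| <= ||x - p_k|| for all k} are
    exactly the Voronoi cells generated by the prototypes.
    """
    # Pairwise squared Euclidean distances, shape (n_query, n_class).
    d2 = ((query_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

# Toy 5-way 5-shot episode: prototypes are the class means of support features.
np.random.seed(0)
support = np.random.randn(5, 5, 64)      # (n_class, n_shot, feat_dim)
prototypes = support.mean(axis=1)         # one prototype per class
queries = np.random.randn(20, 64)
print(protonet_predict(queries, prototypes))
```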

Differentially Private $\ell_1$-norm Linear Regression with Heavy-tailed Data

no code implementations10 Jan 2022 Di Wang, Jinhui Xu

First, we study the case where the $\ell_2$ norm of the data has a bounded second-order moment.

regression

Towards Language-Free Training for Text-to-Image Generation

no code implementations CVPR 2022 Yufan Zhou, Ruiyi Zhang, Changyou Chen, Chunyuan Li, Chris Tensmeyer, Tong Yu, Jiuxiang Gu, Jinhui Xu, Tong Sun

One of the major challenges in training text-to-image generation models is the need for a large number of high-quality text-image pairs.

Zero-Shot Text-to-Image Generation

A Generic Approach for Enhancing GANs by Regularized Latent Optimization

no code implementations7 Dec 2021 Yufan Zhou, Chunyuan Li, Changyou Chen, Jinhui Xu

With rapidly growing model complexity and data volume, training deep generative models (DGMs) for better performance has become an increasingly important challenge.

Image Inpainting, text-guided-image-editing +1

Few-shot Learning via Dirichlet Tessellation Ensemble

no code implementations ICLR 2022 Chunwei Ma, Ziyun Huang, Mingchen Gao, Jinhui Xu

One observation is that the widely embraced ProtoNet model is essentially a Dirichlet Tessellation (Voronoi Diagram) in the feature space.

Few-Shot Learning

Improving Uncertainty Calibration of Deep Neural Networks via Truth Discovery and Geometric Optimization

1 code implementation25 Jun 2021 Chunwei Ma, Ziyun Huang, Jiayi Xian, Mingchen Gao, Jinhui Xu

Deep Neural Networks (DNNs), despite their tremendous success in recent years, could still cast doubts on their predictions due to the intrinsic uncertainty associated with their learning process.

Learning High-Dimensional Distributions with Latent Neural Fokker-Planck Kernels

no code implementations10 May 2021 Yufan Zhou, Changyou Chen, Jinhui Xu

Learning high-dimensional distributions is an important yet challenging problem in machine learning with applications in various domains.

Meta-Learning with Neural Tangent Kernels

no code implementations7 Feb 2021 Yufan Zhou, Zhenyi Wang, Jiayi Xian, Changyou Chen, Jinhui Xu

We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on the NTK theory.

Meta-Learning
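
A minimal sketch of "solving the adaptation analytically" with a kernel: closed-form kernel ridge regression on the support set replaces inner-loop gradient steps. A generic RBF kernel stands in for the NTK here, and a plain ridge penalty stands in for the paper's fast-adaptive regularizer.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Generic RBF kernel used as a stand-in for the NTK."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def analytic_adaptation(x_support, y_support, x_query, lam=1e-2):
    """Closed-form task adaptation via kernel ridge regression:
    alpha = (K_ss + lam * I)^{-1} y,  f(x_query) = K_qs @ alpha."""
    K_ss = rbf_kernel(x_support, x_support)
    K_qs = rbf_kernel(x_query, x_support)
    alpha = np.linalg.solve(K_ss + lam * np.eye(len(x_support)), y_support)
    return K_qs @ alpha

# Toy regression task: adapt on the support set, predict on query points.
np.random.seed(0)
x_s = np.random.uniform(-3, 3, size=(10, 1))
y_s = np.sin(x_s).ravel()
x_q = np.linspace(-3, 3, 5).reshape(-1, 1)
print(analytic_adaptation(x_s, y_s, x_q))
```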

Meta-Learning in Reproducing Kernel Hilbert Space

no code implementations ICLR 2021 Yufan Zhou, Zhenyi Wang, Jiayi Xian, Changyou Chen, Jinhui Xu

Within this paradigm, we introduce two meta learning algorithms in RKHS, which no longer need an explicit inner-loop adaptation as in the MAML framework.

Meta-Learning

Empirical Risk Minimization in the Non-interactive Local Model of Differential Privacy

no code implementations11 Nov 2020 Di Wang, Marco Gaboardi, Adam Smith, Jinhui Xu

In our second attempt, we show that for any $1$-Lipschitz generalized linear convex loss function, there is an $(\epsilon, \delta)$-LDP algorithm whose sample complexity for achieving error $\alpha$ is only linear in the dimensionality $p$.

Differentially Private (Gradient) Expectation Maximization Algorithm with Statistical Guarantees

no code implementations22 Oct 2020 Di Wang, Jiahao Ding, Lijie Hu, Zejun Xie, Miao Pan, Jinhui Xu

To address this issue, we propose in this paper the first DP version of (Gradient) EM algorithm with statistical guarantees.
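
A heavily simplified sketch of the noise-injection pattern behind a DP gradient-style update: per-sample gradients (e.g., of the EM Q-function after the E-step) are clipped, and the averaged gradient is perturbed with Gaussian noise before the parameter update. This illustrates only the generic Gaussian-mechanism recipe, not the paper's specific algorithm or its statistical guarantees.

```python
import numpy as np

def dp_gradient_step(per_sample_grads, theta, lr=0.1, clip=1.0, sigma=1.0):
    """One differentially private gradient update (Gaussian mechanism).

    per_sample_grads: (n, d) array of per-sample gradients at the current
    parameters theta.
    """
    n, d = per_sample_grads.shape
    # Clip each per-sample gradient to l2 norm <= clip (bounds sensitivity).
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    # Average and add Gaussian noise calibrated to the clipping bound.
    noisy_grad = clipped.mean(axis=0) + np.random.normal(0.0, sigma * clip / n, size=d)
    return theta - lr * noisy_grad

# Toy usage with random stand-in gradients.
np.random.seed(0)
grads = np.random.randn(256, 8)
theta = dp_gradient_step(grads, np.zeros(8))
```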

On Differentially Private Stochastic Convex Optimization with Heavy-tailed Data

no code implementations ICML 2020 Di Wang, Hanshen Xiao, Srini Devadas, Jinhui Xu

For this case, we propose a method based on the sample-and-aggregate framework, which has an excess population risk of $\tilde{O}(\frac{d^3}{n\epsilon^4})$ (after omitting other factors), where $n$ is the sample size and $d$ is the dimensionality of the data.
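
A minimal sketch of the sample-and-aggregate pattern mentioned above: split the data into disjoint blocks, solve the problem non-privately on each block, then release a noisy aggregate of the block solutions. The projection radius, the noise scale, and the least-squares objective below are illustrative; the paper's aggregation step and its heavy-tailed analysis are not reproduced.

```python
import numpy as np

def sample_and_aggregate(X, y, n_blocks=10, radius=1.0, sigma=1.0):
    """Sample-and-aggregate for least squares (illustrative sketch).

    Each block produces its own estimate; estimates are projected onto an
    l2 ball of the given radius (bounding the sensitivity of their mean),
    and the mean is released with Gaussian noise.
    """
    n, d = X.shape
    blocks = np.array_split(np.arange(n), n_blocks)
    estimates = []
    for idx in blocks:
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        norm = np.linalg.norm(beta)
        if norm > radius:                       # project onto the l2 ball
            beta = beta * (radius / norm)
        estimates.append(beta)
    # Changing one sample affects one block, so the mean of the projected
    # estimates has l2 sensitivity at most 2 * radius / n_blocks.
    noise = np.random.normal(0.0, sigma * 2 * radius / n_blocks, size=d)
    return np.mean(estimates, axis=0) + noise

np.random.seed(0)
X = np.random.randn(1000, 5)
y = X @ np.array([0.5, -0.2, 0.1, 0.0, 0.3]) + 0.1 * np.random.randn(1000)
print(sample_and_aggregate(X, y))
```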

Robust High Dimensional Expectation Maximization Algorithm via Trimmed Hard Thresholding

no code implementations19 Oct 2020 Di Wang, Xiangyu Guo, Shi Li, Jinhui Xu

In this paper, we study the problem of estimating latent variable models with arbitrarily corrupted samples in high-dimensional space ({\em i.e.}, $d\gg n$), where the underlying parameter is assumed to be sparse.

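A small sketch of the two robustness ingredients named in the title: a coordinate-wise trimmed mean of per-sample gradients (to resist arbitrarily corrupted samples) followed by hard thresholding (to enforce sparsity when $d\gg n$). The trimming fraction and sparsity level are illustrative, and the EM machinery around this step is omitted.

```python
import numpy as np

def trimmed_mean(per_sample_grads, trim_frac=0.1):
    """Coordinate-wise trimmed mean: drop the largest and smallest
    trim_frac fraction of values in each coordinate before averaging."""
    n = per_sample_grads.shape[0]
    k = int(np.floor(trim_frac * n))
    srt = np.sort(per_sample_grads, axis=0)
    return srt[k:n - k].mean(axis=0) if n - 2 * k > 0 else srt.mean(axis=0)

def hard_threshold(v, s):
    """Keep the s largest-magnitude coordinates of v, zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

# Toy usage: per-sample gradients in high dimension, 10% corrupted.
np.random.seed(0)
grads = 0.1 * np.random.randn(50, 200)
grads[:5] += 100.0                         # arbitrarily corrupted samples
robust_grad = trimmed_mean(grads, trim_frac=0.1)
sparse_update = hard_threshold(robust_grad, s=10)
print(np.count_nonzero(sparse_update))     # 10 surviving coordinates
```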

Estimating Stochastic Linear Combination of Non-linear Regressions Efficiently and Scalably

no code implementations19 Oct 2020 Di Wang, Xiangyu Guo, Chaowen Guan, Shi Li, Jinhui Xu

To the best of our knowledge, this is the first work that studies and provides theoretical guarantees for the stochastic linear combination of non-linear regressions model.

Graph Neural Networks with Composite Kernels

no code implementations16 May 2020 Yufan Zhou, Jiayi Xian, Changyou Chen, Jinhui Xu

We then propose feature aggregation as the composition of the original neighbor-based kernel and a learnable kernel to encode feature similarities in a feature space.

Graph Attention
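
One simple way to realize a neighbor-based kernel combined with a feature kernel is to weight neighbor aggregation by the elementwise product of the adjacency matrix and a feature-similarity kernel; the RBF kernel below stands in for the paper's learnable kernel, and the product is only one plausible reading of "composition".

```python
import numpy as np

def kernel_weighted_aggregation(X, A, gamma=1.0):
    """Aggregate node features with weights given by the elementwise product
    of the adjacency matrix (neighbor-based kernel) and an RBF feature kernel."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K_feat = np.exp(-gamma * d2)            # feature-similarity kernel
    W = A * K_feat                           # restrict to graph neighbors
    W = W / np.clip(W.sum(axis=1, keepdims=True), 1e-12, None)  # row-normalize
    return W @ X                             # aggregated node features

# Toy graph with 4 nodes and 8-d features.
np.random.seed(0)
X = np.random.randn(4, 8)
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
print(kernel_weighted_aggregation(X, A).shape)
```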

Towards Assessment of Randomized Smoothing Mechanisms for Certifying Adversarial Robustness

no code implementations15 May 2020 Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu

Based on our framework, we assess the Gaussian and Exponential mechanisms by comparing the magnitude of additive noise required by these mechanisms with the lower bounds (criteria).

Adversarial Robustness

Estimating Smooth GLM in Non-interactive Local Differential Privacy Model with Public Unlabeled Data

no code implementations1 Oct 2019 Di Wang, Lijie Hu, Huanyu Zhang, Marco Gaboardi, Jinhui Xu

In the second part of the paper, we extend our idea to the problem of estimating non-linear regressions and show similar results as in GLMs for both multivariate Gaussian and sub-Gaussian cases.

A Unified framework for randomized smoothing based certified defenses

no code implementations25 Sep 2019 Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu

We answer the above two questions by first demonstrating that the Gaussian mechanism and the Exponential mechanism are (near) optimal options for certifying $\ell_2$- and $\ell_\infty$-norm robustness.
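
For the $\ell_2$ case, the role of the Gaussian mechanism can be made concrete with the standard Gaussian randomized-smoothing certificate, where the certified radius grows with the noise level $\sigma$ and the margin between the top two class probabilities. This is the generic certificate, shown only to illustrate the Gaussian/$\ell_2$ pairing; it is not the paper's assessment framework.

```python
from statistics import NormalDist

def certified_l2_radius(p_a, p_b, sigma):
    """Certified l2 radius of a Gaussian-smoothed classifier.

    p_a: (lower bound on) the top-class probability under N(x, sigma^2 I)
    noise; p_b: (upper bound on) the runner-up probability.
    radius = sigma / 2 * (Phi^{-1}(p_a) - Phi^{-1}(p_b)).
    """
    phi_inv = NormalDist().inv_cdf
    return 0.5 * sigma * (phi_inv(p_a) - phi_inv(p_b))

# Larger noise or a larger margin gives a larger certified radius.
print(certified_l2_radius(p_a=0.9, p_b=0.05, sigma=0.5))   # ~0.73
print(certified_l2_radius(p_a=0.9, p_b=0.05, sigma=1.0))   # ~1.46
```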

Differentially Private High Dimensional Sparse Covariance Matrix Estimation

no code implementations18 Jan 2019 Di Wang, Jinhui Xu

In this paper, we study the problem of estimating the covariance matrix under differential privacy, where the underlying covariance matrix is assumed to be sparse and of high dimensions.

Noninteractive Locally Private Learning of Linear Models via Polynomial Approximations

no code implementations17 Dec 2018 Di Wang, Adam Smith, Jinhui Xu

For the case of \emph{generalized linear losses} (such as hinge and logistic losses), we give an LDP algorithm whose sample complexity is only linear in the dimensionality $p$ and quasipolynomial in other terms (the privacy parameters $\epsilon$ and $\delta$, and the desired excess risk $\alpha$).

Empirical Risk Minimization in Non-interactive Local Differential Privacy Revisited

no code implementations NeurIPS 2018 Di Wang, Marco Gaboardi, Jinhui Xu

In this paper, we revisit the Empirical Risk Minimization problem in the non-interactive local model of differential privacy.

A Unified Framework for Clustering Constrained Data without Locality Property

no code implementations2 Oct 2018 Hu Ding, Jinhui Xu

To overcome the difficulty caused by the loss of locality, we present in this paper a unified framework, called {\em Peeling-and-Enclosing (PnE)}, to iteratively solve two variants of the constrained clustering problems, {\em constrained $k$-means clustering} ($k$-CMeans) and {\em constrained $k$-median clustering} ($k$-CMedian).

Constrained Clustering

Differentially Private Empirical Risk Minimization Revisited: Faster and More General

no code implementations NeurIPS 2017 Di Wang, Minwei Ye, Jinhui Xu

In this paper we study the differentially private Empirical Risk Minimization (ERM) problem in different settings.

Empirical Risk Minimization in Non-interactive Local Differential Privacy: Efficiency and High Dimensional Case

no code implementations NeurIPS 2018 Di Wang, Marco Gaboardi, Jinhui Xu

In the case of constant or low dimensionality ($p\ll n$), we first show that if the ERM loss function is $(\infty, T)$-smooth, then the sample complexity needed to achieve error $\alpha$ can avoid an exponential dependence on the dimensionality $p$ with base $1/\alpha$ (i.e., $\alpha^{-p}$), which answers a question in [smith 2017 interaction].

Large Scale Constrained Linear Regression Revisited: Faster Algorithms via Preconditioning

1 code implementation9 Feb 2018 Di Wang, Jinhui Xu

In this paper, we revisit the large-scale constrained linear regression problem and propose faster methods based on some recent developments in sketching and optimization.

regression
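
A minimal sketch of the sketching-plus-preconditioning idea for large constrained least squares: a random sketch of $A$ is used to build a right preconditioner $R$ (via a QR factorization of $SA$), after which a simple iterative solver on the well-conditioned system $AR^{-1}$ converges quickly. The Gaussian sketch, the nonnegativity constraint, and the projected-gradient loop below are illustrative choices, not the paper's exact algorithms.

```python
import numpy as np

def sketch_preconditioner(A, sketch_size):
    """Right preconditioner from a Gaussian sketch: QR of (S @ A) yields R
    such that A @ inv(R) is (approximately) well conditioned."""
    n, d = A.shape
    S = np.random.randn(sketch_size, n) / np.sqrt(sketch_size)
    _, R = np.linalg.qr(S @ A)
    return R

def constrained_ls_pgd(A, b, R, n_iters=200):
    """Projected gradient descent on min ||A x - b||^2 s.t. x >= 0,
    run in the preconditioned variable z with x = inv(R) @ z."""
    A_pre = A @ np.linalg.inv(R)                 # well-conditioned design
    step = 1.0 / np.linalg.norm(A_pre, 2) ** 2   # safe step size
    z = np.zeros(A.shape[1])
    for _ in range(n_iters):
        z = z - step * A_pre.T @ (A_pre @ z - b)
        x = np.maximum(np.linalg.solve(R, z), 0.0)   # project onto x >= 0
        z = R @ x
    return np.linalg.solve(R, z)

np.random.seed(0)
A = np.random.randn(5000, 50) @ np.diag(np.logspace(0, 3, 50))  # ill-conditioned
x_true = np.maximum(np.random.randn(50), 0.0)
b = A @ x_true + 0.01 * np.random.randn(5000)
R = sketch_preconditioner(A, sketch_size=400)
x_hat = constrained_ls_pgd(A, b, R)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```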

Deep Extreme Feature Extraction: New MVA Method for Searching Particles in High Energy Physics

no code implementations24 Mar 2016 Chao Ma, Tianchenghou, Bin Lan, Jinhui Xu, Zhenhua Zhang

Experimental data show that DEFE is able to train an ensemble of discriminative feature learners that boosts the overall performance of the final prediction.

Ensemble Learning

k-Prototype Learning for 3D Rigid Structures

no code implementations NeurIPS 2013 Hu Ding, Ronald Berezney, Jinhui Xu

In this paper, we study the following new variant of prototype learning, called {\em $k$-prototype learning problem for 3D rigid structures}: Given a set of 3D rigid structures, find a set of $k$ rigid structures so that each of them is a prototype for a cluster of the given rigid structures and the total cost (or dissimilarity) is minimized.

Clustering

Gauging Association Patterns of Chromosome Territories via Chromatic Median

no code implementations CVPR 2013 Hu Ding, Branislav Stojkovic, Ronald Berezney, Jinhui Xu

In this paper, we introduce a novel algorithmic tool for investigating association patterns of chromosome territories in a population of cells.
