no code implementations • 22 Apr 2022 • Yilin Kang, Yong Liu, Jian Li, Weiping Wang
In this paper, by introducing the Generalized Bernstein condition, we establish the first $\mathcal{O}\big(\frac{\sqrt{p}}{n\epsilon}\big)$ high-probability excess population risk bound for differentially private algorithms under the assumptions of $G$-Lipschitzness, $L$-smoothness, and the Polyak-Łojasiewicz condition, based on the gradient perturbation method.
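As a rough illustration of the gradient perturbation method mentioned above, the sketch below adds Gaussian noise to each gradient step on a $G$-Lipschitz loss. The function `dp_gradient_descent`, its noise calibration, and the composition factor are illustrative assumptions, not the algorithm or bound from the paper.

```python
import numpy as np

def dp_gradient_descent(grad_fn, w0, n, G, eps, delta, T, lr=0.1):
    """Gradient perturbation sketch: add Gaussian noise to each full gradient.

    grad_fn(w) -- gradient of the empirical risk at w (loss assumed G-Lipschitz)
    n          -- number of training samples
    G          -- Lipschitz constant (bounds each per-sample gradient norm)
    eps, delta -- target (eps, delta)-differential privacy budget
    T          -- number of iterations
    """
    w = np.array(w0, dtype=float)
    # Illustrative noise scale: the average gradient has sensitivity 2G/n;
    # the sqrt(T log(1/delta)) factor is a typical (not tight) composition bound.
    sigma = (2.0 * G / n) * np.sqrt(2.0 * T * np.log(1.25 / delta)) / eps
    for _ in range(T):
        noisy_grad = grad_fn(w) + np.random.normal(0.0, sigma, size=w.shape)
        w -= lr * noisy_grad
    return w
```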
no code implementations • 11 Apr 2022 • Yilin Kang, Yong Liu, Jian Li, Weiping Wang
To the best of our knowledge, this is the first analysis of the generalization performance of the general minimax paradigm that takes differential privacy into account.
no code implementations • 7 May 2021 • Yilin Kang, Yong Liu, Jian Li, Weiping Wang
Pairwise learning focuses on learning tasks with pairwise loss functions, which depend on pairs of training instances and thus naturally fit the modeling of relationships between pairs of samples.
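A minimal sketch of what a pairwise loss looks like, assuming a hinge-type ranking loss evaluated over all (positive, negative) pairs; the function `pairwise_hinge_risk` is purely illustrative and not tied to this paper's specific setting.

```python
import numpy as np

def pairwise_hinge_risk(scores, labels, margin=1.0):
    """Empirical risk for a pairwise (ranking) loss.

    scores -- model scores f(x_i) for each instance
    labels -- binary labels y_i in {0, 1}
    The loss is evaluated on every (positive, negative) pair, which is
    what makes the objective pairwise rather than pointwise.
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # hinge loss max(0, margin - (f(x_pos) - f(x_neg))) over all pairs
    diffs = pos[:, None] - neg[None, :]
    losses = np.maximum(0.0, margin - diffs)
    return losses.mean()

# usage example
scores = np.array([0.9, 0.2, 0.7, 0.1])
labels = np.array([1, 0, 1, 0])
print(pairwise_hinge_risk(scores, labels))
```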
no code implementations • 20 Feb 2020 • Yilin Kang, Jian Li, Yong Liu, Weiping Wang
Traditionally, in the field of differential privacy (DP), random noise is injected equally when training on different data instances.
1 code implementation • 20 Feb 2020 • Yilin Kang, Yong Liu, Ben Niu, Xin-Yi Tong, Likun Zhang, Weiping Wang
By adding noise to the original training data and training with the perturbed data, we achieve $(\epsilon,\delta)$-differential privacy on the final model, along with a form of privacy protection on the original data.
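A minimal sketch of the input-perturbation idea, assuming row-wise clipping followed by the standard Gaussian mechanism; the function `input_perturbation` and its noise scale are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def input_perturbation(X, eps, delta, clip_norm=1.0):
    """Input-perturbation sketch: add Gaussian noise directly to the training data.

    Each row of X is clipped to norm clip_norm so its contribution is bounded,
    then perturbed; any model trained on the perturbed data inherits
    (eps, delta)-DP by post-processing. The noise scale below is the standard
    Gaussian-mechanism calibration, not the paper's exact constant.
    """
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X_clipped = X * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return X_clipped + np.random.normal(0.0, sigma, size=X.shape)
```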
no code implementations • 23 Oct 2019 • Yilin Kang, Yong Liu, Weiping Wang
Through detailed theoretical analysis, we show that in the distributed setting, the noise bound and the excess empirical risk bound can be improved by taking into account the different weights held by the multiple parties.
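An illustrative sketch of weighting in the distributed setting, assuming each party perturbs its local gradient with noise calibrated to its own sample count and the aggregator averages with weights proportional to data size; this is a plausible reading of the idea, not the paper's exact mechanism.

```python
import numpy as np

def weighted_distributed_noisy_average(local_grads, party_sizes, G, eps, delta):
    """Illustrative weighted aggregation of noisy local gradients.

    Each party i holds n_i samples and contributes with weight n_i / N.
    Noise is calibrated to each party's own sample count, so parties with
    more data add relatively less noise -- the intuition behind improving
    the noise bound by accounting for different weights.
    """
    sizes = np.asarray(party_sizes, dtype=float)
    weights = sizes / sizes.sum()
    agg = np.zeros_like(local_grads[0], dtype=float)
    for g, n_i, w_i in zip(local_grads, sizes, weights):
        sigma_i = (2.0 * G / n_i) * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
        agg += w_i * (g + np.random.normal(0.0, sigma_i, size=g.shape))
    return agg
```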