no code implementations • 26 May 2023 • Puyu Wang, Yunwen Lei, Di Wang, Yiming Ying, Ding-Xuan Zhou
This sheds light on sufficient or necessary conditions for under-parameterized and over-parameterized NNs trained by GD to attain the desired risk rate of $O(1/\sqrt{n})$.
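As a rough illustration of the training procedure analyzed here (a minimal sketch, not the paper's setting or code), the following runs full-batch GD on a two-layer ReLU network for least-squares regression; the width `m`, step size, and data are illustrative assumptions.

```python
# Minimal sketch (assumed setting): full-batch GD on a two-layer ReLU
# network with fixed output weights, trained on a least-squares objective.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 50, 5, 200                    # samples, input dim, hidden width (assumed)
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

W = rng.standard_normal((m, d)) / np.sqrt(m)   # trainable first-layer weights
a = rng.choice([-1.0, 1.0], m) / np.sqrt(m)    # fixed output weights

eta = 0.5                               # step size (illustrative)
for _ in range(1000):
    H = np.maximum(X @ W.T, 0.0)        # hidden activations, shape (n, m)
    resid = H @ a - y                   # prediction residuals
    # Gradient of (1/2n) * ||H a - y||^2 with respect to W
    grad_W = ((resid[:, None] * (H > 0)) * a).T @ X / n
    W -= eta * grad_W
```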
no code implementations • 16 Sep 2022 • Puyu Wang, Yunwen Lei, Yiming Ying, Ding-Xuan Zhou
To the best of our knowledge, this is the first generalization analysis of SGMs when the gradients are sampled from a Markov process.
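A minimal sketch of the Markovian-sampling setting (assumed details, not the paper's code): the index of the sampled gradient evolves as a random walk over the data instead of being drawn i.i.d. at each step.

```python
# Hedged sketch of SGD with Markov-chain sampling: the sampled index moves
# to a neighboring index at each step, so consecutive gradients are
# dependent. Least-squares loss; all sizes and step sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, d = 100, 10
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)
i = 0                                   # current state of the Markov chain
for t in range(1, 5001):
    i = (i + rng.choice([-1, 1])) % n   # random walk on the index set
    grad = (X[i] @ w - y[i]) * X[i]     # stochastic gradient at state i
    w -= (1.0 / np.sqrt(t)) * grad      # decaying step size
```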
no code implementations • 9 Sep 2022 • Puyu Wang, Yunwen Lei, Yiming Ying, Ding-Xuan Zhou
In this paper, we focus on the privacy and utility (measured by excess risk bounds) of differentially private stochastic gradient descent (SGD) algorithms in the setting of stochastic convex optimization.
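For context, here is a generic gradient-perturbation DP-SGD sketch following the standard recipe (clip each per-sample gradient, add calibrated Gaussian noise); the clip norm `C` and noise multiplier `sigma` are assumed values, not taken from the paper.

```python
# Generic DP-SGD sketch (standard recipe, not the paper's exact algorithm):
# per-sample gradient clipping followed by Gaussian noise injection.
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 10
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)
C, sigma, eta = 1.0, 2.0, 0.1           # clip norm, noise multiplier, step size (assumed)
for t in range(500):
    i = rng.integers(n)
    g = (X[i] @ w - y[i]) * X[i]        # per-sample gradient
    g *= min(1.0, C / (np.linalg.norm(g) + 1e-12))   # clip to norm at most C
    g += rng.normal(0.0, sigma * C, size=d)          # Gaussian perturbation
    w -= eta * g
```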
1 code implementation • 23 Nov 2021 • Zhenhuan Yang, Yunwen Lei, Puyu Wang, Tianbao Yang, Yiming Ying
A popular approach to handling streaming data in pairwise learning is the online gradient descent (OGD) algorithm, where the current instance must be paired with a sufficiently large buffering set of previous instances, which leads to a scalability issue.
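A hedged sketch of buffered OGD for pairwise learning, using a pairwise hinge loss and a FIFO buffer as assumptions; pairing each new instance with every buffered instance is exactly what makes a large buffer costly.

```python
# Sketch of buffered online gradient descent for pairwise learning (e.g.
# AUC maximization). Loss, buffer size, and step size are illustrative.
import numpy as np

rng = np.random.default_rng(3)
d, B, eta = 10, 20, 0.05                # dim, buffer size, step size (assumed)
w = np.zeros(d)
buffer = []                             # stores previous (x, y) instances

for t in range(1000):
    x, y = rng.standard_normal(d), rng.choice([-1.0, 1.0])
    grad = np.zeros(d)
    for xb, yb in buffer:               # pair the new instance with the buffer
        if y != yb and (y - yb) * (w @ (x - xb)) < 1.0:
            grad -= (y - yb) * (x - xb)  # subgradient of pairwise hinge loss
    if buffer:
        w -= eta * grad / len(buffer)
    buffer.append((x, y))
    if len(buffer) > B:
        buffer.pop(0)                   # first-in, first-out buffer
```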
no code implementations • 17 Aug 2021 • Puyu Wang, Liang Wu, Yunwen Lei
Randomized coordinate descent (RCD) is a popular optimization algorithm with wide applications in solving various machine learning problems, which has motivated extensive theoretical analysis of its convergence behavior.
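A standard RCD instance for least squares (illustrative, not tied to this paper's analysis): pick one coordinate uniformly at random and take an exact coordinate-wise step.

```python
# Minimal randomized coordinate descent sketch for 0.5 * ||Aw - b||^2:
# each step updates a single uniformly random coordinate, scaled by its
# coordinate-wise Lipschitz constant ||A[:, j]||^2.
import numpy as np

rng = np.random.default_rng(4)
n, d = 100, 20
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
L = (A ** 2).sum(axis=0)                # coordinate-wise Lipschitz constants

w = np.zeros(d)
for _ in range(5000):
    j = rng.integers(d)                 # uniformly random coordinate
    g_j = A[:, j] @ (A @ w - b)         # partial derivative along coordinate j
    w[j] -= g_j / L[j]                  # exact coordinate-wise step
```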
no code implementations • 22 Jan 2021 • Puyu Wang, Yunwen Lei, Yiming Ying, Hai Zhang
We significantly relax these restrictive assumptions and establish privacy and generalization (utility) guarantees for private SGD algorithms using output and gradient perturbations associated with non-smooth convex losses.
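A minimal output-perturbation sketch under assumed noise scales: run plain SGD on a non-smooth (hinge) loss, then release the iterate plus Laplace noise. A real instantiation would calibrate the scale to the sensitivity and privacy budget rather than the placeholder used here.

```python
# Hedged output-perturbation sketch for a non-smooth convex loss: SGD runs
# without modification, and noise is added only once at the end.
import numpy as np

rng = np.random.default_rng(5)
n, d = 200, 10
X = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], n)

w = np.zeros(d)
for t in range(1, 2001):                # plain SGD on the hinge loss
    i = rng.integers(n)
    if y[i] * (X[i] @ w) < 1.0:
        w += (0.1 / np.sqrt(t)) * y[i] * X[i]

b = 0.5                                 # assumed noise scale; in practice it
w_private = w + rng.laplace(0.0, b, d)  # depends on epsilon and sensitivity
```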
no code implementations • 2 Aug 2019 • Puyu Wang, Hai Zhang
By the post-processing property of differential privacy, the proposed approach satisfies $\epsilon$-differential privacy even when the original problem is unstable.
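A toy illustration of the post-processing property (the statistic and scales are assumptions, not from the paper): privatize a bounded statistic once with Laplace noise, after which any downstream computation, however unstable, remains $\epsilon$-differentially private.

```python
# Illustrative post-processing sketch: add Laplace noise to a bounded
# statistic once; every function of the noisy statistic inherits the same
# epsilon-DP guarantee, regardless of how unstable that function is.
import numpy as np

rng = np.random.default_rng(6)
data = np.clip(rng.standard_normal(500), -1.0, 1.0)   # entries bounded in [-1, 1]

epsilon = 1.0
sensitivity = 2.0 / len(data)           # sensitivity of the mean on [-1, 1] data
noisy_mean = data.mean() + rng.laplace(0.0, sensitivity / epsilon)

# Arbitrary post-processing: still epsilon-DP by the post-processing property.
result = np.exp(noisy_mean) / (1.0 + np.exp(noisy_mean))
```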