Differentially Private Distributed Stochastic Optimization with Time-Varying Sample Sizes

18 Oct 2023 · Jimin Wang, Ji-Feng Zhang

Differentially private distributed stochastic optimization has become an active research topic due to the urgent need for privacy protection in distributed stochastic optimization. In this paper, two-time-scale stochastic approximation-type algorithms for differentially private distributed stochastic optimization with time-varying sample sizes are proposed, using gradient- and output-perturbation methods. For both the gradient- and output-perturbation cases, convergence of the algorithm and differential privacy with a finite cumulative privacy budget $\varepsilon$ over an infinite number of iterations are established simultaneously, which substantially distinguishes this work from existing results. The time-varying sample-size method is what enhances the privacy level and makes the finite cumulative privacy budget over infinitely many iterations achievable. By properly choosing a Lyapunov function, the algorithms achieve both almost-sure and mean-square convergence even when the added privacy noises have increasing variance. Furthermore, mean-square convergence rates are rigorously derived, showing how the added privacy noise affects the convergence rate. Finally, numerical examples, including distributed training on a benchmark machine learning dataset, demonstrate the efficiency and advantages of the algorithms.
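Since no implementation is linked, below is a minimal, illustrative sketch of the gradient-perturbation idea described in the abstract, not the authors' exact algorithm: agents on a ring network run a two-time-scale consensus-plus-gradient update, each iteration averages a time-varying (growing) mini-batch of clipped per-sample gradients, and Laplace noise is calibrated to the batch-averaged sensitivity so that the per-iteration budget eps_k is summable. The quadratic local objectives, Metropolis weights, clipping bound C, and the step-size, sample-size, and budget schedules are all assumptions chosen for a self-contained demo.

```python
# Illustrative sketch of gradient-perturbation DP distributed stochastic
# optimization with time-varying sample sizes. NOT the paper's algorithm:
# the network, objectives, and schedules below are assumptions for the demo.
import numpy as np

rng = np.random.default_rng(0)

m, d, T = 5, 3, 60               # agents, decision dimension, iterations
A = [rng.standard_normal((20, d)) for _ in range(m)]            # local data (assumed)
b = [A[i] @ np.ones(d) + 0.1 * rng.standard_normal(20) for i in range(m)]

# Doubly stochastic mixing matrix for a ring graph (Metropolis-style weights).
W = np.zeros((m, m))
for i in range(m):
    W[i, (i - 1) % m] = W[i, (i + 1) % m] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

x = np.zeros((m, d))             # agents' estimates
C = 1.0                          # per-sample l1 clipping bound (assumed)

for k in range(T):
    alpha = 1.0 / (k + 10)               # gradient step size (faster decay)
    gamma = 1.0 / (k + 2) ** 0.6         # consensus step size (slower decay)
    n_k = int(np.ceil(5 * 1.1 ** k))     # time-varying (growing) sample size
    eps_k = 0.5 * 0.95 ** k              # per-step budget, sum_k eps_k < inf

    x_new = np.empty_like(x)
    for i in range(m):
        # Mini-batch of per-sample gradients of the local least-squares loss.
        idx = rng.integers(0, A[i].shape[0], size=n_k)
        G = A[i][idx] * (A[i][idx] @ x[i] - b[i][idx])[:, None]
        G /= np.maximum(1.0, np.abs(G).sum(axis=1, keepdims=True) / C)  # l1 clip
        g = G.mean(axis=0)

        # Gradient perturbation: Laplace noise for the batch-averaged gradient,
        # whose l1 sensitivity is at most 2*C/n_k, giving roughly eps_k-DP per step.
        noise = rng.laplace(scale=2 * C / (n_k * eps_k), size=d)

        # Consensus + perturbed gradient step (two time scales).
        x_new[i] = x[i] + gamma * (W[i] @ x - x[i]) - alpha * (g + noise)
    x = x_new

print("disagreement:", np.linalg.norm(x - x.mean(axis=0)))
print("cumulative budget so far:", sum(0.5 * 0.95 ** k for k in range(T)))
```

The intuition the sketch tries to convey: because the batch-averaged sensitivity shrinks like 1/n_k, the per-step budget eps_k can decay geometrically (so the cumulative budget $\varepsilon = \sum_k \varepsilon_k$ stays finite over infinitely many iterations) without the injected noise growing here; the paper's analysis goes further and establishes convergence even when the privacy noise has increasing variance.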
