Differentially Private Distributed Nonconvex Stochastic Optimization with Quantized Communications

27 Mar 2024 · Jialong Chen, Jimin Wang, Ji-Feng Zhang

This paper proposes a new distributed nonconvex stochastic optimization algorithm that achieves privacy protection, communication efficiency, and convergence simultaneously. Specifically, each node adds time-varying privacy noise to its local state to avoid information leakage, and then quantizes the noise-perturbed state before transmitting it to improve communication efficiency. By employing a subsampling method controlled through a sample-size parameter, the proposed algorithm reduces the impact of the privacy noise and enhances the differential privacy level. When the global cost function satisfies the Polyak-Łojasiewicz condition, the mean and high-probability convergence rates and the oracle complexity of the proposed algorithm are given. Importantly, the proposed algorithm achieves both mean convergence and a finite cumulative differential privacy budget over infinitely many iterations as the sample size goes to infinity. A numerical example of distributed training on the MNIST dataset is given to show the effectiveness of the algorithm.
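
To make the per-node update concrete, here is a minimal sketch of the kind of iteration the abstract describes: perturb the local state with time-varying noise, quantize it before communication, mix with neighbors' quantized states, and take a subsampled stochastic gradient step. All specific choices below (Gaussian noise with a decaying schedule, a uniform quantizer, the mixing weights, the step size, and the helper names) are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, levels=16, bound=5.0):
    """Uniform quantizer on [-bound, bound] with `levels` levels (illustrative)."""
    x = np.clip(x, -bound, bound)
    step = 2 * bound / (levels - 1)
    return np.round((x + bound) / step) * step - bound

def local_step(x_i, neighbors_q, w_i, grads, k, gamma=0.05, sigma0=1.0):
    """One node's update: perturb, quantize, mix with neighbors, subsampled gradient step.

    x_i         : node i's current state, shape (d,)
    neighbors_q : quantized, noise-perturbed states received from neighbors
    w_i         : mixing weights over {i} and its neighbors (sums to 1)
    grads       : stochastic gradients sampled at x_i; len(grads) is the sample size
    """
    # 1) add time-varying privacy noise (decaying-variance Gaussian; an assumption)
    sigma_k = sigma0 / (k + 1) ** 0.5
    noisy = x_i + sigma_k * rng.standard_normal(x_i.shape)

    # 2) quantize the noise-perturbed state before "transmitting" it
    q_i = quantize(noisy)

    # 3) consensus mixing with the received quantized states
    mixed = w_i[0] * q_i + sum(w * q for w, q in zip(w_i[1:], neighbors_q))

    # 4) subsampled stochastic gradient step; averaging the sampled gradients
    #    reduces their variance (illustrative of the sample-size parameter's role)
    g_bar = np.mean(grads, axis=0)
    return mixed - gamma * g_bar

# Toy usage: 2-D state, two neighbors, sample size 4 (all values hypothetical).
x = np.zeros(2)
nq = [quantize(rng.standard_normal(2)), quantize(rng.standard_normal(2))]
x_next = local_step(x, nq, w_i=[0.5, 0.25, 0.25],
                    grads=rng.standard_normal((4, 2)), k=0)
```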
