Improved Convergence Rate for a Distributed Two-Time-Scale Gradient Method under Random Quantization

28 May 2021  ·  Marcos M. Vasconcelos, Thinh T. Doan, Urbashi Mitra

We study the so-called distributed two-time-scale gradient method for solving convex optimization problems over a network of agents when the communication bandwidth between nodes is limited, so that information exchanged between nodes must be quantized. Our main contribution is a novel analysis that yields an improved convergence rate for this method compared to existing works. In particular, we show that the method converges to the optimal solution at a rate $O(\log_2 k/\sqrt{k})$ when the underlying objective function is strongly convex and smooth. The key technique in our analysis is a Lyapunov function that simultaneously captures the coupling between the consensus and optimality errors generated by the method.
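
For intuition, below is a minimal sketch of a distributed gradient update in which each agent combines a fast (consensus) step size with a slow (gradient) step size and only exchanges randomly dithered, quantized copies of its iterate. The ring network, quadratic local objectives, step-size schedules, and quantizer resolution are illustrative assumptions for this sketch, not the paper's exact algorithm or constants.

import numpy as np

rng = np.random.default_rng(0)

# Problem: n agents, each with a strongly convex quadratic f_i(x) = 0.5*q_i*(x - c_i)^2.
n = 5
q = rng.uniform(1.0, 2.0, size=n)        # local curvatures
c = rng.uniform(-1.0, 1.0, size=n)       # local minimizers
x_star = np.sum(q * c) / np.sum(q)       # minimizer of sum_i f_i

# Doubly stochastic mixing matrix for a ring graph (lazy, equal weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    W[i, i] = 1.0 / 3.0

delta = 0.05  # quantizer resolution (illustrative)

def quantize(x):
    # Randomly dithered quantizer: unbiased, i.e., E[quantize(x)] = x.
    u = rng.uniform(0.0, 1.0, size=x.shape)
    return delta * np.floor(x / delta + u)

x = rng.uniform(-1.0, 1.0, size=n)       # local iterates
K = 20000
for k in range(K):
    alpha = 0.5 / (k + 1)                # slow time scale: gradient step size
    beta = 0.5 / (k + 1) ** (2.0 / 3.0)  # fast time scale: consensus step size
    z = quantize(x)                      # agents only see quantized neighbor values
    grad = q * (x - c)                   # local gradients
    # Two-time-scale update: consensus on quantized iterates, descent on local data.
    x = x + beta * (W @ z - z) - alpha * grad

print(f"consensus error : {np.max(np.abs(x - x.mean())):.4f}")
print(f"optimality error: {abs(x.mean() - x_star):.4f}")

Because the dithered quantizer is unbiased and the consensus step size decays more slowly than the gradient step size, the disagreement between agents and the distance to the optimum shrink together, which is the coupling the Lyapunov function in the paper is designed to capture.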
