Improved Convergence Rate for a Distributed Two-Time-Scale Gradient Method under Random Quantization
We study the so-called distributed two-time-scale gradient method for solving convex optimization problems over a network of agents when the communication bandwidth between the nodes is limited, so that information exchanged between the nodes must be quantized. Our main contribution is a novel analysis that yields an improved convergence rate for this method compared to existing works. In particular, we show that the method converges to the optimal solution at a rate $O(\log_2 k/\sqrt{k})$ when the underlying objective function is strongly convex and smooth. The key technique in our analysis is a Lyapunov function that simultaneously captures the coupling between the consensus and optimality errors generated by the method.
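For intuition, below is a minimal sketch of one iteration of a quantized two-time-scale update of this general form. The function names, the dithered (unbiased) quantizer, and the step-size roles are illustrative assumptions, not the paper's exact algorithm: each agent mixes quantized neighbor iterates with a fast consensus step size and takes a local gradient step with a slower step size.

```python
import numpy as np

def random_quantize(x, delta=0.05):
    """Unbiased (dithered) random quantization onto a grid of spacing delta.

    Each coordinate rounds down or up with probabilities chosen so that
    E[Q(x)] = x, a standard model of random quantization (an assumption
    here, not necessarily the paper's quantizer).
    """
    low = np.floor(x / delta) * delta
    p_up = (x - low) / delta                     # probability of rounding up
    return low + delta * (np.random.rand(*x.shape) < p_up)

def two_time_scale_step(x, grads, W, alpha, beta, delta=0.05):
    """One iteration of a (hypothetical) distributed two-time-scale step.

    x     : (n, d) array, row i is agent i's local iterate
    grads : (n, d) array, row i is the gradient of f_i at x[i]
    W     : (n, n) doubly stochastic mixing matrix of the network
    alpha : gradient step size (slow time scale)
    beta  : consensus step size (fast time scale), with alpha/beta -> 0
    """
    q = random_quantize(x, delta)                # agents exchange quantized iterates
    consensus = W @ q - x                        # disagreement with quantized neighbors
    return x + beta * consensus - alpha * grads  # mix fast, descend slowly
```

With diminishing step sizes where $\alpha_k/\beta_k \to 0$, the consensus error is damped on the faster time scale while optimization proceeds on the slower one, which is the coupling the Lyapunov analysis above is designed to capture.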