no code implementations • 10 Feb 2023 • Minhui Huang, Dewei Zhang, Kaiyi Ji
However, several important properties of federated learning, such as partial client participation and the linear speedup for convergence (i.e., the convergence rate and complexity improve linearly with the number of sampled clients) in the presence of non-i.i.d. datasets, still remain open.
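As a point of reference for partial client participation, the sketch below shows a FedAvg-style loop that samples only a subset of clients each round and averages their local models. The `local_update` oracle and the sampling scheme are illustrative assumptions, not the algorithm analyzed in this paper.

```python
# Minimal sketch of partial client participation in a FedAvg-style loop.
# `local_update(w, data) -> new weights` is a hypothetical per-client solver;
# only S of the N clients contribute to each aggregation round.
import random
import numpy as np

def fedavg_partial(global_w, clients, num_sampled, rounds, local_update):
    """clients: list of per-client datasets; num_sampled: clients per round."""
    for _ in range(rounds):
        sampled = random.sample(range(len(clients)), num_sampled)  # partial participation
        local_ws = [local_update(np.copy(global_w), clients[i]) for i in sampled]
        global_w = np.mean(local_ws, axis=0)  # average only the sampled clients' models
    return global_w
```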
no code implementations • 19 Jul 2022 • Dewei Zhang, Sam Davanloo Tajbakhsh
For two-level composition optimization, we present a Riemannian Stochastic Composition Gradient Descent (R-SCGD) method that finds an approximate stationary point, with expected squared Riemannian gradient norm smaller than $\epsilon$, in $O(\epsilon^{-2})$ calls to the stochastic gradient oracle of the outer function and to the stochastic function and gradient oracles of the inner function.
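To make the two-level structure concrete, the sketch below performs a Riemannian stochastic composition gradient step on the unit sphere: a running average tracks the inner function value, the chain rule gives a Euclidean gradient, which is projected to the tangent space and followed by a retraction. The oracle interfaces, step sizes, and sphere geometry are assumptions for illustration, not the authors' R-SCGD implementation.

```python
# Illustrative Riemannian stochastic composition gradient loop on the unit sphere.
# inner_val/inner_jac: stochastic oracles for the inner function g and its Jacobian;
# outer_grad: stochastic gradient oracle for the outer function f (assumed interfaces).
import numpy as np

def proj_tangent(x, v):
    """Project a Euclidean gradient onto the tangent space of the sphere at x."""
    return v - np.dot(x, v) * x

def retract(x, v):
    """Retraction on the sphere: move in the tangent direction, then renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

def r_scgd_sketch(x0, inner_val, inner_jac, outer_grad, steps, eta=0.01, beta=0.1):
    x, u = x0, inner_val(x0)                               # u tracks g(x) via a moving average
    for _ in range(steps):
        u = (1 - beta) * u + beta * inner_val(x)           # running estimate of the inner value g(x)
        euclid_grad = inner_jac(x).T @ outer_grad(u)       # chain rule: J_g(x)^T grad f(u)
        rgrad = proj_tangent(x, euclid_grad)               # Riemannian gradient on the sphere
        x = retract(x, -eta * rgrad)                       # retraction step along the negative gradient
    return x
```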