Fast and Sample-Efficient Federated Low Rank Matrix Recovery from Column-wise Linear and Quadratic Projections

20 Feb 2021 · Seyedehsara Nayer, Namrata Vaswani

We study the following lesser-known low rank (LR) recovery problem: recover an $n \times q$ rank-$r$ matrix, $X^* = [x^*_1, x^*_2, \dots, x^*_q]$, with $r \ll \min(n,q)$, from $m$ independent linear projections of each of its $q$ columns, i.e., from $y_k := A_k x^*_k$, $k \in [q]$, where each $y_k$ is an $m$-length vector with $m < n$. The matrices $A_k$ are known and mutually independent for different $k$. We introduce a novel gradient descent (GD) based solution called AltGD-Min. We show that, if the $A_k$s are i.i.d. with i.i.d. Gaussian entries, and if the right singular vectors of $X^*$ satisfy the incoherence assumption, then $\epsilon$-accurate recovery of $X^*$ is possible with order $(n+q) r^2 \log(1/\epsilon)$ total samples and order $mqnr \log(1/\epsilon)$ time. Compared with existing work, this is the fastest solution; for $\epsilon < r^{1/4}$, it also has the best sample complexity. A simple extension of AltGD-Min also provably solves LR Phase Retrieval, which is a magnitude-only generalization of the above problem. AltGD-Min factorizes the unknown $X$ as $X = UB$, where $U$ and $B$ are matrices with $r$ columns and $r$ rows, respectively. It alternates between a (projected) GD step for updating $U$ and a minimization step for updating $B$. Each of its iterations is as fast as an iteration of regular projected GD because the minimization over $B$ decouples column-wise. At the same time, we can prove exponential error decay for AltGD-Min, something we are unable to do for projected GD. Finally, AltGD-Min can be efficiently federated, with a communication cost of only $nr$ per node instead of the $nq$ needed by projected GD.
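Since the abstract fully specifies the AltGD-Min iteration (column-wise least squares for $B$, a gradient step plus orthonormalization for $U$), a minimal NumPy sketch may help make it concrete. The problem sizes, the step size, and the plain (non-truncated) spectral initialization below are our own illustrative choices, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (our choice): n x q rank-r X*, m measurements per column, m < n.
n, q, r, m = 50, 60, 3, 30

# Ground truth X* = U* B* and i.i.d. Gaussian measurement matrices A_k.
Ustar = np.linalg.qr(rng.standard_normal((n, r)))[0]
Bstar = rng.standard_normal((r, q))
Xstar = Ustar @ Bstar
A = rng.standard_normal((q, m, n))         # A[k] is the m x n matrix A_k
Y = np.einsum('kmn,nk->mk', A, Xstar)      # column k of Y is y_k = A_k x*_k

# Spectral initialization, simplified: top-r left singular vectors of the matrix
# whose k-th column is A_k^T y_k / m (the paper uses a truncated variant).
X0 = np.stack([A[k].T @ Y[:, k] for k in range(q)], axis=1) / m
U = np.linalg.svd(X0, full_matrices=False)[0][:, :r]

# Heuristic step size in the spirit of eta ~ c / (m * sigma_1^2); not the paper's constant.
eta = 0.4 / (m * np.linalg.norm(X0, 2) ** 2)

for _ in range(300):
    # Min step for B: q decoupled r-dimensional least-squares problems,
    # b_k = argmin_b ||y_k - (A_k U) b||^2.
    B = np.stack([np.linalg.lstsq(A[k] @ U, Y[:, k], rcond=None)[0]
                  for k in range(q)], axis=1)
    # GD step for U on f(U) = sum_k ||y_k - A_k U b_k||^2;
    # grad = sum_k A_k^T (A_k U b_k - y_k) b_k^T (constant 2 absorbed into eta).
    grad = np.zeros((n, r))
    for k in range(q):
        resid = A[k] @ (U @ B[:, k]) - Y[:, k]
        grad += np.outer(A[k].T @ resid, B[:, k])
    # Projection: re-orthonormalize the columns of U via QR.
    U = np.linalg.qr(U - eta * grad)[0]

# Final B for the updated U, then check recovery accuracy.
B = np.stack([np.linalg.lstsq(A[k] @ U, Y[:, k], rcond=None)[0]
              for k in range(q)], axis=1)
print("relative error:", np.linalg.norm(U @ B - Xstar) / np.linalg.norm(Xstar))
```

Note how the $B$ update decouples into $q$ independent $r$-dimensional least-squares problems, which is what makes each iteration as cheap as a projected-GD iteration. In the federated setting, node $k$ would hold $(y_k, A_k)$, compute $b_k$ and its gradient term locally, and transmit only an $n \times r$ quantity, matching the $nr$ per-node communication cost stated above.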

Categories

Information Theory
