# Compressed Deep Networks: Goodbye SVD, Hello Robust Low-Rank Approximation

11 Sep 2020

A common technique for compressing a neural network is to compute the rank-$k$ $\ell_2$ approximation $A_{k,2}$ of the matrix $A\in\mathbb{R}^{n\times d}$ that corresponds to a fully connected layer (or embedding layer). Here, $d$ is the number of neurons in the layer, $n$ is the number in the next one, and $A_{k,2}$ can be stored in $O((n+d)k)$ memory instead of $O(nd)$...
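As a concrete illustration of the baseline the paper argues against, here is a minimal sketch of the classical rank-$k$ $\ell_2$ (Frobenius-optimal) approximation via truncated SVD in NumPy. The layer sizes `n`, `d` and the rank `k` are arbitrary example values, not taken from the paper:

```python
import numpy as np

# Hypothetical sizes: d neurons in the layer, n in the next one,
# and a target rank k (illustrative values, not from the paper).
n, d, k = 256, 512, 32

rng = np.random.default_rng(0)
A = rng.standard_normal((n, d))  # weight matrix of a fully connected layer

# Truncated SVD gives the best rank-k approximation in the l2 / Frobenius norm.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
L = U[:, :k] * s[:k]   # n x k factor (singular values folded in)
R = Vt[:k]             # k x d factor
A_k2 = L @ R           # the rank-k approximation A_{k,2}

# Storing the two factors costs (n + d) * k entries instead of n * d.
print((n + d) * k, n * d)  # → 24576 131072
```

At inference time one never materializes `A_k2`: the layer is applied as `(x @ L.T) @ R.T`-style products, which is also where the $O((n+d)k)$ compute saving comes from.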

