Auto-encoders for compressed sensing

Compressed sensing is the problem of recovering a structured high-dimensional signal ${\bf x}\in\mathbb{R}^n$ from its under-determined noisy linear measurements ${\bf y}\in\mathbb{R}^m$, where $m\ll n$. While the vast majority of the literature in this area concerns sparse signals, recent years have seen considerable progress on compressed sensing of signals with structures beyond sparsity. One promising approach in this direction employs generative models based on trained neural networks. In this paper, we study the performance of an iterative algorithm, based on projected gradient descent, that employs an auto-encoder to define and enforce the source structure. The auto-encoder consists of a generative function $g:\mathbb{R}^k\rightarrow\mathbb{R}^n$ and a separate neural network trained to act as the inverse of $g$. We prove that, for a generative model $g$ with $\ell_2$ representation error $\delta$, given roughly $m>40k\log\frac{1}{\delta}$ measurements, such an algorithm converges, even in the presence of additive white Gaussian noise.

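To make the algorithm concrete, below is a minimal sketch of projected gradient descent with an auto-encoder projector, under the measurement model ${\bf y} = A{\bf x} + {\bf z}$. The names `g` (the generative function $g:\mathbb{R}^k\rightarrow\mathbb{R}^n$), `g_inv` (the network trained as its inverse), `step_size`, and `num_iters` are illustrative assumptions, not the paper's notation or implementation.

```python
import numpy as np

def pgd_autoencoder(y, A, g, g_inv, step_size, num_iters=100):
    """Hypothetical sketch: recover x from y = A @ x + noise via
    projected gradient descent, projecting onto the range of g.

    y         : (m,) measurement vector
    A         : (m, n) measurement matrix, with m << n
    g         : decoder mapping latent codes in R^k to signals in R^n
    g_inv     : encoder trained to approximate the inverse of g
    step_size : gradient step size for the data-fidelity term
    """
    m, n = A.shape
    x = np.zeros(n)  # initial signal estimate
    for _ in range(num_iters):
        # Gradient step on the least-squares loss ||y - A x||^2 / 2
        s = x + step_size * A.T @ (y - A @ x)
        # Approximate projection onto the range of g: encode, then decode
        x = g(g_inv(s))
    return x
```

The projection step is where the auto-encoder enters: instead of an exact (and generally intractable) projection onto the range of $g$, the trained inverse network composed with $g$ serves as an approximate projector with $\ell_2$ representation error $\delta$.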