Learning Spectral Regularizations for Linear Inverse Problems

23 Oct 2020 · Hartmut Bauermeister, Martin Burger, Michael Moeller

A central challenge in linear inverse problems is that most such problems are ill-posed: the solution does not depend continuously on the data. To analyze this effect and reestablish a continuous dependence, classical theory in Hilbert spaces largely relies on analyzing and manipulating the singular values of the linear operator and its pseudoinverse, with the goal of, on the one hand, keeping the singular values of the reconstruction operator bounded and, on the other hand, approximating the pseudoinverse sufficiently well for a given noise level. While classical regularization methods manipulate the singular values via explicitly defined functions, this paper considers learning such parameter choice rules in a way that yields higher-quality reconstructions while remaining within the setting of provably convergent spectral regularization methods. We discuss different ways of parametrizing our spectral regularization methods via neural networks, interpret existing feedforward networks in the setting of spectral regularization, where they can become provably convergent via an additional projection, and finally demonstrate their superiority in 1D numerical examples.
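To make the classical baseline concrete, the following is a minimal sketch (not the paper's code) of spectral regularization via the SVD: the Tikhonov filter g_α(σ) = σ/(σ² + α) replaces the pseudoinverse's 1/σ and keeps the singular values of the reconstruction operator bounded by 1/(2√α). The forward operator, noise level, and choice α are illustrative assumptions; the paper's contribution is to learn such filters rather than fix them by an explicit formula.

```python
import numpy as np

def tikhonov_reconstruct(A, y, alpha):
    """Spectrally regularized solution x_alpha = V g_alpha(S) U^T y
    with the Tikhonov filter g_alpha(s) = s / (s^2 + alpha)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filtered = s / (s**2 + alpha)        # bounded, unlike 1/s of the pseudoinverse
    return Vt.T @ (filtered * (U.T @ y))

# Illustrative ill-conditioned forward operator: discretized integration
# (a smoothing operator whose singular values decay, so inversion amplifies noise).
n = 50
A = np.tril(np.ones((n, n))) / n
x_true = np.sin(np.linspace(0, np.pi, n))
rng = np.random.default_rng(0)
y = A @ x_true + 1e-2 * rng.standard_normal(n)   # noisy data

x_naive = np.linalg.solve(A, y)                  # unregularized: noise blows up
x_reg = tikhonov_reconstruct(A, y, alpha=1e-3)   # spectrally filtered solution

print("naive error:", np.linalg.norm(x_naive - x_true))
print("regularized error:", np.linalg.norm(x_reg - x_true))
```

With the filter in place, the small singular values no longer amplify the noise, at the price of a bias controlled by α; a learned parameter choice rule aims to pick this trade-off from data while preserving the convergence guarantees of the spectral framework.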
