Variational Regularization in Inverse Problems and Machine Learning

8 Dec 2021  ·  Martin Burger

This paper discusses basic results and recent developments on variational regularization methods as developed for inverse problems. In a typical setup we review the basic properties needed to obtain a convergent regularization scheme, and further discuss the derivation of quantitative estimates and the ingredients they require, such as Bregman distances for convex functionals. Beyond the approach developed for inverse problems, we also discuss variational regularization in machine learning and work out some connections to classical regularization theory. In particular, we discuss a reinterpretation of machine learning problems in the framework of regularization theory and a reinterpretation of variational methods for inverse problems in the framework of risk minimization. Moreover, we establish some previously unknown connections between error estimates in Bregman distances and generalization errors.
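
For orientation, here is a minimal sketch of the standard variational regularization setup and of the Bregman distance the abstract refers to, written in generic notation (forward operator A, data f, convex regularization functional J, regularization parameter alpha); this is the textbook formulation and need not match the paper's own notation.

    % Variational regularization of the inverse problem A u = f:
    % a data-fidelity term plus a weighted convex regularizer J.
    \[
      u_\alpha \in \operatorname*{arg\,min}_{u} \; \tfrac{1}{2}\,\|A u - f\|^2 + \alpha\, J(u)
    \]
    % For convex J, quantitative error estimates are typically stated in the
    % Bregman distance induced by J at v with subgradient p \in \partial J(v):
    \[
      D_J^{p}(u, v) = J(u) - J(v) - \langle p,\, u - v \rangle
    \]

Since J need not be differentiable (e.g. total variation or the 1-norm), the Bregman distance is defined with respect to a chosen subgradient p rather than a gradient; it is this generalized distance that the abstract connects to generalization errors in learning.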
