Search Results for author: Thomas Möllenhoff

Found 18 papers, 8 papers with code

Conformal Prediction via Regression-as-Classification

no code implementations • 12 Apr 2024 • Etash Guha, Shlok Natarajan, Thomas Möllenhoff, Mohammad Emtiyaz Khan, Eugene Ndiaye

Conformal prediction (CP) for regression can be challenging, especially when the output distribution is heteroscedastic, multimodal, or skewed.

Tasks: Classification, Conformal Prediction (+1 more)
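
To make the regression-as-classification idea concrete, here is a minimal sketch of split conformal prediction over discretized y-bins. The nonconformity score (one minus the predicted probability of the true bin) and the classifier-probability inputs are generic placeholders, not the paper's actual construction:

```python
import numpy as np

def conformal_bin_sets(proba_cal, y_cal_bin, proba_test, alpha=0.1):
    """Split conformal prediction over y-bins with 1 - alpha coverage."""
    n = len(y_cal_bin)
    # Nonconformity score: one minus the predicted probability of the true bin.
    scores = 1.0 - proba_cal[np.arange(n), y_cal_bin]
    # Finite-sample-corrected quantile of the calibration scores.
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    # Prediction set: every bin whose score would have conformed.
    return [np.where(1.0 - p <= q)[0] for p in proba_test]
```

Because the returned set is a union of bins rather than a single symmetric interval, it can adapt to the heteroscedastic, multimodal, and skewed output distributions mentioned in the abstract.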

Variational Learning is Effective for Large Deep Networks

1 code implementation • 27 Feb 2024 • Yuesong Shen, Nico Daheim, Bai Cong, Peter Nickl, Gian Maria Marconi, Clement Bazan, Rio Yokota, Iryna Gurevych, Daniel Cremers, Mohammad Emtiyaz Khan, Thomas Möllenhoff

We give extensive empirical evidence against the common belief that variational learning is ineffective for large neural networks.
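
As a rough illustration of what "variational learning" means here, the sketch below performs one reparameterized stochastic step on a diagonal Gaussian posterior over the weights. It is a bare-bones toy (the KL term to the prior is omitted); the paper's IVON optimizer uses a more refined natural-gradient update:

```python
import torch

def variational_step(mu, log_sigma, loss_fn, lr=1e-2):
    # Draw one reparameterized weight sample w ~ N(mu, sigma^2).
    eps = torch.randn_like(mu)
    w = (mu + log_sigma.exp() * eps).detach().requires_grad_(True)
    loss = loss_fn(w)
    (g,) = torch.autograd.grad(loss, w)
    with torch.no_grad():
        mu -= lr * g                                  # dloss/dmu = dloss/dw
        log_sigma -= lr * g * eps * log_sigma.exp()   # chain rule through the sample
    return loss.item()
```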

The Memory Perturbation Equation: Understanding Model's Sensitivity to Data

1 code implementation • 30 Oct 2023 • Peter Nickl, Lu Xu, Dharmesh Tailor, Thomas Möllenhoff, Mohammad Emtiyaz Khan

Understanding a model's sensitivity to its training data is crucial, but doing so can be challenging and costly, especially during training.

Model Merging by Uncertainty-Based Gradient Matching

no code implementations • 19 Oct 2023 • Nico Daheim, Thomas Möllenhoff, Edoardo Maria Ponti, Iryna Gurevych, Mohammad Emtiyaz Khan

Models trained on different datasets can be merged by a weighted averaging of their parameters, but why does this work, and when can it fail?
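
The baseline operation in question is simple: a per-parameter weighted average of the models' state dicts, as in the PyTorch sketch below. The paper's contribution is choosing and correcting these weights via uncertainty-based gradient matching, which is not shown here:

```python
import torch

def merge_state_dicts(state_dicts, weights):
    """Weighted average of parameter tensors, key by key."""
    total = sum(weights)
    return {
        k: sum(w * sd[k] for w, sd in zip(weights, state_dicts)) / total
        for k in state_dicts[0]
    }

# e.g. merged = merge_state_dicts([model_a.state_dict(), model_b.state_dict()], [0.7, 0.3])
```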

The Lie-Group Bayesian Learning Rule

no code implementations • 8 Mar 2023 • Eren Mehmet Kıral, Thomas Möllenhoff, Mohammad Emtiyaz Khan

This simplifies all three difficulties in many cases, providing flexible parametrizations through the group's action, simple gradient computation through reparameterization, and updates that always stay on the manifold.
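
A toy illustration of the "updates always stay on the manifold" point: if a parameter lives on the multiplicative group of positive reals, updating through the group's exponential map preserves positivity by construction, whereas an additive step can leave the manifold. This is a generic illustration of the mechanism, not the paper's learning rule:

```python
import numpy as np

def additive_step(sigma, grad, lr=0.5):
    return sigma - lr * grad           # can go negative (off the manifold)

def group_step(sigma, grad, lr=0.5):
    return sigma * np.exp(-lr * grad)  # always positive, since sigma > 0

print(additive_step(0.1, 1.0))  # -0.4: invalid as a scale parameter
print(group_step(0.1, 1.0))     #  0.0607...: still on the manifold
```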

SAM as an Optimal Relaxation of Bayes

1 code implementation • 4 Oct 2022 • Thomas Möllenhoff, Mohammad Emtiyaz Khan

Sharpness-aware minimization (SAM) and related adversarial deep-learning methods can drastically improve generalization, but their underlying mechanisms are not yet fully understood.
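
For reference, the core SAM update is two gradient evaluations per step: ascend to an adversarial weight perturbation of norm rho, then descend using the gradient taken there. A minimal PyTorch sketch of this standard procedure (the paper's contribution is reinterpreting it as an optimal relaxation of a Bayesian objective, which the code does not capture):

```python
import torch

def sam_step(params, loss_fn, base_opt, rho=0.05):
    base_opt.zero_grad()
    loss_fn().backward()                        # gradient at the current weights w
    norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params)) + 1e-12
    eps = {}
    with torch.no_grad():
        for p in params:
            eps[p] = rho * p.grad / norm        # ascent direction
            p.add_(eps[p])                      # move to w + eps
    base_opt.zero_grad()
    loss = loss_fn()
    loss.backward()                             # gradient at the perturbed weights
    with torch.no_grad():
        for p in params:
            p.sub_(eps[p])                      # restore w
    base_opt.step()                             # descend with the perturbed gradient
    return loss.item()
```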

Lifting the Convex Conjugate in Lagrangian Relaxations: A Tractable Approach for Continuous Markov Random Fields

no code implementations • 13 Jul 2021 • Hartmut Bauermeister, Emanuel Laude, Thomas Möllenhoff, Michael Moeller, Daniel Cremers

In contrast to existing discretizations which suffer from a grid bias, we show that a piecewise polynomial discretization better preserves the continuous nature of our problem.

Tasks: Stereo Matching

Optimization of Graph Total Variation via Active-Set-based Combinatorial Reconditioning

1 code implementation • 27 Feb 2020 • Zhenzhang Ye, Thomas Möllenhoff, Tao Wu, Daniel Cremers

Structured convex optimization on weighted graphs finds numerous applications in machine learning and computer vision.

Informative GANs via Structured Regularization of Optimal Transport

no code implementations • 4 Dec 2019 • Pierre Bréchet, Tao Wu, Thomas Möllenhoff, Daniel Cremers

We tackle the challenge of disentangled representation learning in generative adversarial networks (GANs) from the perspective of regularized optimal transport (OT).

Tasks: Representation Learning

Flat Metric Minimization with Applications in Generative Modeling

1 code implementation • 12 May 2019 • Thomas Möllenhoff, Daniel Cremers

We take the novel perspective of viewing data not as a probability distribution but as a current.

Controlling Neural Networks via Energy Dissipation

no code implementations • ICCV 2019 • Michael Moeller, Thomas Möllenhoff, Daniel Cremers

The last decade has seen tremendous success in solving various computer vision problems with the help of deep learning techniques.

Tasks: Computed Tomography (CT), Deblurring (+2 more)

Combinatorial Preconditioners for Proximal Algorithms on Graphs

no code implementations • 16 Jan 2018 • Thomas Möllenhoff, Zhenzhang Ye, Tao Wu, Daniel Cremers

We present a novel preconditioning technique for proximal optimization methods that relies on graph algorithms to construct effective preconditioners.

Tasks: BIG-bench Machine Learning
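
For contrast with the combinatorial construction, the standard diagonal preconditioners of Pock and Chambolle (2011) for a linear operator K can be computed in a few lines; the paper's graph-based preconditioners are designed to improve on this simple baseline, which the sketch below does not reproduce:

```python
import numpy as np

def diagonal_preconditioners(K, alpha=1.0):
    """Pock-Chambolle diagonal step sizes for primal-dual methods with operator K."""
    A = np.abs(K)
    tau = 1.0 / np.maximum(A.sum(axis=0) ** (2.0 - alpha), 1e-12)  # per primal variable
    sigma = 1.0 / np.maximum(A.sum(axis=1) ** alpha, 1e-12)        # per dual variable
    return tau, sigma
```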

Proximal Backpropagation

1 code implementation • ICLR 2018 • Thomas Frerix, Thomas Möllenhoff, Michael Moeller, Daniel Cremers

Specifically, we show that backpropagation of a prediction error is equivalent to sequential gradient descent steps on a quadratic penalty energy, which comprises the network activations as variables of the optimization.
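
This equivalence is easy to check numerically for a single linear layer with squared loss: one gradient step on the activation variable, followed by the gradient of the quadratic penalty with respect to the weights, reproduces the backpropagation gradient exactly when rho = 1/tau. A toy verification (not the full ProxProp algorithm, which replaces some of these steps with proximal ones):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4)); x = rng.normal(size=4); y = rng.normal(size=3)

a = W @ x                          # forward pass; activation as a free variable
g_a = a - y                        # dL/da for L(a) = 0.5 * ||a - y||^2
g_backprop = np.outer(g_a, x)      # ordinary backprop gradient w.r.t. W

tau = 0.1; rho = 1.0 / tau
a_new = a - tau * g_a              # gradient step on the activation
# Gradient of the penalty 0.5 * rho * ||a_new - W x||^2 w.r.t. W:
g_penalty = rho * np.outer(W @ x - a_new, x)
assert np.allclose(g_backprop, g_penalty)
```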

Sublabel-Accurate Discretization of Nonconvex Free-Discontinuity Problems

no code implementations • ICCV 2017 • Thomas Möllenhoff, Daniel Cremers

In this work we show how sublabel-accurate multilabeling approaches can be derived by approximating a classical label-continuous convex relaxation of nonconvex free-discontinuity problems.

Sublabel-Accurate Convex Relaxation of Vectorial Multilabel Energies

1 code implementation • 7 Apr 2016 • Emanuel Laude, Thomas Möllenhoff, Michael Moeller, Jan Lellmann, Daniel Cremers

Convex relaxations of nonconvex multilabel problems have been demonstrated to produce superior (provably optimal or near-optimal) solutions to a variety of classical computer vision problems.

Tasks: Color Image Denoising, Image Denoising (+1 more)

Sublabel-Accurate Relaxation of Nonconvex Energies

2 code implementations • CVPR 2016 • Thomas Möllenhoff, Emanuel Laude, Michael Moeller, Jan Lellmann, Daniel Cremers

We propose a novel spatially continuous framework for convex relaxations based on functional lifting.

The Primal-Dual Hybrid Gradient Method for Semiconvex Splittings

no code implementations • 7 Jul 2014 • Thomas Möllenhoff, Evgeny Strekalovskiy, Michael Moeller, Daniel Cremers

This paper analyzes a recent reformulation of the primal-dual hybrid gradient method [Zhu and Chan 2008; Pock, Cremers, Bischof and Chambolle 2009; Esser, Zhang and Chan 2010; Chambolle and Pock 2011], which makes it possible to apply the method to nonconvex regularizers, as first proposed for truncated quadratic penalization in [Strekalovskiy and Cremers 2014].
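
As background, the convex primal-dual hybrid gradient iteration alternates a dual proximal ascent, a primal proximal descent, and an extrapolation step. A minimal sketch for 1D total-variation denoising, a standard convex test problem (the paper's contribution is the analysis of the nonconvex, semiconvex setting, which this sketch does not cover):

```python
import numpy as np

def pdhg_tv_denoise(b, lam=1.0, tau=0.25, sigma=0.5, theta=1.0, iters=500):
    """Solve min_x 0.5*||x - b||^2 + lam*||D x||_1 with PDHG."""
    n = len(b)
    D = np.diff(np.eye(n), axis=0)                 # forward-difference operator
    x = b.copy(); x_bar = b.copy(); y = np.zeros(n - 1)
    for _ in range(iters):
        # Dual step: prox of the conjugate of lam*||.||_1 is projection
        # onto the l_inf ball of radius lam.
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)
        # Primal step: closed-form prox of the quadratic data term.
        x_new = (x - tau * (D.T @ y) + tau * b) / (1.0 + tau)
        x_bar = x_new + theta * (x_new - x)        # extrapolation
        x = x_new
    return x

# Convergence needs tau * sigma * ||D||^2 < 1; here 0.25 * 0.5 * 4 = 0.5.
```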
