Search Results for author: Sören Dittmer

Found 18 papers, 8 papers with code

FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization

1 code implementation • 29 May 2024 • Fan Zhang, Carlos Esteve-Yagüe, Sören Dittmer, Carola-Bibiane Schönlieb, Michael Roberts

This study contributes to personalized federated learning (PFL) by establishing a solid theoretical foundation for the proposed method and offering a robust, ready-to-use framework that effectively addresses the challenges posed by non-IID data in federated learning (FL).

Personalized Federated Learning
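The bi-level MAP formulation itself is not spelled out in this snippet, but the general idea of personalizing via a prior centered on a shared global model can be sketched with a proximal, MAP-style local update. The quadratic penalty, step size, and toy objective below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def map_local_update(w_local, w_global, grad_fn, lam=0.1, lr=0.05, steps=300):
    """Gradient descent on local loss + lam * ||w - w_global||^2.

    The Gaussian-prior (MAP) penalty pulls the personalized model toward
    the shared global model; lam trades data fit against the prior.
    """
    w = w_local.copy()
    for _ in range(steps):
        g = grad_fn(w) + 2.0 * lam * (w - w_global)
        w = w - lr * g
    return w

# Toy quadratic local objective 0.5*||w - target||^2 with known minimizer.
target = np.array([1.0, -2.0])
grad_fn = lambda w: w - target
w_global = np.zeros(2)
w_pers = map_local_update(np.zeros(2), w_global, grad_fn, lam=0.1)
# The fixed point lies between the local optimum and the global model:
# w* = target / (1 + 2*lam)
```

With `lam = 0` this reduces to purely local training; larger `lam` shrinks the personalized weights toward the global model.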

A study of why we need to reassess full reference image quality assessment with medical images

no code implementations • 29 May 2024 • Anna Breger, Ander Biguri, Malena Sabaté Landman, Ian Selby, Nicole Amberg, Elisabeth Brunner, Janek Gröhl, Sepideh Hatamikia, Clemens Karner, Lipeng Ning, Sören Dittmer, Michael Roberts, AIX-COVNET Collaboration, Carola-Bibiane Schönlieb

Image quality assessment (IQA) is indispensable not only in clinical practice, to ensure high standards, but also in the development of novel algorithms that operate on medical images with reference data.

Image Quality Assessment • SSIM

The curious case of the test set AUROC

1 code implementation • 19 Dec 2023 • Michael Roberts, Alon Hazan, Sören Dittmer, James H. F. Rudd, Carola-Bibiane Schönlieb

Whilst the size and complexity of ML models have rapidly and significantly increased over the past decade, the methods for assessing their performance have not kept pace.

Specificity
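The abstract's point about assessment methods can be made concrete: AUROC is just the Mann–Whitney probability that a randomly chosen positive outscores a randomly chosen negative. A minimal dependency-free computation (illustrative background, not the paper's specific critique):

```python
def auroc(scores_pos, scores_neg):
    """AUROC as P(pos > neg) + 0.5 * P(tie), over all pos/neg pairs."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Perfectly separated scores give AUROC 1.0; chance-level ordering gives 0.5.
print(auroc([0.9, 0.8], [0.1, 0.2]))  # → 1.0
```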

Navigating the challenges in creating complex data systems: a development philosophy

no code implementations • 21 Oct 2022 • Sören Dittmer, Michael Roberts, Julian Gilbey, Ander Biguri, AIX-COVNET Collaboration, Jacobus Preller, James H. F. Rudd, John A. D. Aston, Carola-Bibiane Schönlieb

In this perspective, we argue that despite the democratization of powerful tools for data science and machine learning over the last decade, developing the code for a trustworthy and effective data science system (DSS) is getting harder.

Philosophy

SELTO: Sample-Efficient Learned Topology Optimization

no code implementations • 12 Sep 2022 • Sören Dittmer, David Erzmann, Henrik Harms, Peter Maass

Recent developments in Deep Learning (DL) suggest a vast potential for Topology Optimization (TO).

Unsupervised Learning of the Total Variation Flow

1 code implementation • 9 Jun 2022 • Tamara G. Grossmann, Sören Dittmer, Yury Korolev, Carola-Bibiane Schönlieb

Inspired by and extending the framework of physics-informed neural networks (PINNs), we propose the TVflowNET, an unsupervised neural network approach, to approximate the solution of the TV flow given an initial image and a time instance.

Texture Classification
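TVflowNET's architecture and training are not given here, but the PINN idea is to penalize the residual of the TV flow PDE ∂ₜu = div(∇u/|∇u|). A finite-difference version of that residual on an image grid (the ε-smoothing of |∇u| and the discretization choices are my assumptions, not the paper's):

```python
import numpy as np

def tv_flow_residual(u_next, u_prev, dt, eps=1e-6):
    """Residual of du/dt = div(grad u / |grad u|) between two time slices.

    Forward differences for the gradient, backward for the divergence;
    eps regularizes |grad u| where it vanishes.
    """
    u = u_prev
    ux = np.diff(u, axis=1, append=u[:, -1:])   # forward x-difference
    uy = np.diff(u, axis=0, append=u[-1:, :])   # forward y-difference
    mag = np.sqrt(ux**2 + uy**2 + eps**2)
    px, py = ux / mag, uy / mag
    div = (np.diff(px, axis=1, prepend=px[:, :1])
           + np.diff(py, axis=0, prepend=py[:1, :]))
    return (u_next - u_prev) / dt - div

# A constant image is a steady state of the TV flow, so the residual vanishes.
u0 = np.ones((8, 8))
r = tv_flow_residual(u0, u0, dt=0.1)
```

In a PINN-style training loop, the squared residual of the network's predicted evolution would serve as the unsupervised loss.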

Learned convex regularizers for inverse problems

1 code implementation • 6 Aug 2020 • Subhadip Mukherjee, Sören Dittmer, Zakhar Shumaylov, Sebastian Lunz, Ozan Öktem, Carola-Bibiane Schönlieb

We consider the variational reconstruction framework for inverse problems and propose to learn a data-adaptive input-convex neural network (ICNN) as the regularization functional.

Computed Tomography (CT) • Deblurring
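The structural trick behind an ICNN is that the weights acting on hidden activations are constrained nonnegative while the activations are convex and nondecreasing, which makes the scalar output convex in the input. A minimal forward pass and a numerical convexity check (sizes and initialization are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 4, 8
W0 = rng.standard_normal((h, n))          # first layer: unconstrained
Wz = np.abs(rng.standard_normal((1, h)))  # hidden-path weights: nonnegative
Wx = rng.standard_normal((1, n))          # input skip connection: unconstrained

def icnn(x):
    """Scalar input-convex network: ReLU is convex and nondecreasing,
    and nonnegative Wz preserves convexity under composition."""
    z = np.maximum(W0 @ x, 0.0)
    return float(Wz @ z + Wx @ x)

# Convexity along a segment: f(midpoint) <= average of the endpoint values.
a, b = rng.standard_normal(n), rng.standard_normal(n)
lhs = icnn(0.5 * (a + b))
rhs = 0.5 * (icnn(a) + icnn(b))
```

In the variational setting, such a scalar output plays the role of the regularizer R(x) in min_x ||Ax − y||² + λ R(x), so convexity of R keeps the overall reconstruction problem well-behaved.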

Deep image prior for 3D magnetic particle imaging: A quantitative comparison of regularization techniques on Open MPI dataset

no code implementations • 3 Jul 2020 • Sören Dittmer, Tobias Kluth, Mads Thorstein Roar Henriksen, Peter Maass

Magnetic particle imaging (MPI) is an imaging modality exploiting the nonlinear magnetization behavior of (super-)paramagnetic nanoparticles to obtain a space- and often also time-dependent concentration of a tracer consisting of these nanoparticles.

Image Reconstruction

Ground Truth Free Denoising by Optimal Transport

1 code implementation • 3 Jul 2020 • Sören Dittmer, Carola-Bibiane Schönlieb, Peter Maass

We present a learned unsupervised denoising method for arbitrary types of data, which we explore on images and one-dimensional signals.

Denoising • Generative Adversarial Network

A Projectional Ansatz to Reconstruction

1 code implementation • 10 Jul 2019 • Sören Dittmer, Peter Maass

The field of inverse problems has recently seen growing use of learned and non-learned priors that are only partially understood mathematically.

Denoising

Invariance and Inverse Stability under ReLU

no code implementations • ICLR 2019 • Jens Behrmann, Sören Dittmer, Pascal Fernsel, Peter Maass

We flip the usual approach to study invariance and robustness of neural networks by considering the non-uniqueness and instability of the inverse mapping.
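Non-uniqueness of the inverse mapping is easy to exhibit: any two inputs whose pre-activations are negative in the same coordinates collapse to the same ReLU output. A tiny demonstration (the matrix is an arbitrary example):

```python
import numpy as np

W = np.array([[1.0, 1.0],
              [2.0, 1.0]])

def relu_layer(x):
    """One layer x -> relu(W x)."""
    return np.maximum(W @ x, 0.0)

# Both inputs have strictly negative pre-activations, so ReLU maps them
# to the same output: the layer's inverse is not unique on this region.
x1 = np.array([-1.0, -2.0])   # W @ x1 = [-3.0, -4.0]
x2 = np.array([-3.0, -0.5])   # W @ x2 = [-3.5, -6.5]
y1, y2 = relu_layer(x1), relu_layer(x2)
```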

Regularization by architecture: A deep prior approach for inverse problems

2 code implementations • 10 Dec 2018 • Sören Dittmer, Tobias Kluth, Peter Maass, Daniel Otero Baguer

The present paper studies so-called deep image prior (DIP) techniques in the context of ill-posed inverse problems.
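The DIP idea is to reparametrize the unknown x as the output of an untrained network and descend on the network's weights against the data fit alone, letting the architecture (plus early stopping) act as the regularizer. A deliberately linear toy version with hand-written gradients (the actual method uses a CNN; all sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 6, 10, 5
A = rng.standard_normal((m, n))        # forward operator of the inverse problem
y = rng.standard_normal(m)             # measurements
z = rng.standard_normal(k)             # fixed random input to the prior network
W1 = 0.1 * rng.standard_normal((8, k))
W2 = 0.1 * rng.standard_normal((n, 8))

losses, lr = [], 5e-3
for _ in range(300):
    h = W1 @ z                         # hidden activations
    x = W2 @ h                         # network output = reconstruction
    r = A @ x - y
    losses.append(float(r @ r))        # data-fit loss ||A f(theta) - y||^2
    g_x = 2.0 * (A.T @ r)              # gradient of the loss w.r.t. x
    g_W2 = np.outer(g_x, h)            # chain rule through x = W2 h
    g_W1 = np.outer(W2.T @ g_x, z)     # chain rule through h = W1 z
    W1 -= lr * g_W1
    W2 -= lr * g_W2
```

The only "training data" is the single measurement y; the loss falls purely because the network weights absorb the reconstruction.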

Singular Values for ReLU Layers

no code implementations • 6 Dec 2018 • Sören Dittmer, Emily J. King, Peter Maass

By presenting, on the one hand, theoretical justifications, results, and interpretations of these two concepts and, on the other, numerical experiments applying ReLU singular values and the Gaussian mean width to trained neural networks, we hope to give a comprehensive, singular-value-centric view of ReLU layers.
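The paper's ReLU singular values generalize the classical notion; for reference, here is the classical computation on a layer's weight matrix, whose largest singular value bounds the Lipschitz constant of x → relu(Wx) (ReLU itself being 1-Lipschitz). The matrix is an arbitrary example:

```python
import numpy as np

W = np.array([[3.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

# Classical singular values of the linear part of a ReLU layer.
svals = np.linalg.svd(W, compute_uv=False)   # returned in descending order
lipschitz_bound = svals[0]  # the layer x -> relu(W x) is at most this Lipschitz
```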

Analysis of Invariance and Robustness via Invertibility of ReLU-Networks

no code implementations • 25 Jun 2018 • Jens Behrmann, Sören Dittmer, Pascal Fernsel, Peter Maaß

Studying the invertibility of deep neural networks (DNNs) provides a principled approach to better understand the behavior of these powerful models.
