1 code implementation • 19 Dec 2023 • Michael Roberts, Alon Hazan, Sören Dittmer, James H. F. Rudd, Carola-Bibiane Schönlieb
Whilst the size and complexity of ML models have rapidly and significantly increased over the past decade, the methods for assessing their performance have not kept pace.
no code implementations • 4 Oct 2023 • Fan Zhang, Daniel Kreuter, Yichen Chen, Sören Dittmer, Samuel Tull, Tolou Shadbahr, BloodCounts! Collaboration, Jacobus Preller, James H. F. Rudd, John A. D. Aston, Carola-Bibiane Schönlieb, Nicholas Gleadall, Michael Roberts
We give detailed recommendations to help improve the quality of methodology development for federated learning in healthcare.
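The paper's recommendations are methodological rather than algorithmic. For context, below is a minimal sketch of federated averaging (FedAvg, McMahan et al. 2017), the baseline aggregation scheme most federated-learning pipelines build on; the logistic-regression client update and all names are illustrative, not from the paper.

```python
import numpy as np

def local_update(w0, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic regression by gradient descent."""
    w = w0.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)   # gradient of the log-loss
    return w

def fedavg(clients, w0, rounds=20):
    """Clients train locally; the server takes a size-weighted average."""
    w = w0
    for _ in range(rounds):
        sizes = [len(y) for _, y in clients]
        updates = [local_update(w, X, y) for X, y in clients]
        w = np.average(updates, axis=0, weights=sizes)
    return w

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50)) for _ in range(4)]
w = fedavg(clients, np.zeros(3))
```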
1 code implementation • 25 Jul 2023 • Sören Dittmer, Michael Roberts, Jacobus Preller, AIX COVNET, James H. F. Rudd, John A. D. Aston, Carola-Bibiane Schönlieb
We aim to provide the tools needed to fully harness the potential of survival analysis in deep learning.
no code implementations • 21 Oct 2022 • Sören Dittmer, Michael Roberts, Julian Gilbey, Ander Biguri, AIX-COVNET Collaboration, Jacobus Preller, James H. F. Rudd, John A. D. Aston, Carola-Bibiane Schönlieb
In this perspective, we argue that despite the democratization of powerful tools for data science and machine learning over the last decade, developing the code for a trustworthy and effective data science system (DSS) is getting harder.
no code implementations • 12 Sep 2022 • Sören Dittmer, David Erzmann, Henrik Harms, Peter Maass
Recent developments in Deep Learning (DL) suggest a vast potential for Topology Optimization (TO).
no code implementations • 16 Jun 2022 • Tolou Shadbahr, Michael Roberts, Jan Stanczuk, Julian Gilbey, Philip Teare, Sören Dittmer, Matthew Thorpe, Ramon Vinas Torne, Evis Sala, Pietro Lio, Mishal Patel, AIX-COVNET Collaboration, James H. F. Rudd, Tuomas Mirtti, Antti Rannikko, John A. D. Aston, Jing Tang, Carola-Bibiane Schönlieb
Classifying samples in incomplete datasets is a common aim for machine learning practitioners, but is non-trivial.
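The paper studies how imputation quality propagates into classification; as a point of reference, here is a minimal impute-then-classify baseline in scikit-learn. The synthetic dataset and all settings are illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X[rng.random(X.shape) < 0.2] = np.nan        # 20% of values missing at random

# Imputing inside the pipeline keeps test folds out of the imputer's statistics.
clf = make_pipeline(SimpleImputer(strategy="mean"),
                    RandomForestClassifier(random_state=0))
print(cross_val_score(clf, X, y, cv=5).mean())
```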
1 code implementation • 9 Jun 2022 • Tamara G. Grossmann, Sören Dittmer, Yury Korolev, Carola-Bibiane Schönlieb
Inspired by and extending the framework of physics-informed neural networks (PINNs), we propose TVflowNET, an unsupervised neural network approach to approximating the solution of the TV flow given an initial image and a time instance.
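The full TVflowNET learns the flow jointly over initial images and time instances; the sketch below only illustrates the underlying PINN-style idea, penalizing the residual of the (eps-regularized) TV flow PDE u_t = div(∇u/|∇u|) at random space-time collocation points. The coordinate MLP and the sampling are placeholder choices, not the paper's architecture.

```python
import torch

net = torch.nn.Sequential(               # u(x, y, t) as a coordinate MLP
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1))

def tv_flow_residual(xyt, eps=1e-3):
    """PINN-style residual of u_t = div(grad u / |grad u|), eps-regularized."""
    xyt = xyt.clone().requires_grad_(True)
    u = net(xyt)
    g = torch.autograd.grad(u.sum(), xyt, create_graph=True)[0]
    ux, uy, ut = g[:, 0], g[:, 1], g[:, 2]
    norm = torch.sqrt(ux**2 + uy**2 + eps**2)     # avoids division by zero
    px, py = ux / norm, uy / norm
    div = (torch.autograd.grad(px.sum(), xyt, create_graph=True)[0][:, 0]
           + torch.autograd.grad(py.sum(), xyt, create_graph=True)[0][:, 1])
    return ((ut - div) ** 2).mean()

xyt = torch.rand(1024, 3)                 # collocation points in space-time
loss = tv_flow_residual(xyt)              # plus a data term at t = 0 in practice
loss.backward()
```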
1 code implementation • 6 Aug 2020 • Subhadip Mukherjee, Sören Dittmer, Zakhar Shumaylov, Sebastian Lunz, Ozan Öktem, Carola-Bibiane Schönlieb
We consider the variational reconstruction framework for inverse problems and propose to learn a data-adaptive input-convex neural network (ICNN) as the regularization functional.
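In the paper the convex regularizer is trained adversarially; the sketch below only shows the two generic ingredients, with placeholder data: an input-convex network (convex in x when the z-path weights are non-negative and the activation is convex and non-decreasing, after Amos et al.) and gradient descent on the variational objective min_x ||Ax - y||^2 + lam * R(x).

```python
import torch
import torch.nn.functional as F

class ICNN(torch.nn.Module):
    """Input-convex in x: z-path weights clamped >= 0, softplus activations."""
    def __init__(self, dim, width=64, depth=3):
        super().__init__()
        self.Wx = torch.nn.ModuleList(
            torch.nn.Linear(dim, width) for _ in range(depth))
        self.Wz = torch.nn.ModuleList(
            torch.nn.Linear(width, width, bias=False) for _ in range(depth - 1))
        self.out = torch.nn.Linear(width, 1, bias=False)

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:], self.Wz):
            z = F.softplus(Wx(x) + F.linear(z, Wz.weight.clamp(min=0)))
        return F.linear(z, self.out.weight.clamp(min=0)).squeeze(-1)

# Variational reconstruction with the (here untrained) ICNN as regularizer.
A, y_obs = torch.randn(20, 32), torch.randn(20)
R = ICNN(32)
x = torch.zeros(32, requires_grad=True)
opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = ((A @ x - y_obs) ** 2).sum() + 0.1 * R(x.unsqueeze(0)).sum()
    loss.backward()
    opt.step()
```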
1 code implementation • 3 Jul 2020 • Sören Dittmer, Carola-Bibiane Schönlieb, Peter Maass
We present a learned unsupervised denoising method for arbitrary types of data, which we explore on images and one-dimensional signals.
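This is not the paper's method; purely to make "unsupervised denoising" concrete on a one-dimensional signal, here is a self-supervised masking scheme in the spirit of Noise2Self, where the loss only ever sees the noisy data. Every architectural and training choice is illustrative.

```python
import torch

noisy = torch.sin(torch.linspace(0, 6.28, 256)) + 0.3 * torch.randn(256)
net = torch.nn.Sequential(
    torch.nn.Conv1d(1, 32, 5, padding=2), torch.nn.ReLU(),
    torch.nn.Conv1d(32, 1, 5, padding=2))

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(300):
    mask = torch.rand(256) < 0.1              # hide 10% of the samples ...
    inp = noisy.clone()
    inp[mask] = 0.0                           # ... the network never sees them
    out = net(inp.view(1, 1, -1)).view(-1)
    loss = ((out[mask] - noisy[mask]) ** 2).mean()   # predict the hidden ones
    opt.zero_grad(); loss.backward(); opt.step()

denoised = net(noisy.view(1, 1, -1)).view(-1).detach()
```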
no code implementations • 3 Jul 2020 • Sören Dittmer, Tobias Kluth, Mads Thorstein Roar Henriksen, Peter Maass
Magnetic particle imaging (MPI) is an imaging modality that exploits the nonlinear magnetization behavior of (super-)paramagnetic nanoparticles to recover the spatial, and often also temporal, concentration of a tracer consisting of these nanoparticles.
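Reconstruction in MPI is commonly posed as an ill-posed linear system relating the tracer concentration to the measured signal. The toy sketch below uses a random stand-in system matrix and plain Tikhonov regularization as a classical baseline; it is not the paper's approach.

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.normal(size=(300, 100))                   # stand-in MPI system matrix
c_true = np.clip(rng.normal(size=100), 0, None)   # nonnegative concentration
u = S @ c_true + 0.05 * rng.normal(size=300)      # noisy measurement

lam = 1.0                                         # regularization strength
c_rec = np.linalg.solve(S.T @ S + lam * np.eye(100), S.T @ u)
```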
1 code implementation • 10 Jul 2019 • Sören Dittmer, Peter Maass
Recently, the field of inverse problems has seen growing use of learned and non-learned priors that are only partially understood mathematically.
no code implementations • ICLR 2019 • Jens Behrmann, Sören Dittmer, Pascal Fernsel, Peter Maass
We flip the usual approach to study invariance and robustness of neural networks by considering the non-uniqueness and instability of the inverse mapping.
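A minimal illustration of the non-uniqueness (not taken from the paper): any two inputs whose pre-activations are all negative are mapped by a ReLU layer to the same output, so an activation vector can have many exact preimages.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0)
W, b = np.eye(2), np.zeros(2)                  # a trivial ReLU layer

x1 = np.array([-1.0, -2.0])
x2 = np.array([-3.0, -0.5])
print(relu(W @ x1 + b), relu(W @ x2 + b))      # both print [0. 0.]
```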
2 code implementations • 10 Dec 2018 • Sören Dittmer, Tobias Kluth, Peter Maass, Daniel Otero Baguer
The present paper studies so-called deep image prior (DIP) techniques in the context of ill-posed inverse problems.
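A minimal sketch of the DIP idea, with placeholder architecture and data: fit an untrained convolutional network, fed a fixed random input, to a single corrupted observation and stop early; the network structure itself acts as the regularizer.

```python
import torch

noisy = torch.rand(1, 1, 64, 64)               # stand-in noisy observation
z = torch.randn(1, 8, 64, 64)                  # fixed random network input
net = torch.nn.Sequential(
    torch.nn.Conv2d(8, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(32, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(32, 1, 3, padding=1))

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):                        # stop early: run long enough,
    opt.zero_grad()                            # and the net fits the noise too
    loss = ((net(z) - noisy) ** 2).mean()
    loss.backward()
    opt.step()
restored = net(z).detach()
```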
no code implementations • 6 Dec 2018 • Sören Dittmer, Emily J. King, Peter Maass
By presenting, on the one hand, theoretical justifications, results, and interpretations of these two concepts, and, on the other, numerical experiments applying ReLU singular values and the Gaussian mean width to trained neural networks, we hope to give a comprehensive, singular-value-centric view of ReLU layers.
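The paper's ReLU singular values generalize classical singular values to the full nonlinear map x -> ReLU(Wx + b); the sketch below is not that definition but a hands-on stand-in, computing the classical singular values of the linear part together with a Monte Carlo estimate of the Gaussian mean width w(K) = E_g sup_{y in K} <g, y> of the layer's image of the unit sphere.

```python
import torch
import torch.nn.functional as F

layer = torch.nn.Linear(64, 32)

# Classical singular values of the linear part only.
svals = torch.linalg.svdvals(layer.weight.detach())

# Monte Carlo Gaussian mean width of K = ReLU(layer(unit sphere)).
x = F.normalize(torch.randn(2000, 64), dim=1)      # samples on the sphere
K = torch.relu(layer(x)).detach()                  # sampled output set
g = torch.randn(500, 32)                           # Gaussian directions
width = (g @ K.T).max(dim=1).values.mean()         # E_g max_y <g, y>
print(svals[:3], width)
```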
no code implementations • 25 Jun 2018 • Jens Behrmann, Sören Dittmer, Pascal Fernsel, Peter Maaß
Studying the invertibility of deep neural networks (DNNs) provides a principled approach to better understand the behavior of these powerful models.