1 code implementation • 25 Apr 2024 • Susanne Dandl, Marc Becker, Bernd Bischl, Giuseppe Casalicchio, Ludwig Bothmann
This work introduces a novel R package for concise, informative summaries of machine learning models.
1 code implementation • 19 Apr 2024 • Fiona Katharina Ewald, Ludwig Bothmann, Marvin N. Wright, Bernd Bischl, Giuseppe Casalicchio, Gunnar König
Understanding the data-generating process (DGP) requires insights into feature-target associations, which many ML models cannot directly provide due to their opaque internal mechanisms.
no code implementations • 2 Feb 2024 • Emanuel Sommer, Lisa Wimmer, Theodore Papamarkou, Ludwig Bothmann, Bernd Bischl, David Rügamer
A major challenge in sample-based inference (SBI) for Bayesian neural networks is the size and structure of the networks' parameter space.
no code implementations • 23 Oct 2023 • Roman Hornung, Malte Nalenz, Lennart Schneider, Andreas Bender, Ludwig Bothmann, Bernd Bischl, Thomas Augustin, Anne-Laure Boulesteix
Our findings corroborate the concern that standard resampling methods often yield biased estimates of the generalization error (GE) in non-standard settings, underscoring the importance of tailored GE estimation.
1 code implementation • 24 Jul 2023 • Ludwig Bothmann, Susanne Dandl, Michael Schomaker
A decision can be defined as fair if equal individuals are treated equally and unequal individuals unequally.
no code implementations • 4 May 2023 • Susanne Dandl, Giuseppe Casalicchio, Bernd Bischl, Ludwig Bothmann
This work introduces interpretable regional descriptors, or IRDs, for local, model-agnostic interpretations.
2 code implementations • 28 Mar 2023 • Ludwig Bothmann, Lisa Wimmer, Omid Charrakh, Tobias Weber, Hendrik Edelhoff, Wibke Peters, Hien Nguyen, Caryl Benjamin, Annette Menzel
We provide an active learning (AL) system that trains deep learning models efficiently, requiring far fewer human-labeled training images.
no code implementations • 19 May 2022 • Ludwig Bothmann, Kristina Peters, Bernd Bischl
A growing body of literature in fairness-aware machine learning (fairML) aims to mitigate ML-related unfairness in automated decision-making (ADM) by defining metrics that measure the fairness of an ML model and by proposing methods that ensure trained models achieve low values in those metrics.
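As a minimal illustration of the kind of metric such work defines, the sketch below computes demographic parity difference, the gap in positive-prediction rates between two groups. This is a generic, widely used fairML metric, not necessarily the one proposed in the paper; all names and data here are hypothetical.

```python
def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups 0 and 1.

    y_pred: binary model predictions (0/1) for each individual.
    group:  protected-attribute membership (0/1) for each individual.
    """
    def positive_rate(g):
        members = [p for p, s in zip(y_pred, group) if s == g]
        return sum(members) / len(members)

    return abs(positive_rate(0) - positive_rate(1))

# Hypothetical predictions for six individuals, three per group.
y_pred = [1, 0, 1, 1, 0, 0]
group = [0, 0, 0, 1, 1, 1]

# Group 0 receives positive predictions at rate 2/3, group 1 at rate 1/3,
# so the demographic parity difference is 1/3; a perfectly "fair" model
# under this metric would score 0.
print(demographic_parity_difference(y_pred, group))
```

A fairness-aware training method in this spirit would then constrain or penalize such a metric during model fitting.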
no code implementations • 28 Jul 2021 • Ludwig Bothmann, Sven Strickroth, Giuseppe Casalicchio, David Rügamer, Marius Lindauer, Fabian Scheipl, Bernd Bischl
Education should be openly accessible to everyone, with as few barriers as possible; even more so for key technologies such as Machine Learning (ML) and Data Science (DS).