Search Results for author: Ludwig Bothmann

Found 9 papers, 4 papers with code

mlr3summary: Concise and interpretable summaries for machine learning models

1 code implementation • 25 Apr 2024 • Susanne Dandl, Marc Becker, Bernd Bischl, Giuseppe Casalicchio, Ludwig Bothmann

This work introduces a novel R package for concise, informative summaries of machine learning models.

A Guide to Feature Importance Methods for Scientific Inference

1 code implementation • 19 Apr 2024 • Fiona Katharina Ewald, Ludwig Bothmann, Marvin N. Wright, Bernd Bischl, Giuseppe Casalicchio, Gunnar König

Understanding the data-generating process (DGP) requires insights into feature-target associations, which many ML models cannot directly provide due to their opaque internal mechanisms.
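Permutation feature importance is one widely used method of the kind this guide surveys: shuffle a feature's values and measure how much the model's error grows. A minimal self-contained sketch (toy model and data, not code from the paper):

```python
import random

def model(x):
    # toy "fitted model": depends only on feature 0, ignores feature 1
    return 2.0 * x[0]

def mse(model, X, y):
    return sum((model(x) - yi) ** 2 for x, yi in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature, seed=0):
    """Increase in MSE when one feature's column is shuffled."""
    base = mse(model, X, y)
    col = [x[feature] for x in X]
    random.Random(seed).shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(model, X_perm, y) - base

X = [[float(i), float(i % 3)] for i in range(20)]
y = [2.0 * x[0] for x in X]          # target depends only on feature 0
imp0 = permutation_importance(model, X, y, 0)  # large: feature 0 matters
imp1 = permutation_importance(model, X, y, 1)  # zero: feature 1 is ignored
```

Note the paper's point: such importances describe the *model*, and interpreting them as statements about the DGP requires the additional care the guide discusses.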

Feature Importance

Evaluating machine learning models in non-standard settings: An overview and new findings

no code implementations • 23 Oct 2023 • Roman Hornung, Malte Nalenz, Lennart Schneider, Andreas Bender, Ludwig Bothmann, Bernd Bischl, Thomas Augustin, Anne-Laure Boulesteix

Our findings corroborate the concern that standard resampling methods often yield biased generalization error (GE) estimates in non-standard settings, underscoring the importance of tailored GE estimation.

Causal Fair Machine Learning via Rank-Preserving Interventional Distributions

1 code implementation • 24 Jul 2023 • Ludwig Bothmann, Susanne Dandl, Michael Schomaker

A decision can be defined as fair if equal individuals are treated equally and unequal individuals unequally.

Attribute • Decision Making

Interpretable Regional Descriptors: Hyperbox-Based Local Explanations

no code implementations • 4 May 2023 • Susanne Dandl, Giuseppe Casalicchio, Bernd Bischl, Ludwig Bothmann

This work introduces interpretable regional descriptors, or IRDs, for local, model-agnostic interpretations.

Automated wildlife image classification: An active learning tool for ecological applications

2 code implementations • 28 Mar 2023 • Ludwig Bothmann, Lisa Wimmer, Omid Charrakh, Tobias Weber, Hendrik Edelhoff, Wibke Peters, Hien Nguyen, Caryl Benjamin, Annette Menzel

We provide an active learning (AL) system that allows deep learning models to be trained very efficiently in terms of the number of human-labeled training images required.
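The labeling efficiency of such a system typically comes from uncertainty sampling: query a human label only for the pool item the current model is least sure about. A minimal 1-D sketch of that selection step (toy classifier, not the paper's system):

```python
import math

def predict_proba(x, threshold):
    # toy 1-D classifier: sigmoid score around the current decision threshold
    return 1.0 / (1.0 + math.exp(-(x - threshold)))

def most_uncertain(pool, threshold):
    # uncertainty sampling: pick the unlabeled point closest to p = 0.5
    return min(pool, key=lambda x: abs(predict_proba(x, threshold) - 0.5))

pool = [-5.0, -1.0, 0.2, 3.0, 8.0]
query = most_uncertain(pool, threshold=0.0)  # 0.2 lies nearest the boundary
```

In a full AL loop, the queried item is labeled, added to the training set, and the model is retrained before the next query.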

Active Learning • Image Classification • +2

What Is Fairness? On the Role of Protected Attributes and Fictitious Worlds

no code implementations • 19 May 2022 • Ludwig Bothmann, Kristina Peters, Bernd Bischl

A growing body of literature in fairness-aware ML (fairML) aspires to mitigate machine learning (ML)-related unfairness in automated decision-making (ADM) by defining metrics that measure the fairness of an ML model and by proposing methods that ensure trained ML models achieve low values on those metrics.
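A simple example of the kind of metric this literature defines (not necessarily one the paper endorses) is the demographic parity gap: the difference in positive-decision rates between protected groups. A minimal sketch on toy data:

```python
def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between two groups."""
    rate = {}
    for g in set(groups):
        members = [d for d, gi in zip(decisions, groups) if gi == g]
        rate[g] = sum(members) / len(members)
    a, b = rate.values()
    return abs(a - b)

decisions = [1, 0, 1, 1, 0, 0, 1, 0]          # 1 = positive decision
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)  # |3/4 - 1/4| = 0.5
```

The paper's question is whether achieving low values on such metrics actually constitutes fairness, given the role of protected attributes.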

Decision Making • Fairness

Developing Open Source Educational Resources for Machine Learning and Data Science

no code implementations • 28 Jul 2021 • Ludwig Bothmann, Sven Strickroth, Giuseppe Casalicchio, David Rügamer, Marius Lindauer, Fabian Scheipl, Bernd Bischl

Education should be openly accessible to everyone, with as few barriers as possible; even more so for key technologies such as Machine Learning (ML) and Data Science (DS).

BIG-bench Machine Learning
