Search Results for author: Christian A. Scholbeck

Found 6 papers, 2 papers with code

Position Paper: Bridging the Gap Between Machine Learning and Sensitivity Analysis

no code implementations • 20 Dec 2023 • Christian A. Scholbeck, Julia Moosbauer, Giuseppe Casalicchio, Hoshin Gupta, Bernd Bischl, Christian Heumann

We argue that interpretations of machine learning (ML) models or the model-building process can be seen as a form of sensitivity analysis (SA), a general methodology used to explain complex systems in many fields such as environmental modeling, engineering, or economics.

Position

fmeffects: An R Package for Forward Marginal Effects

no code implementations • 3 Oct 2023 • Holger Löwe, Christian A. Scholbeck, Christian Heumann, Bernd Bischl, Giuseppe Casalicchio

Forward marginal effects (FMEs) have recently been introduced as a versatile and effective model-agnostic interpretation method.
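The fmeffects package itself is written for R; the sketch below only illustrates the forward-marginal-effect idea in Python for a generic fitted model. The model, data, feature index, and step size are illustrative assumptions, not the package's API.

```python
# Illustrative sketch of a forward marginal effect (FME): the change in prediction
# when one feature is moved by a fixed step h, holding the remaining features at
# their observed values. Model, data, and step size are assumptions for illustration.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=4, noise=0.1, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

def forward_marginal_effect(model, X, feature, step):
    """Per-observation FME: f(x with `feature` shifted by `step`) - f(x)."""
    X_shifted = X.copy()
    X_shifted[:, feature] += step
    return model.predict(X_shifted) - model.predict(X)

fme = forward_marginal_effect(model, X, feature=0, step=1.0)
print("Average FME for feature 0 and step 1.0:", fme.mean())
```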

Algorithm-Agnostic Interpretations for Clustering

no code implementations • 21 Sep 2022 • Christian A. Scholbeck, Henri Funk, Giuseppe Casalicchio

The partial dependence for clustering evaluates average changes in cluster assignments for the entire feature space.

Clustering • Dimensionality Reduction • +1
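One plausible reading of such a cluster-level partial dependence, sketched below under stated assumptions: for each grid value of one feature, that feature is overwritten for all observations, cluster assignments are re-derived from a fitted clusterer, and the share of observations per cluster is recorded. The grid, clusterer, and hard-assignment rule are assumptions for illustration, not necessarily the estimator used in the paper.

```python
# Sketch of a partial-dependence-style summary of cluster assignments (assumption-based).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=400, centers=3, n_features=4, random_state=0)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

feature = 0
grid = np.linspace(X[:, feature].min(), X[:, feature].max(), num=20)

shares = []
for value in grid:
    X_mod = X.copy()
    X_mod[:, feature] = value           # intervene on one feature for all observations
    labels = km.predict(X_mod)          # hard reassignment to the fitted clusters
    shares.append(np.bincount(labels, minlength=3) / len(labels))

shares = np.array(shares)  # rows: grid values, columns: share of points per cluster
print(shares[:5])
```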

Marginal Effects for Non-Linear Prediction Functions

no code implementations • 21 Jan 2022 • Christian A. Scholbeck, Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl, Christian Heumann

Hence, marginal effects are typically used as approximations of feature effects, either in the form of derivatives of the prediction function or of forward differences in prediction due to a change in a feature value.
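Written out (a reconstruction of the standard definitions rather than a quote from the paper), the two variants for a feature x_j with step size h_j are:

    dME_j(x)      = ∂f(x) / ∂x_j
    FME_{x, h_j}  = f(x_1, ..., x_j + h_j, ..., x_p) - f(x)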

General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models

1 code implementation • 8 Jul 2020 • Christoph Molnar, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, Bernd Bischl

An increasing number of model-agnostic interpretation techniques for machine learning (ML) models such as partial dependence plots (PDP), permutation feature importance (PFI) and Shapley values provide insightful model interpretations, but can lead to wrong conclusions if applied incorrectly.

BIG-bench Machine Learning • Feature Importance
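For concreteness, a compact sketch of permutation feature importance (PFI), one of the methods named in the abstract above: importance is measured as the drop in performance when one feature's values are shuffled, breaking its association with the target. The model, data, and metric are illustrative assumptions.

```python
# Sketch of permutation feature importance (PFI) on held-out data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=5, noise=0.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

baseline = r2_score(y_test, model.predict(X_test))
rng = np.random.default_rng(0)
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature-target association
    importance = baseline - r2_score(y_test, model.predict(X_perm))
    print(f"feature {j}: PFI = {importance:.3f}")
```

Shuffling on held-out rather than training data avoids conflating importance with overfitting, which is one of the practical choices a user of PFI has to get right.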
