1 code implementation • 19 Apr 2024 • Fiona Katharina Ewald, Ludwig Bothmann, Marvin N. Wright, Bernd Bischl, Giuseppe Casalicchio, Gunnar König
Understanding the DGP requires insights into feature-target associations, which many ML models cannot directly provide due to their opaque internal mechanisms.
1 code implementation • 3 Apr 2024 • Vasilis Gkolemis, Christos Diou, Eirini Ntoutsi, Theodore Dalamagas, Bernd Bischl, Julia Herbinger, Giuseppe Casalicchio
Effector implements well-established global effect methods, assesses the heterogeneity of each method and, based on that, provides regional effects.
no code implementations • 7 Mar 2024 • Julian Rodemann, Federico Croppi, Philipp Arens, Yusuf Sale, Julia Herbinger, Bernd Bischl, Eyke Hüllermeier, Thomas Augustin, Conor J. Walsh, Giuseppe Casalicchio
We address this issue by proposing ShapleyBO, a framework for interpreting BO's proposals with game-theoretic Shapley values, which quantify each parameter's contribution to BO's acquisition function.
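As a rough illustration of the game-theoretic machinery involved, the sketch below computes exact Shapley values for a small coalition game; in ShapleyBO the payoff would be BO's acquisition function evaluated for subsets of parameter contributions, but the function `shapley_values` and the `value`/`players` names here are hypothetical and not the paper's implementation.

```python
from itertools import combinations
from math import factorial

def shapley_values(value, players):
    """Exact Shapley values for a small set of players (illustrative sketch).

    value   : callable mapping a frozenset of players to a payoff
    players : list of player identifiers (keep it small; cost grows as 2^n)
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                S = frozenset(S)
                # Shapley weight of coalition S, times the marginal contribution of i
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (value(S | {i}) - value(S))
        phi[i] = total
    return phi
```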
no code implementations • 20 Dec 2023 • Christian A. Scholbeck, Julia Moosbauer, Giuseppe Casalicchio, Hoshin Gupta, Bernd Bischl, Christian Heumann
We argue that interpretations of machine learning (ML) models or the model-building process can be seen as a form of sensitivity analysis (SA), a general methodology used to explain complex systems in many fields such as environmental modeling, engineering, or economics.
1 code implementation • 4 Oct 2023 • Julia Herbinger, Susanne Dandl, Fiona K. Ewald, Sofia Loibl, Giuseppe Casalicchio
Surrogate models play a crucial role in retrospectively interpreting complex and powerful black box machine learning models via model distillation.
no code implementations • 3 Oct 2023 • Holger Löwe, Christian A. Scholbeck, Christian Heumann, Bernd Bischl, Giuseppe Casalicchio
Forward marginal effects (FMEs) have recently been introduced as a versatile and effective model-agnostic interpretation method.
2 code implementations • 1 Jun 2023 • Julia Herbinger, Bernd Bischl, Giuseppe Casalicchio
We formally introduce the generalized additive decomposition of global effects (GADGET), a new framework based on recursive partitioning that finds interpretable regions in the feature space in which the interaction-related heterogeneity of local feature effects is minimized.
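The full GADGET algorithm partitions the feature space recursively; as a hedged, much-simplified sketch, the hypothetical `best_split` below performs a single greedy split that minimizes the within-region variance of local effect curves (e.g., centred ICE values), which is the heterogeneity notion the excerpt refers to.

```python
import numpy as np

def best_split(local_effects, Z):
    """One greedy split (simplified sketch of effect-heterogeneity partitioning).

    local_effects : (n x g) matrix of local effect curves (e.g. centred ICE values)
    Z             : (n x p) matrix of candidate splitting features
    Returns the split minimizing the summed within-region variance of the
    local effect curves, i.e. their interaction-related heterogeneity.
    """
    def heterogeneity(rows):
        # Weighted within-region variance of the effect curves; 0 for empty regions.
        return local_effects[rows].var(axis=0).sum() * len(rows) if len(rows) else 0.0

    best = (np.inf, None, None)
    for j in range(Z.shape[1]):
        for t in np.unique(Z[:, j])[:-1]:
            left = np.where(Z[:, j] <= t)[0]
            right = np.where(Z[:, j] > t)[0]
            risk = heterogeneity(left) + heterogeneity(right)
            if risk < best[0]:
                best = (risk, j, t)
    return best  # (risk, feature index, threshold); apply recursively for a tree
```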
no code implementations • 4 May 2023 • Susanne Dandl, Giuseppe Casalicchio, Bernd Bischl, Ludwig Bothmann
This work introduces interpretable regional descriptors, or IRDs, for local, model-agnostic interpretations.
no code implementations • 13 Apr 2023 • Susanne Dandl, Andreas Hofheinz, Martin Binder, Bernd Bischl, Giuseppe Casalicchio
Counterfactual explanation methods provide information on how feature values of individual observations must be changed to obtain a desired prediction.
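A minimal, purely illustrative way to obtain such an explanation is a brute-force search over candidate single-feature changes; the `simple_counterfactual` helper below is hypothetical and far simpler than the multi-objective counterfactual methods the paper builds on.

```python
import numpy as np

def simple_counterfactual(predict, x, feature_grids, target_class):
    """Brute-force counterfactual search over single-feature changes (sketch).

    predict       : callable returning a class label for a 1d feature vector
    x             : original observation (1d numpy array, numeric features)
    feature_grids : dict {feature index: candidate values to try}
    target_class  : desired prediction
    """
    best, best_dist = None, np.inf
    for j, grid in feature_grids.items():
        for value in grid:
            x_cf = x.copy()
            x_cf[j] = value
            if predict(x_cf) == target_class:
                dist = abs(value - x[j])  # proximity: prefer small changes
                if dist < best_dist:
                    best, best_dist = x_cf, dist
    return best  # None if no single-feature change reaches the target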
no code implementations • 21 Sep 2022 • Christian A. Scholbeck, Henri Funk, Giuseppe Casalicchio
The partial dependence for clustering evaluates average changes in cluster assignments for the entire feature space.
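A hedged sketch of that idea, assuming a clustering model with a reassignment function (e.g., `kmeans.predict`): set the feature of interest to each grid value for all observations and record the share of changed cluster assignments. The helper name `cluster_pdp` is hypothetical, not the paper's implementation.

```python
import numpy as np

def cluster_pdp(assign, X, feature, grid):
    """Partial-dependence-style summary for clustering (illustrative sketch).

    assign  : callable mapping a data matrix to cluster labels (e.g. kmeans.predict)
    X       : data matrix (n x p numpy array)
    feature : column index to vary
    grid    : candidate values for that feature
    Returns, for each grid value, the share of observations whose cluster
    assignment changes when the feature is set to that value.
    """
    base = assign(X)
    changes = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value
        changes.append(np.mean(assign(X_mod) != base))
    return np.array(changes)
```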
1 code implementation • 11 Jun 2022 • Julia Moosbauer, Giuseppe Casalicchio, Marius Lindauer, Bernd Bischl
Despite all the benefits of automated hyperparameter optimization (HPO), most modern HPO algorithms are black-boxes themselves.
1 code implementation • 15 Feb 2022 • Julia Herbinger, Bernd Bischl, Giuseppe Casalicchio
Machine learning models can automatically learn complex relationships, such as non-linear and interaction effects.
no code implementations • 21 Jan 2022 • Christian A. Scholbeck, Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl, Christian Heumann
Hence, marginal effects are typically used as approximations for feature effects, either in the form of derivatives of the prediction function or as forward differences in prediction due to a change in a feature value.
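A minimal sketch of the forward-difference variant, assuming a numeric feature and a vectorized prediction function; the helper `forward_marginal_effects` is illustrative, not the paper's implementation.

```python
import numpy as np

def forward_marginal_effects(predict, X, feature, h):
    """Forward-difference marginal effects: f(x with feature + h) - f(x).

    predict : callable returning numeric predictions for a data matrix
    X       : data matrix (n x p numpy array)
    feature : column index of the numeric feature of interest
    h       : step size of the forward difference
    """
    X_step = X.astype(float)          # copy with float dtype so the step is not truncated
    X_step[:, feature] = X_step[:, feature] + h
    return predict(X_step) - predict(X)
```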
1 code implementation • NeurIPS 2021 • Julia Moosbauer, Julia Herbinger, Giuseppe Casalicchio, Marius Lindauer, Bernd Bischl
Automated hyperparameter optimization (HPO) can support practitioners in obtaining peak performance from machine learning models.
no code implementations • 3 Sep 2021 • Christoph Molnar, Timo Freiesleben, Gunnar König, Giuseppe Casalicchio, Marvin N. Wright, Bernd Bischl
Scientists and practitioners increasingly rely on machine learning to model data and draw conclusions.
no code implementations • 28 Jul 2021 • Ludwig Bothmann, Sven Strickroth, Giuseppe Casalicchio, David Rügamer, Marius Lindauer, Fabian Scheipl, Bernd Bischl
It should be openly accessible to everyone, with as few barriers as possible; even more so for key technologies such as Machine Learning (ML) and Data Science (DS).
1 code implementation • 15 Jun 2021 • Gunnar König, Timo Freiesleben, Bernd Bischl, Giuseppe Casalicchio, Moritz Grosse-Wentrup
Direct importance provides causal insight into the model's mechanism, yet it fails to expose the leakage of information from associated but not directly used variables.
no code implementations • ICML Workshop AutoML 2021 • Julia Moosbauer, Julia Herbinger, Giuseppe Casalicchio, Marius Lindauer, Bernd Bischl
Automated hyperparameter optimization (HPO) can support practitioners in obtaining peak performance from machine learning models.
1 code implementation • 23 Apr 2021 • Quay Au, Julia Herbinger, Clemens Stachl, Bernd Bischl, Giuseppe Casalicchio
However, for researchers and practitioners, it is often equally important to quantify the importance or visualize the effect of feature groups.
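One common way to quantify the importance of a feature group is to permute all of its columns jointly and measure the resulting increase in loss; the sketch below illustrates this idea with hypothetical names and is not the grouped importance measures proposed in the paper.

```python
import numpy as np

def grouped_permutation_importance(predict, loss, X, y, group, n_repeats=10, seed=0):
    """Permutation importance for a *group* of features (illustrative sketch).

    Jointly permutes all columns in `group` so their within-group dependence
    is preserved, and reports the average increase in loss over repeats.
    """
    rng = np.random.default_rng(seed)
    base_loss = loss(y, predict(X))
    increases = []
    for _ in range(n_repeats):
        X_perm = X.copy()
        idx = rng.permutation(X.shape[0])
        X_perm[:, group] = X[idx][:, group]   # shuffle the whole group together
        increases.append(loss(y, predict(X_perm)) - base_loss)
    return float(np.mean(increases))
```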
no code implementations • 19 Oct 2020 • Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl
To address the challenges and advance the field, we urge the community to recall its roots in interpretable, data-driven modeling from statistics and (rule-based) ML, and also to consider other areas such as sensitivity analysis, causal inference, and the social sciences.
1 code implementation • 8 Jul 2020 • Christoph Molnar, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, Bernd Bischl
An increasing number of model-agnostic interpretation techniques for machine learning (ML) models such as partial dependence plots (PDP), permutation feature importance (PFI) and Shapley values provide insightful model interpretations, but can lead to wrong conclusions if applied incorrectly.
1 code implementation • 8 Jun 2020 • Christoph Molnar, Gunnar König, Bernd Bischl, Giuseppe Casalicchio
In addition, we apply the conditional subgroups approach to partial dependence plots (PDP), a popular method for describing feature effects that can also suffer from extrapolation when features are dependent and interactions are present in the model.
2 code implementations • 8 Apr 2019 • Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl
Post-hoc model-agnostic interpretation methods such as partial dependence plots can be employed to interpret complex machine learning models.
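For reference, a one-dimensional partial dependence curve simply averages the model's predictions while one feature is fixed to each value of a grid; a minimal sketch with a hypothetical helper name and a scikit-learn-style `predict`:

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """One-dimensional partial dependence (illustrative sketch).

    For each grid value, set the feature to that value for all observations
    and average the model's predictions.
    """
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value
        pd_values.append(predict(X_mod).mean())
    return np.array(pd_values)
```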
2 code implementations • 8 Apr 2019 • Christian A. Scholbeck, Christoph Molnar, Christian Heumann, Bernd Bischl, Giuseppe Casalicchio
Model-agnostic interpretation techniques allow us to explain the behavior of any predictive model.
no code implementations • 8 Apr 2019 • Quay Au, Daniel Schalk, Giuseppe Casalicchio, Ramona Schoedel, Clemens Stachl, Bernd Bischl
One way to address this problem is the so-called problem transformation method.
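The simplest problem transformation method is binary relevance: the multilabel task is turned into one binary classification problem per label. A hedged sketch assuming a scikit-learn-style base estimator (the paper itself works with the R package mlr):

```python
from copy import deepcopy
import numpy as np

class BinaryRelevance:
    """Binary relevance problem transformation (illustrative sketch).

    Fits one copy of a base binary classifier per label and predicts each
    label independently; label dependencies are ignored.
    """
    def __init__(self, base_estimator):
        self.base_estimator = base_estimator
        self.models_ = []

    def fit(self, X, Y):
        # Y is an (n x q) binary indicator matrix, one column per label.
        self.models_ = []
        for j in range(Y.shape[1]):
            model = deepcopy(self.base_estimator)
            model.fit(X, Y[:, j])
            self.models_.append(model)
        return self

    def predict(self, X):
        return np.column_stack([m.predict(X) for m in self.models_])
```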
1 code implementation • 18 Apr 2018 • Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl
Based on local feature importance, we propose two visual tools: partial importance (PI) and individual conditional importance (ICI) plots, which visualize how changes in a feature affect the model performance on average as well as for individual observations.
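As a rough sketch of the underlying computation: an ICI curve records, for one observation, how its loss changes as the feature of interest is set to each grid value, and a PI curve is the point-wise average of the ICI curves. The helper below is illustrative and uses hypothetical names.

```python
import numpy as np

def individual_conditional_importance(predict, loss, X, y, feature, grid):
    """ICI curves (sketch): per-observation loss as one feature varies.

    predict : callable returning predictions for a data matrix
    loss    : callable taking a single true value and a single prediction
    Returns an (n x len(grid)) matrix of per-observation losses; its
    column-wise mean is a partial importance (PI) curve.
    """
    n = X.shape[0]
    curves = np.empty((n, len(grid)))
    for k, value in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, feature] = value
        preds = predict(X_mod)
        curves[:, k] = [loss(y[i], preds[i]) for i in range(n)]
    return curves

# PI curve: curves.mean(axis=0)
```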
4 code implementations • 11 Aug 2017 • Bernd Bischl, Giuseppe Casalicchio, Matthias Feurer, Pieter Gijsbers, Frank Hutter, Michel Lang, Rafael G. Mantovani, Jan N. van Rijn, Joaquin Vanschoren
Machine learning research depends on objectively interpretable, comparable, and reproducible algorithm benchmarks.
1 code implementation • 27 Mar 2017 • Philipp Probst, Quay Au, Giuseppe Casalicchio, Clemens Stachl, Bernd Bischl
We implemented several multilabel classification algorithms in the machine learning package mlr.
1 code implementation • 5 Jan 2017 • Giuseppe Casalicchio, Jakob Bossek, Michel Lang, Dominik Kirchhoff, Pascal Kerschke, Benjamin Hofner, Heidi Seibold, Joaquin Vanschoren, Bernd Bischl
We show how the OpenML package allows R users to easily search, download and upload data sets and machine learning tasks.