1 code implementation • 3 Apr 2024 • Vasilis Gkolemis, Christos Diou, Eirini Ntoutsi, Theodore Dalamagas, Bernd Bischl, Julia Herbinger, Giuseppe Casalicchio
Effector implements well-established global effect methods, assesses each method's heterogeneity, and, based on that heterogeneity, provides regional effects.
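The partial dependence plot (PDP) is one such global effect method. The following is a minimal numpy sketch of the idea, not Effector's actual API; the function name and the toy linear model are illustrative assumptions.

```python
import numpy as np

def partial_dependence(model, X, feature, grid):
    """Global effect of one feature: average the model's prediction
    over the data while that feature is fixed at each grid value."""
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value  # intervene on the feature of interest
        pd_values.append(model(X_mod).mean())
    return np.array(pd_values)

# Hypothetical toy model: f(x0, x1) = 2*x0 + x1
model = lambda Z: 2 * Z[:, 0] + Z[:, 1]
rng = np.random.default_rng(0)
X = rng.random((100, 2))
grid = np.linspace(0, 1, 5)
pd_est = partial_dependence(model, X, 0, grid)
# For this additive model the PDP is exactly 2*grid + mean(x1)
```

Heterogeneity, in this setting, refers to how much the individual (per-observation) curves deviate from this averaged curve.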
no code implementations • 7 Mar 2024 • Julian Rodemann, Federico Croppi, Philipp Arens, Yusuf Sale, Julia Herbinger, Bernd Bischl, Eyke Hüllermeier, Thomas Augustin, Conor J. Walsh, Giuseppe Casalicchio
We address this issue by proposing ShapleyBO, a framework for interpreting BO's proposals via game-theoretic Shapley values, which quantify each parameter's contribution to BO's acquisition function.
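The core attribution machinery can be sketched with an exact Shapley computation over all coalitions. This is a generic sketch, not ShapleyBO's implementation; the additive payoff function standing in for an acquisition-function contribution is a made-up example.

```python
from itertools import combinations
from math import factorial
import numpy as np

def shapley_values(value_fn, n):
    """Exact Shapley values by enumerating all coalitions.
    value_fn maps a tuple of player indices to a real payoff."""
    phi = np.zeros(n)
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight for a coalition of this size
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value_fn(S + (i,)) - value_fn(S))
    return phi

# Hypothetical additive payoff: each "parameter" contributes its weight;
# for an additive game, phi_i equals weight_i exactly.
weights = np.array([3.0, 1.0, 0.5])
v = lambda S: float(sum(weights[list(S)]))
phi = shapley_values(v, 3)
```

Exact enumeration is exponential in the number of players, so practical tools approximate it by sampling coalitions.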
1 code implementation • 4 Oct 2023 • Julia Herbinger, Susanne Dandl, Fiona K. Ewald, Sofia Loibl, Giuseppe Casalicchio
Surrogate models play a crucial role in retrospectively interpreting complex and powerful black box machine learning models via model distillation.
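Model distillation in this sense means fitting a simple, interpretable model to the black box's *predictions* rather than to the original labels. A minimal sketch with a linear surrogate and least squares follows; the "black box" function and the fidelity measure (R² on predictions) are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (300, 2))

# Hypothetical black box whose internals we pretend not to see
black_box = lambda Z: np.tanh(2 * Z[:, 0]) + 0.3 * Z[:, 1]
preds = black_box(X)  # distill against predictions, not labels

# Interpretable surrogate: intercept + linear terms, fit by least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, preds, rcond=None)

# Fidelity: how much of the black box's behavior the surrogate captures
fidelity = 1 - ((preds - A @ coef) ** 2).sum() / ((preds - preds.mean()) ** 2).sum()
```

The surrogate's coefficients are then read as an approximate explanation, with `fidelity` reporting how trustworthy that reading is.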
2 code implementations • 1 Jun 2023 • Julia Herbinger, Bernd Bischl, Giuseppe Casalicchio
We formally introduce the generalized additive decomposition of global effects (GADGET), a new framework based on recursive partitioning that finds interpretable regions in the feature space in which the interaction-related heterogeneity of local feature effects is minimized.
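The partitioning idea can be illustrated with a single greedy split: compute per-observation (ICE-style) effect curves, measure their spread around the mean curve, and pick the split of an interacting feature that most reduces that spread. This is a simplified one-split sketch under made-up names and a toy interaction, not GADGET's actual objective or implementation.

```python
import numpy as np

def ice_curves(model, X, feature, grid):
    """One centered local effect curve per observation."""
    curves = []
    for x in X:
        rows = np.tile(x, (len(grid), 1))
        rows[:, feature] = grid
        c = model(rows)
        curves.append(c - c.mean())  # center to compare shapes only
    return np.array(curves)

def heterogeneity(curves):
    # Spread of the curves around their pointwise mean curve
    return ((curves - curves.mean(axis=0)) ** 2).mean()

def best_split(model, X, feature, split_feature, grid):
    """Single split on split_feature minimizing weighted heterogeneity."""
    best_t, best_h = None, np.inf
    for t in np.quantile(X[:, split_feature], [0.25, 0.5, 0.75]):
        left = X[:, split_feature] <= t
        right = ~left
        if left.sum() < 2 or right.sum() < 2:
            continue
        h = (left.mean() * heterogeneity(ice_curves(model, X[left], feature, grid))
             + right.mean() * heterogeneity(ice_curves(model, X[right], feature, grid)))
        if h < best_h:
            best_t, best_h = t, h
    return best_t, best_h

# Toy interaction: f = x0 * sign(x1); splitting on x1 makes the
# effect of x0 (nearly) homogeneous within each region.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
model = lambda Z: Z[:, 0] * np.sign(Z[:, 1])
grid = np.linspace(-1, 1, 10)
t, h = best_split(model, X, 0, 1, grid)
```

Applying such splits recursively, with a principled interaction-aware objective, is what yields the interpretable regions described above.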
1 code implementation • 15 Feb 2022 • Julia Herbinger, Bernd Bischl, Giuseppe Casalicchio
Machine learning models can automatically learn complex relationships, such as non-linear and interaction effects.
1 code implementation • NeurIPS 2021 • Julia Moosbauer, Julia Herbinger, Giuseppe Casalicchio, Marius Lindauer, Bernd Bischl
Automated hyperparameter optimization (HPO) can help practitioners obtain peak performance from machine learning models.
no code implementations • ICML Workshop AutoML 2021 • Julia Moosbauer, Julia Herbinger, Giuseppe Casalicchio, Marius Lindauer, Bernd Bischl
Automated hyperparameter optimization (HPO) can help practitioners obtain peak performance from machine learning models.
1 code implementation • 23 Apr 2021 • Quay Au, Julia Herbinger, Clemens Stachl, Bernd Bischl, Giuseppe Casalicchio
However, for researchers and practitioners, it is often equally important to quantify the importance or visualize the effect of feature groups.
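A grouped variant of permutation feature importance gives the flavor: permute all columns of a group jointly (preserving within-group dependence) and measure the loss increase. This is a generic numpy sketch under assumed names and a toy model, not the paper's package.

```python
import numpy as np

def grouped_pfi(model, X, y, groups, metric, rng):
    """Permutation importance for feature *groups*: permute a group's
    columns jointly and report the resulting loss increase."""
    base = metric(y, model(X))
    importances = {}
    for name, cols in groups.items():
        Xp = X.copy()
        idx = rng.permutation(len(X))
        Xp[:, cols] = X[idx][:, cols]  # joint permutation keeps within-group structure
        importances[name] = metric(y, model(Xp)) - base
    return importances

# Hypothetical setup: features 0-1 drive the outcome, 2-3 barely matter
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = 3 * X[:, 0] + 3 * X[:, 1] + 0.1 * X[:, 2]
model = lambda Z: 3 * Z[:, 0] + 3 * Z[:, 1] + 0.1 * Z[:, 2]
mse = lambda a, b: ((a - b) ** 2).mean()
imp = grouped_pfi(model, X, y, {"strong": [0, 1], "weak": [2, 3]}, mse, rng)
```

Permuting the group jointly, rather than each column separately, is what makes the score attributable to the group as a whole.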
1 code implementation • 8 Jul 2020 • Christoph Molnar, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, Bernd Bischl
An increasing number of model-agnostic interpretation techniques for machine learning (ML) models such as partial dependence plots (PDP), permutation feature importance (PFI) and Shapley values provide insightful model interpretations, but can lead to wrong conclusions if applied incorrectly.
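One such pitfall is easy to demonstrate: under strong feature dependence, marginal permutation evaluates the model on unrealistic data, so PFI can flag a feature with no causal effect as important. The sketch below is an illustrative construction, not an example from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
x0 = rng.normal(size=1000)
x1 = x0 + 0.01 * rng.normal(size=1000)  # x1 is nearly a copy of x0
X = np.column_stack([x0, x1])
y = x0                                   # only x0 matters causally

# A model that spreads its weight over the correlated pair fits y almost perfectly
model = lambda Z: 0.5 * Z[:, 0] + 0.5 * Z[:, 1]

def pfi(model, X, y, j, rng):
    """Marginal permutation feature importance for one column."""
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    return ((y - model(Xp)) ** 2).mean() - ((y - model(X)) ** 2).mean()

# Permuting x1 breaks the x0-x1 dependence, so x1 looks clearly important
# even though it has no effect on y once x0 is known.
pfi_x1 = pfi(model, X, y, 1, rng)
```

Conditional-sampling variants of PFI exist precisely to avoid this extrapolation, which is the kind of applicability condition such pitfall guides spell out.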