Search Results for author: Julia Herbinger

Found 9 papers, 7 papers with code

Effector: A Python package for regional explanations

1 code implementation · 3 Apr 2024 · Vasilis Gkolemis, Christos Diou, Eirini Ntoutsi, Theodore Dalamagas, Bernd Bischl, Julia Herbinger, Giuseppe Casalicchio

Effector implements well-established global effect methods, assesses the heterogeneity of each method's effect estimates and, based on that, provides regional effects.
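
The snippet below is a minimal numpy sketch of that idea, not effector's actual API: a toy model with an interaction has a near-zero global effect for x1 but highly heterogeneous local effects, and splitting the feature space on x2 yields two homogeneous regional effects.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model with an interaction: the effect of x1 flips sign with x2.
def model(X):
    return X[:, 0] * X[:, 1]

n = 1000
X = np.column_stack([rng.uniform(-1, 1, n), rng.choice([-1.0, 1.0], n)])

# Local (ICE-style) slope of x1 for each instance: d f / d x1 = x2 here.
local_slopes = X[:, 1]

# The global average effect of x1 is ~0, but its heterogeneity is high,
# so the global effect plot alone would be misleading.
global_slope = local_slopes.mean()
heterogeneity = local_slopes.var()

# Splitting the feature space on x2 gives two homogeneous regions.
region_pos = local_slopes[X[:, 1] > 0]
region_neg = local_slopes[X[:, 1] < 0]
print(round(heterogeneity, 2))             # high before the split
print(region_pos.var(), region_neg.var())  # zero within each region
```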

Explaining Bayesian Optimization by Shapley Values Facilitates Human-AI Collaboration

No code implementations · 7 Mar 2024 · Julian Rodemann, Federico Croppi, Philipp Arens, Yusuf Sale, Julia Herbinger, Bernd Bischl, Eyke Hüllermeier, Thomas Augustin, Conor J. Walsh, Giuseppe Casalicchio

We address this issue by proposing ShapleyBO, a framework for interpreting BO's proposals via game-theoretic Shapley values, which quantify each parameter's contribution to BO's acquisition function.

Tasks: Bayesian Optimization, Gaussian Processes
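
As a sketch of the underlying mechanics (toy value function, not ShapleyBO's implementation), exact Shapley values can be computed by enumerating coalitions; the hypothetical v below stands in for the acquisition gain attributed to a subset of parameters.

```python
from itertools import combinations
from math import factorial

# Toy "acquisition" value function over three parameters (players):
# additive per-parameter gains plus an interaction bonus for {0, 1}.
def v(S):
    S = frozenset(S)
    gain = {0: 1.0, 1: 2.0, 2: 0.5}
    base = sum(gain[i] for i in S)
    if {0, 1} <= S:
        base += 1.0
    return base

players = [0, 1, 2]
n = len(players)

def shapley(i):
    """Exact Shapley value: weighted marginal contributions over all coalitions."""
    total = 0.0
    others = [p for p in players if p != i]
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += w * (v(set(S) | {i}) - v(S))
    return total

phi = [shapley(i) for i in players]
print([round(p, 3) for p in phi])  # interaction bonus split between 0 and 1
print(round(sum(phi), 3))          # efficiency: equals v(all) - v(empty)
```

The efficiency axiom makes the attribution auditable: the values must sum to the full coalition's gain, which is what makes Shapley values attractive for explaining a proposal to a human collaborator.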

Leveraging Model-based Trees as Interpretable Surrogate Models for Model Distillation

1 code implementation · 4 Oct 2023 · Julia Herbinger, Susanne Dandl, Fiona K. Ewald, Sofia Loibl, Giuseppe Casalicchio

Surrogate models play a crucial role in retrospectively interpreting complex and powerful black box machine learning models via model distillation.
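
A minimal sketch of the distillation idea (plain numpy, illustrative names only): a model-based stump, i.e. one split with a linear model per leaf, is fitted to the black box's predictions rather than to the original labels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Black-box model to distill (stands in for any fitted ML model).
def black_box(x):
    return np.abs(x)

x = rng.uniform(-2, 2, 500)
y_bb = black_box(x)  # the surrogate is trained on the black box's outputs

def fit_leaf(xs, ys):
    """Least-squares slope and intercept for one leaf."""
    A = np.column_stack([xs, np.ones_like(xs)])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

# Exhaustive search for the split that lets the per-leaf linear models
# best reproduce the black box.
best = None
for split in np.linspace(-1.5, 1.5, 31):
    L = x <= split
    if L.sum() < 2 or (~L).sum() < 2:
        continue
    cl, cr = fit_leaf(x[L], y_bb[L]), fit_leaf(x[~L], y_bb[~L])
    pred = np.where(L, cl[0] * x + cl[1], cr[0] * x + cr[1])
    sse = np.sum((pred - y_bb) ** 2)
    if best is None or sse < best[0]:
        best = (sse, split, cl, cr)

sse, split, cl, cr = best
print(round(split, 2))  # split lands at the kink of |x|, near 0
print(round(sse, 4))    # near-perfect fit: the surrogate mirrors the black box
```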

Decomposing Global Feature Effects Based on Feature Interactions

2 code implementations · 1 Jun 2023 · Julia Herbinger, Bernd Bischl, Giuseppe Casalicchio

We formally introduce the generalized additive decomposition of global effects (GADGET), a new framework based on recursive partitioning that finds interpretable regions in the feature space such that the interaction-related heterogeneity of local feature effects is minimized.
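
A single partitioning step can be sketched as follows (a toy stand-in that assumes analytic local effects; the real method works on estimated local feature effects): greedily choose the feature and threshold whose split minimizes the within-region variance of the local effects.

```python
import numpy as np

rng = np.random.default_rng(2)

# f(x1, x2, x3) = x1 * x3: the local effect of x1 depends on x3 (interaction).
n = 2000
X = rng.uniform(-1, 1, (n, 3))
local_effect_x1 = X[:, 2]  # d f / d x1 = x3 for this toy model

def split_risk(z, effects, t):
    """Interaction-related heterogeneity remaining after splitting on z <= t."""
    L = z <= t
    risk = 0.0
    for mask in (L, ~L):
        if mask.sum() > 1:
            risk += effects[mask].var() * mask.sum()
    return risk

# Greedy search over candidate split features and thresholds (one step of
# the recursive partitioning).
best = min(
    ((split_risk(X[:, j], local_effect_x1, t), j, t)
     for j in (1, 2)
     for t in np.linspace(-0.9, 0.9, 19)),
    key=lambda r: r[0],
)
risk, feature, threshold = best
print(feature, round(threshold, 2))  # splits on x3 (index 2) near its median
```

Splitting on x2 cannot reduce the variance of the local effects, so the search correctly identifies x3 as the interacting feature.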

REPID: Regional Effect Plots with implicit Interaction Detection

1 code implementation · 15 Feb 2022 · Julia Herbinger, Bernd Bischl, Giuseppe Casalicchio

Machine learning models can automatically learn complex relationships, such as non-linear and interaction effects.

Tasks: BIG-bench Machine Learning, Interpretable Machine Learning

Grouped Feature Importance and Combined Features Effect Plot

1 code implementation · 23 Apr 2021 · Quay Au, Julia Herbinger, Clemens Stachl, Bernd Bischl, Giuseppe Casalicchio

However, for researchers and practitioners, it is often equally important to quantify the importance or visualize the effect of feature groups.

Tasks: BIG-bench Machine Learning, Feature Importance, +1
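
One common way to quantify group importance is grouped permutation importance: permute all features of a group jointly and measure the resulting loss increase. A self-contained numpy sketch (toy data and model, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)

# Data with two feature groups: group A = columns 0-1, group B = columns 2-3.
n = 2000
X = rng.normal(size=(n, 4))
y = 2 * X[:, 0] + 2 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.1, size=n)

# Stand-in for a fitted model (any predictor with a predict function works).
def model(X):
    return 2 * X[:, 0] + 2 * X[:, 1] + 0.1 * X[:, 2]

def mse(a, b):
    return np.mean((a - b) ** 2)

def grouped_importance(cols):
    """Permute all columns of a group with the SAME permutation (jointly),
    preserving within-group dependence, and report the MSE increase."""
    Xp = X.copy()
    perm = rng.permutation(n)
    Xp[:, cols] = Xp[perm][:, cols]
    return mse(y, model(Xp)) - mse(y, model(X))

imp_A = grouped_importance([0, 1])
imp_B = grouped_importance([2, 3])
print(round(imp_A, 2), round(imp_B, 2))  # group A matters far more
```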

General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models

1 code implementation · 8 Jul 2020 · Christoph Molnar, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, Bernd Bischl

An increasing number of model-agnostic interpretation techniques for machine learning (ML) models such as partial dependence plots (PDP), permutation feature importance (PFI) and Shapley values provide insightful model interpretations, but can lead to wrong conclusions if applied incorrectly.

Tasks: BIG-bench Machine Learning, Feature Importance
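
One such pitfall, extrapolation under correlated features, can be demonstrated directly: permuting a single feature breaks its correlation with the others, so PFI evaluates the model on unrealistic inputs far from the training distribution. A small numpy sketch (toy data, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two strongly correlated features: x1 is a noisy copy of x0.
n = 1000
x0 = rng.normal(size=n)
x1 = x0 + rng.normal(scale=0.05, size=n)
X = np.column_stack([x0, x1])

# Permutation feature importance permutes one column independently ...
Xp = X.copy()
Xp[:, 0] = rng.permutation(Xp[:, 0])

# ... which destroys the correlation and creates feature combinations the
# model never saw during training, so its predictions there are unreliable.
gap_before = np.max(np.abs(X[:, 0] - X[:, 1]))
gap_after = np.max(np.abs(Xp[:, 0] - Xp[:, 1]))
print(round(gap_before, 2), round(gap_after, 2))  # gap explodes after permuting
```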
