no code implementations • 11 Jun 2022 • Timo Freiesleben, Gunnar König, Christoph Molnar, Alvaro Tejero-Cantero
These descriptors are IML methods that provide insight not just into the model, but also into the properties of the phenomenon the model is designed to represent.
no code implementations • 21 Jan 2022 • Christian A. Scholbeck, Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl, Christian Heumann
Hence, marginal effects are typically used as approximations for feature effects, either as derivatives of the prediction function or as forward differences in the prediction due to a change in a feature value.
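A forward-difference marginal effect can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the prediction function and step size below are assumptions chosen for the example.

```python
import numpy as np

# Hypothetical prediction function standing in for a fitted model.
def predict(X):
    # f(x) = x0^2 + 2*x1 (a stand-in, not any specific model)
    return X[:, 0] ** 2 + 2 * X[:, 1]

def forward_difference_me(predict, X, feature, h=0.5):
    """Marginal effect of `feature` via a forward difference:
    (f(x + h*e_j) - f(x)) / h, averaged over the data."""
    X_shift = X.copy()
    X_shift[:, feature] += h
    return np.mean((predict(X_shift) - predict(X)) / h)

X = np.array([[1.0, 0.0], [2.0, 1.0], [3.0, -1.0]])
# f is linear in x1 with slope 2, so the forward difference recovers 2 exactly.
print(forward_difference_me(predict, X, feature=1))  # 2.0
```

For the quadratic feature, the forward difference returns `2*x0 + h` per observation rather than the exact derivative `2*x0`, which is the approximation error the abstract alludes to.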
no code implementations • 3 Sep 2021 • Christoph Molnar, Timo Freiesleben, Gunnar König, Giuseppe Casalicchio, Marvin N. Wright, Bernd Bischl
Scientists and practitioners increasingly rely on machine learning to model data and draw conclusions.
no code implementations • 19 Oct 2020 • Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl
To address these challenges and advance the field, we urge the community to recall its roots in interpretable, data-driven modeling in statistics and (rule-based) ML, and also to draw on other areas such as sensitivity analysis, causal inference, and the social sciences.
3 code implementations • 16 Jul 2020 • Gunnar König, Christoph Molnar, Bernd Bischl, Moritz Grosse-Wentrup
Interpretable Machine Learning (IML) methods are used to gain insight into the relevance of a feature of interest for the performance of a model.
1 code implementation • 8 Jul 2020 • Christoph Molnar, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, Bernd Bischl
An increasing number of model-agnostic interpretation techniques for machine learning (ML) models such as partial dependence plots (PDP), permutation feature importance (PFI) and Shapley values provide insightful model interpretations, but can lead to wrong conclusions if applied incorrectly.
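To make one of the listed techniques concrete, here is a minimal sketch of permutation feature importance (PFI): the loss increase after shuffling a single feature column. The toy data and the stand-in model are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the target depends only on the first feature.
X = rng.normal(size=(200, 2))
y = 3 * X[:, 0]

# Stand-in "model" that happens to match the true function.
def predict(X):
    return 3 * X[:, 0]

def permutation_importance(predict, X, y, feature, rng):
    """PFI: increase in mean squared error after permuting one feature."""
    base = np.mean((y - predict(X)) ** 2)
    Xp = X.copy()
    Xp[:, feature] = rng.permutation(Xp[:, feature])
    return np.mean((y - predict(Xp)) ** 2) - base

print(permutation_importance(predict, X, y, 0, rng))  # large: feature 0 matters
print(permutation_importance(predict, X, y, 1, rng))  # 0.0: feature 1 is ignored
```

Misreading such a score as a property of the data rather than of this particular model is exactly the kind of pitfall the paper warns about.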
1 code implementation • 8 Jun 2020 • Christoph Molnar, Gunnar König, Bernd Bischl, Giuseppe Casalicchio
In addition, we apply the conditional subgroups approach to partial dependence plots (PDP), a popular method for describing feature effects that can also suffer from extrapolation when features are dependent and interactions are present in the model.
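The extrapolation problem is easiest to see in the marginal PDP itself, sketched below. This is not the paper's conditional-subgroup method; the interaction model and the perfectly dependent features are assumptions chosen to make the artifact visible.

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """Marginal PDP: average prediction with `feature` fixed at each grid
    value. Averaging over the full marginal distribution is what forces
    evaluation at unrealistic feature combinations when features are
    dependent; conditional variants instead average within subgroups."""
    return np.array([predict(np.column_stack([
        np.full(len(X), v) if j == feature else X[:, j]
        for j in range(X.shape[1])
    ])).mean() for v in grid])

# Stand-in model with an interaction term.
def predict(X):
    return X[:, 0] * X[:, 1]

X = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])  # x1 = 2*x0: dependent
grid = np.array([0.0, 1.0, 2.0])
print(partial_dependence(predict, X, 0, grid))  # [0. 4. 8.]
```

Note that the point `x0 = 0` is paired with `x1` values up to 6 even though the data satisfy `x1 = 2*x0`: the PDP averages over combinations the model never saw.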
1 code implementation • 23 Apr 2020 • Susanne Dandl, Christoph Molnar, Martin Binder, Bernd Bischl
We show the usefulness of MOC in concrete cases and compare our approach with state-of-the-art methods for counterfactual explanations.
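For readers new to counterfactual explanations, the sketch below shows the basic idea in its simplest form: find a nearby input that flips the prediction. The greedy search and the toy classifier are assumptions for illustration; MOC itself optimizes several objectives (validity, proximity, sparsity, plausibility) jointly rather than greedily.

```python
import numpy as np

def predict(x):
    # Hypothetical classifier: positive iff x0 + x1 > 2
    return 1 if x[0] + x[1] > 2 else 0

def naive_counterfactual(predict, x, target, step=0.1, max_steps=100):
    """Greedy sketch: nudge the input until the prediction reaches
    `target`, returning the first such point (or None)."""
    x = np.asarray(x, dtype=float)
    for _ in range(max_steps):
        if predict(x) == target:
            return x
        x = x + step  # crude uniform nudge; a real method searches directions
    return None

cf = naive_counterfactual(predict, [0.5, 0.5], target=1)
print(cf)  # a nearby point with x0 + x1 > 2
```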
2 code implementations • 8 Apr 2019 • Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl
Post-hoc model-agnostic interpretation methods such as partial dependence plots can be employed to interpret complex machine learning models.
2 code implementations • 8 Apr 2019 • Christian A. Scholbeck, Christoph Molnar, Christian Heumann, Bernd Bischl, Giuseppe Casalicchio
Model-agnostic interpretation techniques allow us to explain the behavior of any predictive model.
1 code implementation • 18 Apr 2018 • Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl
Based on local feature importance, we propose two visual tools: partial importance (PI) and individual conditional importance (ICI) plots, which visualize how changes in a feature affect model performance both on average and for individual observations.
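The quantity behind an ICI curve can be sketched directly: for each observation, the change in loss as one feature is set to each grid value; averaging the curves over observations yields the PI curve. The stand-in model, loss, and data below are assumptions for illustration.

```python
import numpy as np

def ici_curves(predict, loss, X, y, feature, grid):
    """One ICI curve per observation: change in loss as `feature` is set
    to each grid value. curves.mean(axis=0) gives the PI curve."""
    curves = np.empty((len(X), len(grid)))
    base = loss(y, predict(X))
    for j, v in enumerate(grid):
        Xg = X.copy()
        Xg[:, feature] = v
        curves[:, j] = loss(y, predict(Xg)) - base
    return curves

def predict(X):
    return X[:, 0] + X[:, 1]

def sq_loss(y, yhat):
    return (y - yhat) ** 2

X = np.array([[1.0, 1.0], [2.0, 0.0]])
y = predict(X)  # model is exact on these points, so the base loss is 0
grid = np.array([0.0, 1.0, 2.0])
print(ici_curves(predict, sq_loss, X, y, 0, grid))
```

Each row is one observation's curve; where a row dips below its neighbors, the feature value at that grid point would improve that observation's prediction.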