Search Results for author: Christoph Molnar

Found 11 papers, 7 papers with code

Scientific Inference With Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena

no code implementations • 11 Jun 2022 • Timo Freiesleben, Gunnar König, Christoph Molnar, Alvaro Tejero-Cantero

These descriptors are IML methods that provide insight not just into the model, but also into the properties of the phenomenon the model is designed to represent.

BIG-bench Machine Learning • Interpretable Machine Learning

Marginal Effects for Non-Linear Prediction Functions

no code implementations • 21 Jan 2022 • Christian A. Scholbeck, Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl, Christian Heumann

Hence, marginal effects are typically used as approximations for feature effects, either in the form of derivatives of the prediction function or of forward differences in the prediction due to a change in a feature value.
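
For illustration, a minimal sketch of the forward-difference flavor of a marginal effect, assuming a fitted scikit-learn-style regressor; the dataset, feature name, and step size are placeholders rather than anything specified in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Placeholder data and model (not from the paper).
X_arr, y = make_regression(n_samples=200, n_features=4, random_state=0)
X = pd.DataFrame(X_arr, columns=[f"x{i}" for i in range(4)])
model = RandomForestRegressor(random_state=0).fit(X, y)

def forward_difference_me(model, X, feature, h=0.5):
    """Per-observation forward difference: f(x with x_j + h) - f(x)."""
    X_shifted = X.copy()
    X_shifted[feature] = X_shifted[feature] + h
    return model.predict(X_shifted) - model.predict(X)

effects = forward_difference_me(model, X, "x0")
print("average marginal effect of x0:", effects.mean())
```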

Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges

no code implementations • 19 Oct 2020 • Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl

To address the challenges and advance the field, we urge a return to our roots of interpretable, data-driven modeling in statistics and (rule-based) ML, but also consideration of other areas such as sensitivity analysis, causal inference, and the social sciences.

BIG-bench Machine Learning • Causal Inference • +1

Relative Feature Importance

3 code implementations • 16 Jul 2020 • Gunnar König, Christoph Molnar, Bernd Bischl, Moritz Grosse-Wentrup

Interpretable Machine Learning (IML) methods are used to gain insight into the relevance of a feature of interest for the performance of a model.

Feature Importance • Interpretable Machine Learning

General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models

1 code implementation • 8 Jul 2020 • Christoph Molnar, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, Bernd Bischl

An increasing number of model-agnostic interpretation techniques for machine learning (ML) models such as partial dependence plots (PDP), permutation feature importance (PFI) and Shapley values provide insightful model interpretations, but can lead to wrong conclusions if applied incorrectly.

BIG-bench Machine Learning • Feature Importance
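
As an illustration of one of the methods named above, a minimal sketch of permutation feature importance (PFI) using scikit-learn's built-in implementation; the dataset and model are placeholders.

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# PFI: drop in held-out R^2 when a single feature column is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean_imp in zip(X.columns, result.importances_mean):
    print(f"{name}: {mean_imp:.3f}")
```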

Model-agnostic Feature Importance and Effects with Dependent Features -- A Conditional Subgroup Approach

1 code implementation • 8 Jun 2020 • Christoph Molnar, Gunnar König, Bernd Bischl, Giuseppe Casalicchio

In addition, we apply the conditional subgroups approach to partial dependence plots (PDP), a popular method for describing feature effects that can also suffer from extrapolation when features are dependent and interactions are present in the model.

Feature Importance
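
A minimal sketch of the standard partial dependence computation discussed above, using scikit-learn; note that this is the plain PDP that can extrapolate when features are dependent, not the conditional subgroup variant the paper proposes. Dataset, model, and feature name are placeholders.

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

# Placeholder data and model.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Average the prediction over all rows while "bmi" is set to each grid value;
# with correlated features this averaging can extrapolate into sparse regions.
pdp = partial_dependence(model, X, features=["bmi"], kind="average")
print(pdp["average"][0][:5])  # first few averaged predictions along the grid
```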

Multi-Objective Counterfactual Explanations

1 code implementation • 23 Apr 2020 • Susanne Dandl, Christoph Molnar, Martin Binder, Bernd Bischl

We show the usefulness of MOC in concrete cases and compare our approach with state-of-the-art methods for counterfactual explanations.

counterfactual

Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability

2 code implementations • 8 Apr 2019 • Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl

Post-hoc model-agnostic interpretation methods such as partial dependence plots can be employed to interpret complex machine learning models.

BIG-bench Machine Learning • Interpretable Machine Learning

Visualizing the Feature Importance for Black Box Models

1 code implementation • 18 Apr 2018 • Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl

Based on local feature importance, we propose two visual tools: partial importance (PI) and individual conditional importance (ICI) plots, which visualize how changes in a feature affect model performance on average as well as for individual observations.

Feature Importance
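
A simplified sketch of the idea behind ICI curves: track how each observation's loss changes as one feature is set to a grid of values, and average over observations to obtain a PI curve. This only illustrates the concept and is not the authors' reference implementation; dataset, model, loss, and grid are placeholders.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Placeholder data and model.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

feature = "bmi"  # hypothetical feature of interest
grid = np.quantile(X[feature], np.linspace(0.05, 0.95, 10))
baseline_loss = (model.predict(X) - y) ** 2  # per-observation squared error

# ici[i, k]: change in observation i's loss when `feature` is set to grid[k]
ici = np.empty((len(X), len(grid)))
for k, value in enumerate(grid):
    X_mod = X.copy()
    X_mod[feature] = value
    ici[:, k] = (model.predict(X_mod) - y) ** 2 - baseline_loss

pi_curve = ici.mean(axis=0)  # PI curve: average change in loss over observations
print(np.round(pi_curve, 2))
```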
