no code implementations • 4 Apr 2024 • Susanne Dandl, Kristin Blesch, Timo Freiesleben, Gunnar König, Jan Kapar, Bernd Bischl, Marvin Wright
Counterfactual explanations elucidate algorithmic decisions by pointing to scenarios that would have led to an alternative, desired outcome.
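The idea can be illustrated with a minimal sketch: search for the smallest change to an input that flips a model's decision to the desired outcome. The toy scoring rule, feature names, and step size below are all hypothetical, not taken from the paper.

```python
# Minimal counterfactual search (illustrative; model and names are hypothetical).
# Toy credit-scoring rule: we look for the smallest increase to "income"
# that flips the decision from reject to accept.

def predict(x):
    """Toy scorer: accept iff the weighted score reaches a threshold."""
    score = 0.5 * x["income"] + 2.0 * x["years_employed"]
    return "accept" if score >= 10.0 else "reject"

def counterfactual_income(x, step=0.5, max_income=100.0):
    """Raise income in small steps until the prediction flips."""
    cf = dict(x)  # leave the original applicant unchanged
    while predict(cf) == "reject" and cf["income"] < max_income:
        cf["income"] += step
    return cf if predict(cf) == "accept" else None

applicant = {"income": 8.0, "years_employed": 1.0}
cf = counterfactual_income(applicant)
# cf is the closest accepted scenario along the income axis
```

Real counterfactual methods optimize over all features under a distance penalty; the single-feature line search here only conveys the "nearest alternative scenario" intuition.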
no code implementations • 8 Dec 2023 • Timo Freiesleben
Some go even further and believe that such human-interpretable concepts are stored in individual units of the network.
no code implementations • 7 Jun 2023 • Timo Freiesleben, Gunnar König
Despite progress in the field, significant parts of current XAI research are still not on solid conceptual, ethical, or methodological grounds.
1 code implementation • 27 Oct 2022 • Gunnar König, Timo Freiesleben, Moritz Grosse-Wentrup
We demonstrate that given correct causal knowledge, ICR, in contrast to existing approaches, guides towards both acceptance and improvement.
no code implementations • 11 Jun 2022 • Timo Freiesleben, Gunnar König, Christoph Molnar, Alvaro Tejero-Cantero
These descriptors are IML methods that provide insight not just into the model, but also into the properties of the phenomenon the model is designed to represent.
no code implementations • 3 Sep 2021 • Christoph Molnar, Timo Freiesleben, Gunnar König, Giuseppe Casalicchio, Marvin N. Wright, Bernd Bischl
Scientists and practitioners increasingly rely on machine learning to model data and draw conclusions.
no code implementations • 16 Jul 2021 • Gunnar König, Timo Freiesleben, Moritz Grosse-Wentrup
Thus, an action that changes the prediction in the desired way may not lead to an improvement of the underlying target.
1 code implementation • 15 Jun 2021 • Gunnar König, Timo Freiesleben, Bernd Bischl, Giuseppe Casalicchio, Moritz Grosse-Wentrup
Direct importance provides causal insight into the model's mechanism, yet it fails to expose the leakage of information from associated but not directly used variables.
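A small sketch of this gap, with made-up data (the setup is illustrative, not the paper's method): a model that reads only feature 0 has zero permutation importance for a correlated feature 1, even though feature 1's information reaches the prediction indirectly through feature 0.

```python
import random

# Direct importance via permutation: the model uses only feature 0;
# feature 1 is strongly associated with feature 0 but never read.
# Permuting feature 1 cannot change any prediction, so its direct
# importance is exactly zero. All names and data are illustrative.

random.seed(0)
n = 1000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [v + random.gauss(0, 0.1) for v in x1]   # associated with x1
y = x1[:]                                      # target depends on x1 only
X = list(zip(x1, x2))

def model(row):
    return row[0]          # the model reads only feature 0

def mse(X, y):
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def pfi(X, y, j):
    """Loss increase after permuting column j (one shuffle)."""
    base = mse(X, y)
    col = [r[j] for r in X]
    random.shuffle(col)
    Xp = [tuple(c if k == j else r[k] for k in range(len(r)))
          for r, c in zip(X, col)]
    return mse(Xp, y) - base

# pfi(X, y, 0) is large; pfi(X, y, 1) is exactly 0, hiding the
# information that leaks from x2 into the prediction via x1.
```

Exposing that leakage requires going beyond direct importance, e.g. by also quantifying how much of a feature's information the model picks up through associated variables.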
no code implementations • 11 Sep 2020 • Timo Freiesleben
The same method that creates adversarial examples (AEs) to fool image-classifiers can be used to generate counterfactual explanations (CEs) that explain algorithmic decisions.
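The shared mechanism can be sketched as follows: both AEs and CEs come from perturbing an input until the classifier's output flips. For a linear model the gradient of the score with respect to the input is just the weight vector, so a sign-gradient step suffices. The classifier and step size below are hypothetical.

```python
# AEs and CEs from the same optimization: find a small perturbation of x
# that flips the classifier's label. Toy linear model; the gradient of
# the score w.r.t. x is simply the weight vector w. Values illustrative.

w = [2.0, -1.0]          # weights of a toy linear classifier
b = 0.0

def classify(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

def flip(x, eps=0.05, max_steps=1000):
    """FGSM-style steps along the signed gradient of the score until
    the label flips (an AE for the model, a CE for the person)."""
    target = 1 - classify(x)
    sign = 1 if target == 1 else -1   # raise or lower the score
    x = list(x)
    for _ in range(max_steps):
        if classify(x) == target:
            return x
        # d(score)/dx_i = w_i, so step eps in the signed direction of w_i
        x = [xi + sign * eps * (1 if wi > 0 else -1)
             for xi, wi in zip(x, w)]
    return None
```

Whether the resulting point is read as an attack or an explanation depends on the use case, which is precisely the tension the paper examines.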
1 code implementation • 8 Jul 2020 • Christoph Molnar, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, Bernd Bischl
An increasing number of model-agnostic interpretation techniques for machine learning (ML) models such as partial dependence plots (PDP), permutation feature importance (PFI) and Shapley values provide insightful model interpretations, but can lead to wrong conclusions if applied incorrectly.
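For concreteness, a bare-bones PDP computation, one of the methods whose misuse the paper catalogues: fix one feature at a grid value in every row, average the predictions, and repeat across the grid. The toy model and data are made up; misreading such a curve under strong feature correlation or interactions is a classic pitfall.

```python
# Sketch of a partial dependence plot (PDP) computation. For each grid
# value v, replace the chosen feature with v in every data row and
# average the model's predictions. Model and data are illustrative.

def model(x1, x2):
    return x1 * x2        # toy model with an interaction

data = [(1.0, 2.0), (2.0, -1.0), (3.0, 0.5)]

def pdp(grid_values, feature=0):
    """Partial dependence curve of `feature` over `grid_values`."""
    curve = []
    for v in grid_values:
        preds = [model(v, x2) if feature == 0 else model(x1, v)
                 for x1, x2 in data]
        curve.append(sum(preds) / len(preds))
    return curve

# pdp([0.0, 1.0, 2.0]) returns the average prediction at each grid value
```

Because the averaging marginalizes over the other features regardless of how they co-vary with the plotted one, a PDP can suggest effects at feature combinations the data never contains, one of the failure modes the paper warns about.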