no code implementations • 2 Jan 2024 • Matthias Jakobs, Amal Saadallah
We propose a novel method for the online selection of tree-based models in time series forecasting, using the TreeSHAP explainability method.
no code implementations • 27 Jun 2023 • Sebastian Müller, Vanessa Toborek, Katharina Beckh, Matthias Jakobs, Christian Bauckhage, Pascal Welke
The Rashomon Effect describes the following phenomenon: for a given dataset, there may exist many models with equally good performance but different solution strategies.
1 code implementation • 17 Apr 2023 • Raphael Fischer, Matthias Jakobs, Katharina Morik
Advances in artificial intelligence need to become more resource-aware and sustainable.
1 code implementation • 22 Jan 2023 • Raoul Heese, Thore Gerlach, Sascha Mücke, Sabine Müller, Matthias Jakobs, Nico Piatkowski
The resulting attributions can be interpreted as explanations of why a specific circuit works well for a given task, improving the understanding of how to construct parameterized (or variational) quantum circuits and fostering their human interpretability in general.
Tasks: Explainable Artificial Intelligence (XAI), Quantum Machine Learning
no code implementations • 19 Jan 2023 • Raoul Heese, Sascha Mücke, Matthias Jakobs, Thore Gerlach, Nico Piatkowski
We propose a novel definition of Shapley values with uncertain value functions based on first principles using probability theory.
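For reference, the classical (deterministic) Shapley value that this work generalizes assigns player $i$ the quantity $\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\big(v(S \cup \{i\}) - v(S)\big)$. A brute-force computation over all coalitions, feasible only for small player sets (function and variable names here are hypothetical, not from the paper):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact classical Shapley values: for each player i, sum the weighted
    marginal contributions v(S ∪ {i}) - v(S) over all coalitions S without i."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(S) | {i}) - value(set(S)))
        phi[i] = total
    return phi

# Toy additive game: v(S) is the sum of fixed weights, so phi_i = weight_i.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda S: sum(weights[p] for p in S)
print(shapley_values(list(weights), v))
```

In the additive toy game each player's Shapley value recovers its own weight, and the values sum to the grand-coalition payoff $v(N)$ (the efficiency axiom), which makes the sketch easy to sanity-check.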
no code implementations • 21 May 2021 • Katharina Morik, Helena Kotthaus, Raphael Fischer, Sascha Mücke, Matthias Jakobs, Nico Piatkowski, Andreas Pauly, Lukas Heppe, Danny Heinrich
How can they be guaranteed for a certain implementation of a machine learning model?
no code implementations • 21 May 2021 • Katharina Beckh, Sebastian Müller, Matthias Jakobs, Vanessa Toborek, Hanxiao Tan, Raphael Fischer, Pascal Welke, Sebastian Houben, Laura von Rueden
This survey presents an overview of integrating prior knowledge into machine learning systems in order to improve explainability.