no code implementations • 5 Oct 2023 • Amir Hossein Akhavan Rahnama
Using this proposed taxonomy, we highlight that all categories of evaluation methods, except those based on the ground truth from interpretable models, suffer from a problem we call the "blame problem."
1 code implementation • Data Mining and Knowledge Discovery 2023 • Amir Hossein Akhavan Rahnama, Judith Bütepage, Pierre Geurts, Henrik Boström
Local model-agnostic additive explanation techniques decompose the predicted output of a black-box model into additive feature importance scores.
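The additivity property described above can be illustrated with a minimal sketch (the model and numbers are assumptions for illustration, not the paper's setup): a linear model is its own exact additive explanation, with per-feature contributions `w_i * x_i` and the intercept as the baseline.

```python
import numpy as np

def additive_check(baseline, contributions, prediction, tol=1e-9):
    """Verify the additivity property of local additive explanations:
    the feature importance scores should sum to the gap between the
    model's prediction and a baseline (expected) output."""
    return abs(baseline + sum(contributions) - prediction) <= tol

# Toy linear model f(x) = w . x + b (an assumption for illustration).
w = np.array([2.0, -1.0, 0.5])
b = 0.3
x = np.array([1.0, 2.0, 4.0])
prediction = float(w @ x + b)

# Each feature's contribution is w_i * x_i; the intercept acts as the baseline.
contributions = (w * x).tolist()
```

Techniques such as SHAP and LIME produce scores of this form for arbitrary black-box models, where the decomposition is approximate rather than exact.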
no code implementations • 4 Mar 2022 • Amir Hossein Akhavan Rahnama, Judith Bütepage
Instead of using black-box models, such as neural networks, we propose to focus on tree-based LTR models, from which we can extract the ground truth feature importance scores using decision paths.
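The idea of reading feature importance off a decision path can be sketched as follows; the node layout and the use of a simple usage count as the importance score are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical tiny decision tree: each internal node tests one feature
# against a threshold; leaves hold predictions.
TREE = {
    "feature": 0, "threshold": 0.5,
    "left": {"feature": 1, "threshold": 1.0,
             "left": {"value": 0.0}, "right": {"value": 1.0}},
    "right": {"value": 2.0},
}

def decision_path_importance(node, x):
    """Walk the decision path for instance x and count how often each
    feature is tested along the way -- a simple local 'ground truth'
    feature importance score extractable from a tree model."""
    importance = {}
    while "feature" in node:
        f = node["feature"]
        importance[f] = importance.get(f, 0) + 1
        node = node["left"] if x[f] <= node["threshold"] else node["right"]
    return importance

# Instance [0.2, 3.0] passes tests on features 0 and 1 before reaching a leaf.
scores = decision_path_importance(TREE, [0.2, 3.0])
```

Because the path is determined exactly by the model itself, such scores can serve as a ground truth against which black-box explanation techniques are evaluated.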
no code implementations • 4 Jun 2021 • Amir Hossein Akhavan Rahnama, Judith Bütepage, Pierre Geurts, Henrik Boström
Evaluating explanation techniques with human subjects is costly and time-consuming, and can introduce subjectivity into the assessments.
no code implementations • 31 Oct 2019 • Amir Hossein Akhavan Rahnama, Henrik Boström
LIME is a popular approach for explaining a black-box prediction through an interpretable model that is trained on instances in the vicinity of the predicted instance.
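The LIME idea described above can be sketched in a few lines: perturb the instance, weight neighbors by proximity, and fit a weighted linear surrogate. The black-box function, kernel, and sampling scheme below are simplifying assumptions, not the reference LIME implementation.

```python
import numpy as np

def black_box(X):
    # Stand-in black-box model (an assumption for illustration).
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def lime_like_explain(f, x, n_samples=500, scale=0.1, seed=0):
    """Minimal LIME-style sketch: sample instances near x, weight them by
    proximity, and fit a weighted linear surrogate whose coefficients act
    as local feature importance scores."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = f(X)
    # Proximity kernel: closer samples get higher weight.
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X])
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)
    return coef[1:]  # local linear coefficients, one per feature

coefs = lime_like_explain(black_box, np.array([0.0, 1.0]))
```

Around the point (0, 1) the surrogate's coefficients approximate the local gradient of the black box, which is what makes the interpretable model faithful only in the vicinity of the explained instance.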
no code implementations • 29 Mar 2018 • Amir Hossein Akhavan Rahnama, Mehdi Toloo, Nezer Jacob Zaidenberg
We apply our model to find hyperparameters of a language model and compare it to the grid search algorithm.
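The grid search baseline mentioned above can be sketched as an exhaustive sweep over hyperparameter combinations; the objective function below is a toy stand-in for training a language model, not the paper's experimental setup.

```python
from itertools import product

def grid_search(objective, grid):
    """Exhaustive grid search: evaluate every combination of hyperparameter
    values and keep the best-scoring one."""
    best_score, best_params = float("-inf"), None
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = objective(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Toy objective peaked at lr=0.01, hidden=128 (an assumption for illustration).
objective = lambda p: -(p["lr"] - 0.01) ** 2 - (p["hidden"] - 128) ** 2 / 1e4
best, score = grid_search(objective, {"lr": [0.001, 0.01, 0.1],
                                      "hidden": [64, 128, 256]})
```

The cost grows multiplicatively with each added hyperparameter, which is the usual motivation for model-based alternatives to grid search.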
no code implementations • 27 Dec 2016 • Amir Hossein Akhavan Rahnama
The central challenge of real-time stream data processing is that it is impossible to store all instances of the data; online analytical algorithms are therefore used.
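A classic example of such an online algorithm is Welford's method for running mean and variance, which processes each instance once and discards it (chosen here as a generic illustration; the abstract does not name a specific algorithm):

```python
class OnlineMeanVariance:
    """Welford's online algorithm: maintains the running mean and variance
    of a stream in constant memory, without storing past instances."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the current mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # Population variance of everything seen so far.
        return self.m2 / self.n if self.n else 0.0

stream = OnlineMeanVariance()
for value in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stream.update(value)
```

Each update is O(1) in time and memory, which is what makes this family of algorithms viable when the stream cannot be stored.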