Evaluating Local Explanations using White-box Models

4 Jun 2021 · Amir Hossein Akhavan Rahnama, Judith Butepage, Pierre Geurts, Henrik Bostrom

Evaluating explanation techniques using human subjects is costly, time-consuming, and can introduce subjectivity into the assessments. To evaluate the accuracy of local explanations, we require access to the true feature importance scores for a given instance. However, the prediction function of a model usually does not decompose into linear additive terms that indicate how much each feature contributes to the output. In this work, we suggest focusing instead on the log odds ratio (LOR) of the prediction function, which naturally decomposes into additive terms for logistic regression and naive Bayes. We demonstrate how our proposed approach can be used to benchmark different explanation techniques in terms of their similarity to the LOR scores. In the experiments, we compare prominent local explanation techniques and find that their performance can depend on the underlying model, the dataset, which data point is explained, the normalization of the data, and the similarity metric.
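
For logistic regression, the log odds ratio decomposes additively as log p(y=1|x)/p(y=0|x) = b + Σ_i w_i x_i, so the per-feature terms w_i x_i can serve as ground-truth importance scores for an instance x. The sketch below is illustrative rather than the paper's exact protocol: it trains a scikit-learn logistic regression model, computes the LOR contributions for one instance, and compares them to a placeholder explanation vector with Spearman rank correlation as one possible similarity metric. The dataset, preprocessing, and the random placeholder explanation are assumptions for demonstration only.

```python
# Illustrative sketch, not the authors' exact evaluation pipeline.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Example dataset and normalization (assumptions; the paper notes that
# normalization can affect the comparison).
X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
model = LogisticRegression(max_iter=5000).fit(X, y)

def lor_contributions(model, x):
    """Per-feature additive terms w_i * x_i of the log odds ratio."""
    return model.coef_[0] * x

x = X[0]                                  # the instance being explained
ground_truth = lor_contributions(model, x)

# `explanation` stands in for the attribution vector produced by any local
# explanation technique (e.g. LIME or SHAP) for the same instance; a random
# vector is used here purely as a placeholder.
explanation = np.random.default_rng(0).normal(size=x.shape)

# Similarity between the explanation and the LOR ground truth.
rho, _ = spearmanr(ground_truth, explanation)
print(f"Spearman rank correlation with LOR scores: {rho:.3f}")
```

In practice the placeholder would be replaced by the actual attribution scores of each explanation technique, and the comparison repeated across instances, models, and similarity metrics.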
