Towards Best Practice in Explaining Neural Network Decisions with LRP

Within the last decade, neural-network-based predictors have demonstrated impressive, and at times super-human, capabilities. This performance often comes at the cost of an opaque prediction process, which has sparked numerous contributions in the novel field of explainable artificial intelligence (XAI). In this paper, we focus on a popular and widely used XAI method, Layer-wise Relevance Propagation (LRP). Since its initial proposition, LRP has evolved as a method, and a best practice for applying it has tacitly emerged, based, however, on human observation alone. Here, we investigate, and for the first time quantify, the effect of this current best practice on feedforward neural networks in a visual object detection setting. The results verify that the layer-dependent approach to LRP applied in recent literature better represents the model's reasoning, while also increasing the object localization and class discriminativity of LRP.
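For context, LRP explains a prediction by propagating the model's output score backwards through the network, redistributing "relevance" from each layer to the layer below until the input features are reached. Below is a minimal sketch of two commonly used decomposition rules from the LRP literature, in its standard notation (a_j is the activation of neuron j, w_{jk} the weight from j to k, and the index 0 in the sums covers the bias term; these symbols do not appear on this page, and the exact rule-to-layer assignment is the subject of the paper):

```latex
% LRP-epsilon rule, typically applied to fully connected layers:
% the relevance R_k of neuron k is redistributed to its inputs j in
% proportion to their contributions a_j * w_jk, stabilized by epsilon.
\[
  R_j = \sum_k \frac{a_j w_{jk}}{\epsilon + \sum_{0,j} a_j w_{jk}} \, R_k
\]

% LRP-alpha-beta rule, typically applied to convolutional layers:
% positive and negative contributions are redistributed separately,
% weighted by alpha and beta with alpha - beta = 1 (e.g. alpha = 1, beta = 0).
\[
  R_j = \sum_k \left(
          \alpha \, \frac{(a_j w_{jk})^{+}}{\sum_{0,j} (a_j w_{jk})^{+}}
        - \beta  \, \frac{(a_j w_{jk})^{-}}{\sum_{0,j} (a_j w_{jk})^{-}}
        \right) R_k
\]
```

The "layer-dependent" best practice evaluated in the paper (reflected in the composite model names below, e.g. LRP_CMP) applies different rules of this family to different layer types, rather than a single rule uniformly across the whole network.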

Datasets

PASCAL VOC 2012, SIXray

Results from the Paper


Task              Dataset          Model         Metric Name   Metric Value  Global Rank
Object Detection  PASCAL VOC 2012  LRP_CMP:α1+   mAP           34.66         #7
Object Detection  PASCAL VOC 2012  LRP_CMP:α2+   mAP           42.1          #6
Object Detection  SIXray           LRP_z         1 in 10 R@5   0.01347       #1

Methods


Layer-wise Relevance Propagation (LRP)