A novel approach to generate datasets with XAI ground truth to evaluate image models

With the increased use of artificial intelligence (AI), it is imperative to understand how these models work internally. This need has led to the development of a new field called eXplainable artificial intelligence (XAI), which comprises a set of techniques that allow us to theoretically determine the causes of an AI model's decisions. A central issue in XAI is how to validate work in this field, given the lack of ground truth (GT). In this study, we propose a new method to generate datasets with GT. We conducted a set of experiments comparing our GT with real model explanations and obtained excellent results, confirming that our proposed method is correct.
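
The abstract does not describe the generation procedure in detail, but the general idea of evaluating explanations against a known GT can be illustrated with a minimal, hypothetical sketch: build a synthetic image dataset in which the pixels that determine the label are known in advance (the XAI ground truth), then score an explanation such as a saliency map by how well it overlaps that ground-truth mask. All names, shapes, and the IoU-based scoring below are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch: synthetic images whose label depends only on a known
# patch, plus an IoU-style comparison of a saliency map against that GT mask.
import numpy as np

def make_sample(size=32, patch=8, rng=None):
    """Create one grayscale image whose label depends only on a known patch."""
    rng = rng or np.random.default_rng()
    img = rng.normal(0.0, 1.0, (size, size)).astype(np.float32)  # background noise
    y0, x0 = rng.integers(0, size - patch, 2)                    # patch location
    label = int(rng.integers(0, 2))                              # binary class
    # The label is encoded as the patch's mean intensity: bright -> 1, dark -> 0.
    img[y0:y0 + patch, x0:x0 + patch] += 3.0 if label == 1 else -3.0
    gt_mask = np.zeros((size, size), dtype=bool)                 # XAI ground truth
    gt_mask[y0:y0 + patch, x0:x0 + patch] = True
    return img, label, gt_mask

def explanation_iou(saliency, gt_mask, top_k=None):
    """Compare a saliency map with the GT mask via intersection-over-union."""
    top_k = top_k or int(gt_mask.sum())                          # keep |GT| top pixels
    thresh = np.partition(saliency.ravel(), -top_k)[-top_k]
    pred_mask = saliency >= thresh
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union else 0.0

if __name__ == "__main__":
    img, label, gt = make_sample(rng=np.random.default_rng(0))
    # Stand-in for a real XAI method's output: here, absolute pixel intensity.
    fake_saliency = np.abs(img)
    print(f"label={label}  IoU(explanation, GT)={explanation_iou(fake_saliency, gt):.2f}")
```

In such a setup, an explanation method applied to a model trained on this data should assign its highest relevance to the known patch, so the IoU against the GT mask gives a direct, reference-based quality score for the explanation.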
