Beyond Examples: Constructing Explanation Space for Explaining Prototypes

29 Sep 2021 · Hyungjun Joo, Seokhyeon Ha, Jae Myung Kim, Sungyeob Han, Jungwoo Lee

As deep learning has been successfully deployed in diverse applications, there is an ever-increasing need to explain its decisions. Most existing methods produce explanations with a second model that explains the first black-box model; in contrast, we propose an inherently interpretable model for more faithful explanations. Using a variational autoencoder, our method constructs an explanation space in which images that are similar in terms of human-interpretable features share similar latent representations. This explanation space provides additional explanations of the relationships between inputs and prototypes, going beyond previous classification networks that explain decisions only through distances to learned prototypes. In addition, the distances carry more intrinsic meaning thanks to VAE training techniques that regularize the latent space. Through a user study, we validate the quality of the explanation space and the additional explanations.
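The abstract is not accompanied by code here, so the following is only a minimal, hypothetical sketch of the general idea it describes: a VAE encoder maps images into a latent "explanation space", and class scores are computed from distances between the latent code and learnable prototypes, with the usual VAE terms regularizing that space. All layer sizes, names, and loss weights are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeVAEClassifier(nn.Module):
    """Sketch: VAE encoder + learnable prototypes + distance-based classification."""

    def __init__(self, in_dim=784, latent_dim=32, n_prototypes=10, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(256, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))
        # Prototypes live directly in the latent (explanation) space.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, latent_dim))
        self.classifier = nn.Linear(n_prototypes, n_classes)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.decoder(z)
        # Squared distances to prototypes serve as the explanation:
        # a small distance means the input resembles that prototype in latent space.
        dists = torch.cdist(z, self.prototypes).pow(2)
        logits = self.classifier(-dists)
        return logits, recon, mu, logvar, dists

def loss_fn(x, y, logits, recon, mu, logvar, beta=1.0):
    # Classification loss plus the standard VAE reconstruction and KL terms,
    # the latter regularizing the latent (explanation) space.
    ce = F.cross_entropy(logits, y)
    rec = F.mse_loss(recon, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return ce + rec + beta * kl
```

At inference time, the per-prototype distances returned by `forward` can be reported alongside the prediction, which is the kind of distance-plus-prototype explanation the abstract refers to.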
