Explainable artificial intelligence

199 papers with code • 0 benchmarks • 8 datasets

XAI refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the "black box" concept in machine learning, where even a system's designers cannot explain why it arrived at a specific decision. XAI may be an implementation of the social right to explanation, but it is relevant even where no legal right or regulatory requirement exists; for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. The aim of XAI is thus to explain what has been done, what is being done now, and what will be done next, and to unveil the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new hypotheses.

Most implemented papers

Explaining How Deep Neural Networks Forget by Deep Visualization

giangnguyen2412/dissect_catastrophic_forgetting 3 May 2020

Explaining the behaviors of deep neural networks, often regarded as black boxes, is critical, especially as they are being adopted across diverse aspects of human life.

Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset

etjoa003/explainable_ai 7 Sep 2020

Heatmaps are appealing because they offer an intuitive, visual way to understand a model, but assessing their quality is not straightforward.
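
As a concrete reference point, the sketch below computes the simplest heatmap in this family, a vanilla-gradient saliency map, in plain PyTorch; the model and input here are random placeholders, not the paper's synthetic dataset:

```python
# Minimal vanilla-gradient saliency sketch: the heatmap is the magnitude of
# the gradient of the top logit with respect to the input pixels.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()        # untrained weights, for illustration
x = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(x)
logits[0, logits.argmax()].backward()        # d(top-class logit) / d(input)

saliency = x.grad.abs().max(dim=1).values    # collapse RGB -> (1, 224, 224) heatmap
print(saliency.shape)
```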

Towards Rigorous Interpretations: a Formalisation of Feature Attribution

DariusAf/functional_attribution 26 Apr 2021

Feature attribution is often loosely presented as the process of selecting a subset of relevant features as the rationale for a prediction.
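
For intuition, here is the one case where attribution is unambiguous, a linear model: the attribution of feature i relative to a baseline is w_i * (x_i - baseline_i), and the scores sum exactly to the change in output (the "completeness" property). A minimal numpy sketch:

```python
import numpy as np

w, b = np.array([2.0, -1.0, 0.5]), 0.3
x = np.array([1.0, 2.0, 4.0])
baseline = np.zeros(3)

# Exact additive attribution for a linear model f(x) = w @ x + b.
attributions = w * (x - baseline)

# Completeness: attributions sum to f(x) - f(baseline).
assert np.isclose(attributions.sum(), (w @ x + b) - (w @ baseline + b))
print(attributions)   # [ 2. -2.  2.]
```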

Counterfactual Explanations as Interventions in Latent Space

FLE-ISP/CEILS 14 Jun 2021

Explainable Artificial Intelligence (XAI) is a set of techniques that enables understanding of both the technical and non-technical aspects of Artificial Intelligence (AI) systems.
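
To make the counterfactual idea concrete, the following is a generic nearest-unlike-neighbour sketch on toy data; it is not the CEILS method, which instead performs interventions in a latent space that respects causal relations among features:

```python
# Find the closest point the model labels differently: the simplest possible
# counterfactual explanation ("had the input been here instead, the
# prediction would have flipped").
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
clf = LogisticRegression().fit(X, y)

x0 = X[0]
pred0 = clf.predict(x0.reshape(1, -1))[0]

# Candidate counterfactuals: points predicted as the other class,
# ranked by L2 distance to the query.
candidates = X[clf.predict(X) != pred0]
cf = candidates[np.argmin(np.linalg.norm(candidates - x0, axis=1))]

print("query:", x0, "predicted:", pred0)
print("counterfactual:", cf, "predicted:", clf.predict(cf.reshape(1, -1))[0])
```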

Quantitative Evaluation of Explainable Graph Neural Networks for Molecular Property Prediction

biomed-AI/MolRep 1 Jul 2021

Advances in machine learning have led to graph neural network-based methods for drug discovery, yielding promising results in molecular design, chemical synthesis planning, and molecular property prediction.

Explaining deep learning models for spoofing and deepfake detection with SHapley Additive exPlanations

slundberg/shap 7 Oct 2021

Substantial progress in spoofing and deepfake detection has been made in recent years.
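
The linked repository is the general-purpose SHAP library. The paper applies it to spoofing and deepfake detector inputs; a minimal sketch of the library's workflow on tabular data (assuming `pip install shap scikit-learn`) looks like this:

```python
# SHAP assigns each feature a signed contribution to a single prediction;
# the contributions plus the expected model output sum to the actual output.
import shap
from sklearn.ensemble import GradientBoostingClassifier

X, y = shap.datasets.adult()                       # demo dataset bundled with shap
model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)              # exact, fast for tree ensembles
shap_values = explainer.shap_values(X.iloc[:100])  # (samples, features) attributions

shap.summary_plot(shap_values, X.iloc[:100])       # global view of per-feature impact
```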

Counterfactual Shapley Additive Explanations

jpmorganchase/cf-shap 27 Oct 2021

Feature attributions are a common paradigm for model explanations because they assign a single numeric score to each input feature of a model.

Improving Deep Neural Network Classification Confidence using Heatmap-based eXplainable AI

ericotjo001/explainable_ai 30 Dec 2021

This paper quantifies the quality of heatmap-based eXplainable AI (XAI) methods with respect to the image classification problem.

GAM(e) changer or not? An evaluation of interpretable machine learning models based on additive model constraints

interpretml/interpret 19 Apr 2022

The number of information systems (IS) studies dealing with explainable artificial intelligence (XAI) is growing rapidly as the field demands more transparency about the internal decision logic of machine learning (ML) models.
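
The linked interpretml library implements such constrained additive models as Explainable Boosting Machines (EBMs). A minimal sketch of the documented workflow, on a stand-in sklearn dataset:

```python
# EBMs are GAMs: the prediction is a sum of learned per-feature shape
# functions, so global and local explanations are exact rather than
# post-hoc approximations.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
ebm = ExplainableBoostingClassifier().fit(X, y)

show(ebm.explain_global())              # one panel per feature: its shape function
show(ebm.explain_local(X[:5], y[:5]))   # additive per-feature contributions per prediction
```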

From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation

rachtibat/zennit-crp 7 Jun 2022

In this work we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives and thus allows answering both the "where" and "what" questions for individual predictions.
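
CRP builds on layer-wise relevance propagation (LRP), conditioning the backward relevance pass on individual concepts (e.g. channels). The sketch below shows only the underlying LRP-epsilon backward pass for a tiny ReLU MLP in plain PyTorch; it is not the zennit-crp API itself:

```python
# LRP-epsilon for a small MLP: relevance flows backwards and is split at each
# linear layer in proportion to each input's contribution z_ij = a_i * w_ij.
import torch
import torch.nn as nn

torch.manual_seed(0)
layers = [nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3)]
x = torch.rand(1, 4)

activations = [x]                       # store each layer's input
for layer in layers:
    activations.append(layer(activations[-1]))

relevance = torch.zeros_like(activations[-1])
top = activations[-1].argmax()
relevance[0, top] = activations[-1][0, top]   # start from the winning logit

eps = 1e-6
for layer, a in zip(reversed(layers), reversed(activations[:-1])):
    if isinstance(layer, nn.Linear):
        z = a @ layer.weight.t() + layer.bias   # pre-activations of this layer
        stabilizer = eps * torch.where(z >= 0, torch.ones_like(z), -torch.ones_like(z))
        s = relevance / (z + stabilizer)
        relevance = a * (s @ layer.weight)      # redistribute relevance to inputs
    # the epsilon rule passes relevance through ReLU unchanged

print(relevance)   # per-input relevance ("where"); CRP additionally
                   # conditions this pass on concepts ("what")
```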