Explainable artificial intelligence

204 papers with code • 0 benchmarks • 8 datasets

Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence (AI) whose results can be understood by humans. It contrasts with the "black box" concept in machine learning, where even the system's designers cannot explain why the AI arrived at a specific decision. XAI may be an implementation of the social right to explanation, but it is relevant even where there is no legal right or regulatory requirement: for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. In this sense, XAI aims to explain what has been done, what is being done now, and what will be done next, and to reveal the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.

Most implemented papers

TE2Rules: Explaining Tree Ensembles using Rules

linkedin/TE2Rules 29 Jun 2022

Tree Ensemble (TE) models, such as Gradient Boosted Trees, often achieve optimal performance on tabular datasets, yet their lack of transparency poses challenges for comprehending their decision logic.

Explanations Based on Item Response Theory (eXirt): A Model-Specific Method to Explain Tree-Ensemble Model in Trust Perspective

josesousaribeiro/exirt-xai-pipeline 18 Oct 2022

In recent years, XAI researchers have formalized proposals and developed new methods to explain black-box models, yet there is no general consensus in the community on which method to use for a given model, and the choice is often driven largely by a method's popularity.

Using Explainable AI and Transfer Learning to understand and predict the maintenance of Atlantic blocking with limited observational data

hzhang-math/Blocking_SHAP_TL 12 Apr 2024

This work demonstrates the potential for machine learning methods to extract meaningful precursors of extreme weather events and achieve better prediction using limited observational data.
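
The repository name points to SHAP-based attribution; as a hedged illustration (not the paper's actual pipeline or data), the sketch below fits a tree model on synthetic predictors and uses the shap package's TreeExplainer to rank feature contributions.

```python
import numpy as np
import shap                                    # assumed installed: pip install shap
from sklearn.ensemble import RandomForestRegressor

# Toy stand-in for the paper's setting: predict a scalar target from a few
# climate-like predictors, then attribute predictions to features with SHAP.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = 2.0 * X[:, 0] - X[:, 3] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)          # exact SHAP values for tree models
shap_values = explainer.shap_values(X[:50])    # shape (50, 5): per-sample attributions

# Mean absolute SHAP value per feature gives a rough global importance ranking.
print(np.abs(shap_values).mean(axis=0))
```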

Visual Interpretability for Deep Learning: a Survey

JepsonWong/CNN_Visualization 2 Feb 2018

This paper reviews recent studies in understanding neural-network representations and learning neural networks with interpretable/disentangled middle-layer representations.

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

sooeun67/xai 22 Oct 2019

In recent years, Artificial Intelligence (AI) has gained notable momentum and may deliver on high expectations across many application sectors.

Towards Best Practice in Explaining Neural Network Decisions with LRP

sebastian-lapuschkin/lrp_toolbox 22 Oct 2019

In this paper, we focus on a popular and widely used method of XAI, the Layer-wise Relevance Propagation (LRP).
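
As a minimal sketch of what LRP computes, the NumPy example below applies the LRP-epsilon rule to a toy two-layer ReLU network; the network, weights, and epsilon value are illustrative assumptions, not the lrp_toolbox API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network with random weights and zero biases (illustrative only).
W1, W2 = rng.normal(size=(4, 6)), rng.normal(size=(6, 3))
x = rng.normal(size=4)

# Forward pass, keeping the activations needed for relevance propagation.
a1 = np.maximum(0.0, x @ W1)
out = a1 @ W2

def lrp_eps(a, W, R, eps=1e-6):
    """LRP-epsilon rule: redistribute relevance R from a layer's outputs to its
    inputs in proportion to each input's contribution a_j * w_jk."""
    z = a @ W                                     # pre-activations of the layer
    z = z + eps * np.where(z >= 0, 1.0, -1.0)     # stabiliser against division by zero
    s = R / z
    return a * (W @ s)

# Start from the logit of the predicted class and propagate relevance backwards.
R_out = np.zeros_like(out)
R_out[out.argmax()] = out[out.argmax()]
R_hidden = lrp_eps(a1, W2, R_out)
R_input = lrp_eps(x, W1, R_hidden)
print("input relevances:", R_input)
```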

bLIMEy: Surrogate Prediction Explanations Beyond LIME

So-Cool/bLIMEy 29 Oct 2019

Surrogate explainers of black-box machine learning predictions are of paramount importance in the field of eXplainable Artificial Intelligence since they can be applied to any type of data (images, text, and tabular), are model-agnostic, and are post-hoc (i.e., they can be retrofitted).
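
A minimal sketch of the surrogate idea (LIME-style, not the bLIMEy algorithm itself): sample around a query instance, weight the samples by proximity, and fit a weighted linear model to the black-box output. The model, proximity kernel, and sampling scale below are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

# Black-box model on toy tabular data (illustrative stand-in).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)
black_box = GradientBoostingClassifier().fit(X, y)

def local_surrogate(instance, predict_proba, n_samples=2000, scale=0.5):
    """LIME-style local surrogate: sample around the instance, weight samples by
    proximity, and fit a weighted linear model to the black-box probabilities."""
    perturbed = instance + rng.normal(scale=scale, size=(n_samples, instance.size))
    target = predict_proba(perturbed)[:, 1]
    distances = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(distances ** 2) / (2 * scale ** 2))    # RBF proximity kernel
    surrogate = Ridge(alpha=1.0).fit(perturbed, target, sample_weight=weights)
    return surrogate.coef_                                     # local feature effects

print(local_surrogate(X[0], black_box.predict_proba))
```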

Rule Extraction in Unsupervised Anomaly Detection for Model Explainability: Application to OneClass SVM

AlbertoBarbado/unsupervised-outlier-transparency 21 Nov 2019

In this paper, we evaluate several rule extraction techniques over OneClass SVM models, as well as present alternative designs for some of those algorithms.
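
As a hedged illustration of the general idea (a generic surrogate-tree approach, not the specific techniques evaluated in the paper), the sketch below fits scikit-learn's OneClassSVM on toy data and extracts if-then rules from a shallow decision tree trained to mimic its anomaly labels.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: a dense Gaussian cluster plus a handful of scattered outliers.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(950, 2)), rng.uniform(-6, 6, size=(50, 2))])

ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(X)
labels = (ocsvm.predict(X) == -1).astype(int)    # 1 = flagged as anomaly

# A shallow surrogate decision tree mimics the One-Class SVM's decisions; its
# paths read off as human-readable if-then rules for the anomaly region.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
print(export_text(surrogate, feature_names=["x1", "x2"]))
```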

What Would You Ask the Machine Learning Model? Identification of User Needs for Model Explanations Based on Human-Model Conversations

ModelOriented/xaibot 7 Feb 2020

To our surprise, the development of explanation methods is driven by model developers rather than by a study of the needs of human end users.