Explainable artificial intelligence

199 papers with code • 0 benchmarks • 8 datasets

XAI refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the concept of the "black box" in machine learning, where even a model's designers cannot explain why it arrived at a specific decision. XAI may be an implementation of the social right to explanation, but it is relevant even where no legal right or regulatory requirement exists: for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. The aim of XAI is thus to explain what has been done, what is being done now, and what will be done next, and to unveil the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.
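As a concrete illustration of what an "explanation" can look like in practice, the sketch below computes permutation feature importance, one of the simplest model-agnostic XAI techniques: each feature is shuffled in turn, and the resulting drop in held-out accuracy measures how much the model relies on it. The dataset, classifier, and hyperparameters are arbitrary stand-ins chosen for illustration, not taken from any of the papers listed below.

    # Minimal sketch of a model-agnostic explanation via permutation importance.
    # Assumes scikit-learn is installed; model and data are illustrative stand-ins.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Shuffle each feature on held-out data and record the drop in accuracy;
    # a large drop means the model depends heavily on that feature.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)

    for idx in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f} "
              f"+/- {result.importances_std[idx]:.3f}")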

Latest papers with no code

Revealing Vulnerabilities of Neural Networks in Parameter Learning and Defense Against Explanation-Aware Backdoors

no code yet • 25 Mar 2024

Explainable Artificial Intelligence (XAI) strategies play a crucial part in increasing the understanding and trustworthiness of neural networks.

The Anatomy of Adversarial Attacks: Concept-based XAI Dissection

no code yet • 25 Mar 2024

Adversarial attacks (AAs) pose a significant threat to the reliability and robustness of deep neural networks.
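For context on what such an attack involves, here is a minimal sketch of the classic Fast Gradient Sign Method (FGSM), which perturbs an input in the direction of the loss gradient's sign; the model, input, and label are placeholders, and the paper itself may analyze different attack families.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Placeholder model, input, and label chosen for illustration only.
    model = models.resnet18(weights=None).eval()
    x = torch.rand(1, 3, 224, 224, requires_grad=True)
    label = torch.tensor([0])

    # Backpropagate the classification loss to the input pixels.
    loss = F.cross_entropy(model(x), label)
    loss.backward()

    # FGSM step: nudge each pixel in the direction that increases the loss.
    epsilon = 8 / 255
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()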

Enhancing UAV Security Through Zero Trust Architecture: An Advanced Deep Learning and Explainable AI Analysis

no code yet • 25 Mar 2024

In the dynamic and ever-changing domain of Unmanned Aerial Vehicles (UAVs), guaranteeing resilient and transparent security measures is of the utmost importance.

The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI

no code yet • 23 Mar 2024

Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI, especially within the healthcare industry.
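As background on the technique in question, the sketch below computes a basic gradient saliency map: the gradient of the predicted class score with respect to the input pixels highlights which pixels most influence the prediction. The model and input are placeholders, not the setup studied in the paper.

    import torch
    import torchvision.models as models

    # Placeholder model and input chosen for illustration only.
    model = models.resnet18(weights=None).eval()
    x = torch.rand(1, 3, 224, 224, requires_grad=True)

    # Backpropagate the top class score to the input pixels.
    scores = model(x)
    scores[0, scores.argmax()].backward()

    # Saliency map: maximum absolute gradient across the color channels.
    saliency = x.grad.abs().max(dim=1).values.squeeze()  # shape (224, 224)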

A survey on Concept-based Approaches For Model Improvement

no code yet • 21 Mar 2024

Some recent works also use concepts for model improvement in terms of interpretability and generalization.

How Human-Centered Explainable AI Interfaces Are Designed and Evaluated: A Systematic Survey

no code yet • 21 Mar 2024

Despite its technological breakthroughs, eXplainable Artificial Intelligence (XAI) research has had limited success in producing the effective explanations needed by users.

What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks

no code yet • 19 Mar 2024

Despite significant progress, evaluation of explainable artificial intelligence remains elusive and challenging.

Deciphering AutoML Ensembles: cattleia's Assistance in Decision-Making

no code yet • 19 Mar 2024

In many applications, model ensembling proves to be better than a single predictive model.
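The claim that an ensemble often beats its individual members is easy to demonstrate; the sketch below compares two base classifiers against a soft-voting combination of them on synthetic data. The models and data are generic stand-ins and are unrelated to the cattleia tool itself.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=1000, random_state=0)

    # Average predicted probabilities from heterogeneous base models.
    ensemble = VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("gb", GradientBoostingClassifier(random_state=0))],
        voting="soft",
    )

    for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                      ("gradient boosting", GradientBoostingClassifier(random_state=0)),
                      ("soft-voting ensemble", ensemble)]:
        print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))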

Safety Implications of Explainable Artificial Intelligence in End-to-End Autonomous Driving

no code yet • 18 Mar 2024

The end-to-end learning pipeline is gradually creating a paradigm shift in the ongoing development of highly autonomous vehicles, largely due to advances in deep learning, the availability of large-scale training datasets, and improvements in integrated sensor devices.

Towards a general framework for improving the performance of classifiers using XAI methods

no code yet • 15 Mar 2024

Modern Artificial Intelligence (AI) systems, especially Deep Learning (DL) models, pose challenges for AI researchers in understanding their inner workings.