Interpretable Machine Learning

189 papers with code • 1 benchmark • 4 datasets

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
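
As a minimal illustration of such explanation methods (a generic sketch, not tied to any paper listed here), the snippet below computes permutation feature importance for a black-box classifier with scikit-learn; the dataset and model choices are assumptions made only for demonstration.

```python
# Minimal sketch: explaining a black-box model with permutation feature
# importance (scikit-learn). Dataset and model are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in score:
# a large drop suggests the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, mean_imp in top:
    print(f"{name}: {mean_imp:.3f}")
```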

Libraries

Use these libraries to find Interpretable Machine Learning models and implementations.
See all 10 libraries.

Latest papers with no code

Explaining Kernel Clustering via Decision Trees

no code yet • 15 Feb 2024

Despite the growing popularity of explainable and interpretable machine learning, there is still surprisingly limited work on inherently interpretable clustering methods.
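
A generic sketch of the idea named in the title, not the paper's algorithm: fit a kernel-based clustering, then train a shallow decision tree as an interpretable surrogate for the cluster assignments. The dataset, clustering settings, and tree depth below are illustrative assumptions.

```python
# Generic surrogate-tree sketch (not the paper's method): approximate
# cluster assignments with a shallow decision tree so each cluster gets
# a small set of human-readable threshold rules.
from sklearn.cluster import SpectralClustering
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, _ = load_iris(return_X_y=True)

# Kernel-based clustering (RBF affinity) produces the labels to explain.
labels = SpectralClustering(n_clusters=3, affinity="rbf", random_state=0).fit_predict(X)

# A depth-limited tree acts as an interpretable surrogate for the clustering.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, labels)
print(export_text(tree, feature_names=load_iris().feature_names))
print("surrogate agreement:", (tree.predict(X) == labels).mean())
```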

Large Language Model-Based Interpretable Machine Learning Control in Building Energy Systems

no code yet • 14 Feb 2024

The potential of Machine Learning Control (MLC) in HVAC systems is hindered by its opaque nature and inference mechanisms, which are challenging for users and modelers to fully comprehend and ultimately lead to a lack of trust in MLC-based decision-making.

Challenges in Variable Importance Ranking Under Correlation

no code yet • 5 Feb 2024

Recently, several adjustments to marginal permutation utilizing feature knockoffs were proposed to address this issue, such as the variable importance measure known as conditional predictive impact (CPI).
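
The sketch below illustrates the underlying problem rather than CPI itself: with a nearly duplicated predictor, marginal permutation importance assigns weight to a feature the target does not actually depend on. The synthetic data and the random-forest model are assumptions for illustration.

```python
# Illustrative sketch of marginal permutation importance under correlation
# (not an implementation of CPI). Synthetic data; only x1 drives the target.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)          # near-duplicate of x1
x3 = rng.normal(size=n)                      # independent noise feature
y = 2.0 * x1 + rng.normal(scale=0.1, size=n)

X = np.column_stack([x1, x2, x3])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, val in zip(["x1", "x2 (correlated)", "x3 (noise)"], imp.importances_mean):
    print(f"{name}: {val:.3f}")
# The correlated copy x2 typically receives substantial importance even
# though y depends only on x1; this is the issue conditional measures
# such as CPI aim to address.
```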

Reducing Optimism Bias in Incomplete Cooperative Games

no code yet • 2 Feb 2024

Cooperative game theory has diverse applications in contemporary artificial intelligence, including domains like interpretable machine learning, resource allocation, and collaborative decision-making.
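
For context, a standard textbook computation rather than the paper's incomplete-game method: the snippet below computes exact Shapley values for a tiny cooperative game, the quantity that underlies SHAP-style feature attributions in interpretable machine learning. The toy characteristic function is an assumption.

```python
# Exact Shapley values for a small cooperative game (toy value function).
from itertools import combinations
from math import factorial

players = ["A", "B", "C"]

def value(coalition: frozenset) -> float:
    # Toy characteristic function: A and B are complementary, C adds a flat 1.
    v = 0.0
    if {"A", "B"} <= coalition:
        v += 4.0
    if "C" in coalition:
        v += 1.0
    return v

def shapley(player: str) -> float:
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for k in range(n):
        for subset in combinations(others, k):
            s = frozenset(subset)
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (value(s | {player}) - value(s))
    return total

for p in players:
    print(p, shapley(p))  # A and B each get 2.0, C gets 1.0
```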

Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable?

no code yet • 24 Jan 2024

Recently, interpretable machine learning has re-explored concept bottleneck models (CBM), which first predict high-level concepts from the raw features and then predict the target variable from the predicted concepts.
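
A minimal sketch of that two-stage structure, with synthetic data and simple logistic models as assumptions rather than the paper's setup: stage 1 predicts concepts from raw features, stage 2 predicts the target only from the predicted concepts, which is what makes concept-level interventions possible.

```python
# Minimal concept-bottleneck sketch (synthetic data and concepts are
# illustrative assumptions): x -> concepts -> y, so correcting a concept
# at test time changes the downstream prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.normal(size=(n, d))

# Two binary "concepts" derived from the raw features; the label depends
# only on the concepts.
concepts = np.column_stack([X[:, 0] + X[:, 1] > 0, X[:, 2] > 0.5]).astype(int)
y = (concepts.sum(axis=1) >= 1).astype(int)

# Stage 1: one classifier per concept (x -> c).
concept_models = [LogisticRegression().fit(X, concepts[:, j])
                  for j in range(concepts.shape[1])]
c_hat = np.column_stack([m.predict(X) for m in concept_models])

# Stage 2: label classifier that sees only the predicted concepts (c -> y).
label_model = LogisticRegression().fit(c_hat, y)
print("accuracy via the concept bottleneck:", label_model.score(c_hat, y))

# Intervention: a human corrects concept 0 to its true value at test time.
c_intervened = c_hat.copy()
c_intervened[:, 0] = concepts[:, 0]
print("accuracy after intervening on concept 0:", label_model.score(c_intervened, y))
```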

Interactive Mars Image Content-Based Search with Interpretable Machine Learning

no code yet • 19 Jan 2024

The NASA Planetary Data System (PDS) hosts millions of images of planets, moons, and other bodies collected throughout many missions.

Enhancing Dynamical System Modeling through Interpretable Machine Learning Augmentations: A Case Study in Cathodic Electrophoretic Deposition

no code yet • 16 Jan 2024

We introduce a comprehensive data-driven framework aimed at enhancing the modeling of physical systems, employing inference techniques and machine learning enhancements.

X Hacking: The Threat of Misguided AutoML

no code yet • 16 Jan 2024

Explainable AI (XAI) and interpretable machine learning methods help to build trust in model predictions and derived insights, yet also present a perverse incentive for analysts to manipulate XAI metrics to support pre-specified conclusions.

Enabling Smart Retrofitting and Performance Anomaly Detection for a Sensorized Vessel: A Maritime Industry Experience

no code yet • 30 Dec 2023

This study presents a deep learning-driven anomaly detection system augmented with interpretable machine learning models for identifying performance anomalies in an industrial sensorized vessel, called TUCANA.

ProvFL: Client-Driven Interpretability of Global Model Predictions in Federated Learning

no code yet • 21 Dec 2023

Regardless of the quality of the global model, or whether it has a fault, understanding the model's origin is equally important for debugging, interpretability, and explainability in federated learning.