Interpretable Machine Learning

187 papers with code • 1 benchmark • 4 datasets

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models
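To make the idea concrete, below is a minimal, illustrative sketch of one common post-hoc explanation technique, permutation feature importance, using scikit-learn. The dataset and model are placeholders for illustration and are not tied to any of the papers listed below.

```python
# Minimal sketch of a common post-hoc explanation technique: permutation
# feature importance with scikit-learn. Dataset and model are illustrative
# placeholders only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy;
# large drops indicate features the model genuinely relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```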

Libraries

Use these libraries to find Interpretable Machine Learning models and implementations
See all 10 libraries.

Latest papers with no code

Online Learning of Decision Trees with Thompson Sampling

no code yet • 9 Apr 2024

Recent breakthroughs addressed this suboptimality issue in the batch setting, but no such work has considered the online setting with data arriving in a stream.
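As generic background on the online setting described above (and not the paper's decision-tree algorithm), the following sketch shows Thompson sampling on a stream of Bernoulli rewards, where the posterior is updated one observation at a time.

```python
import random

# Minimal sketch of Thompson sampling on a stream of Bernoulli rewards.
# Illustrates the online principle only; the paper applies Thompson sampling
# to decision-tree learning, which is not shown here.
n_arms = 3
true_probs = [0.3, 0.5, 0.7]          # hypothetical environment, unknown to the learner
alpha = [1.0] * n_arms                # Beta posterior parameters (successes + 1)
beta = [1.0] * n_arms                 # Beta posterior parameters (failures + 1)

for t in range(10_000):               # data arrives one observation at a time
    # Sample a plausible success rate for each arm from its posterior
    samples = [random.betavariate(alpha[i], beta[i]) for i in range(n_arms)]
    arm = max(range(n_arms), key=lambda i: samples[i])
    reward = 1 if random.random() < true_probs[arm] else 0
    # Update the chosen arm's posterior with the observed reward
    alpha[arm] += reward
    beta[arm] += 1 - reward

print("posterior means:", [round(alpha[i] / (alpha[i] + beta[i]), 3) for i in range(n_arms)])
```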

Cycle Life Prediction for Lithium-ion Batteries: Machine Learning and More

no code yet • 5 Apr 2024

Prediction of battery cycle life and estimation of aging states are important to accelerate battery R&D and testing, and to further the understanding of how batteries degrade.

Comprehensible Artificial Intelligence on Knowledge Graphs: A survey

no code yet • 4 Apr 2024

Thus, in this survey we make a case for Comprehensible Artificial Intelligence on Knowledge Graphs, consisting of Interpretable Machine Learning on Knowledge Graphs and Explainable Artificial Intelligence on Knowledge Graphs.

Interpretable Machine Learning for Weather and Climate Prediction: A Survey

no code yet • 24 Mar 2024

Advanced machine learning models have recently achieved high predictive accuracy for weather and climate prediction.

Self-Attention Based Semantic Decomposition in Vector Symbolic Architectures

no code yet • 20 Mar 2024

In this work, we introduce a new variant of the resonator network based on self-attention update rules for the iterative search problem.
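For context, the sketch below implements the classical resonator-network update for factoring a bound hypervector, assuming bipolar vectors and elementwise (Hadamard) binding; the self-attention-based update rules introduced in the paper are not reproduced here.

```python
import numpy as np

# Classical resonator-network iteration for a Vector Symbolic Architecture
# with bipolar vectors and Hadamard binding. Standard background only,
# not the paper's self-attention variant.
rng = np.random.default_rng(0)
D, M = 1000, 20                                     # vector dimension, codebook size

def sgn(v):
    return np.where(v >= 0, 1, -1)                  # sign with ties broken to +1

A, B, C = (rng.choice([-1, 1], size=(M, D)) for _ in range(3))

# Compose a target vector from one (unknown) codevector per factor
ia, ib, ic = rng.integers(M, size=3)
s = A[ia] * B[ib] * C[ic]

# Initialise each factor estimate as the superposition of its codebook
a_hat, b_hat, c_hat = sgn(A.sum(0)), sgn(B.sum(0)), sgn(C.sum(0))

for _ in range(50):
    # Unbind the other factors' current estimates, then clean up against
    # the codebook (project onto the codevectors and back).
    a_hat = sgn(A.T @ (A @ (s * b_hat * c_hat)))
    b_hat = sgn(B.T @ (B @ (s * a_hat * c_hat)))
    c_hat = sgn(C.T @ (C @ (s * a_hat * b_hat)))

print("recovered indices:", (A @ a_hat).argmax(), (B @ b_hat).argmax(), (C @ c_hat).argmax())
print("true indices:     ", ia, ib, ic)
```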

What makes a small-world network? Leveraging machine learning for the robust prediction and classification of networks

no code yet • 20 Mar 2024

The ability to simulate realistic networks based on empirical data is an important task across scientific disciplines, from epidemiology to computer science.
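As background on the small-world property itself (not the paper's machine-learning classifier), the sketch below compares a network's clustering and average path length against a size-matched random graph using networkx; the Watts-Strogatz graph is just a hypothetical example network.

```python
import networkx as nx

# Classic small-world diagnostic: clustering far above a random baseline
# while the average path length stays comparably short.
def avg_path_length(H):
    # Guard against disconnected graphs by using the largest component
    giant = H.subgraph(max(nx.connected_components(H), key=len))
    return nx.average_shortest_path_length(giant)

G = nx.connected_watts_strogatz_graph(n=500, k=10, p=0.05, seed=0)   # example network
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)

print(f"clustering:  {nx.average_clustering(G):.3f} vs random {nx.average_clustering(R):.3f}")
print(f"path length: {avg_path_length(G):.2f} vs random {avg_path_length(R):.2f}")
```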

A Concept-based Interpretable Model for the Diagnosis of Choroid Neoplasias using Multimodal Data

no code yet • 8 Mar 2024

Diagnosing rare diseases presents a common challenge in clinical practice, necessitating the expertise of specialists for accurate identification.

Forecasting SEP Events During Solar Cycles 23 and 24 Using Interpretable Machine Learning

no code yet • 4 Mar 2024

Prediction of Solar Energetic Particle (SEP) events garners increasing interest as space missions extend beyond Earth's protective magnetosphere.

LCEN: A Novel Feature Selection Algorithm for Nonlinear, Interpretable Machine Learning Models

no code yet • 27 Feb 2024

Interpretable architectures can have advantages over black-box architectures, and interpretability is essential for the application of machine learning in critical settings, such as aviation or medicine.
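As generic background on sparse, interpretable feature selection (not the paper's LCEN algorithm), here is a brief elastic-net example in scikit-learn; the synthetic dataset is a placeholder.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import StandardScaler

# Sparse feature selection with an elastic net: cross-validated L1/L2
# regularisation drives uninformative coefficients to exactly zero,
# leaving a small, interpretable set of features.
X, y = make_regression(n_samples=200, n_features=30, n_informative=5,
                       noise=5.0, random_state=0)
X = StandardScaler().fit_transform(X)

model = ElasticNetCV(l1_ratio=[0.5, 0.9, 1.0], cv=5, random_state=0).fit(X, y)

selected = np.flatnonzero(model.coef_)
print(f"kept {selected.size} of {X.shape[1]} features:", selected.tolist())
```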

Explaining Kernel Clustering via Decision Trees

no code yet • 15 Feb 2024

Despite the growing popularity of explainable and interpretable machine learning, there is still surprisingly limited work on inherently interpretable clustering methods.
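For context on the general idea of tree-based explanations for clustering, the sketch below fits a shallow decision tree to reproduce a clustering's assignments, yielding human-readable split rules. This is a generic surrogate-tree illustration, not the kernel-clustering construction from the paper.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Generic surrogate-tree explanation of a clustering: cluster the data, then
# train a shallow decision tree to mimic the cluster assignments.
data = load_iris()
X = data.data

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, labels)
agreement = (tree.predict(X) == labels).mean()
print(f"tree reproduces the clustering on {agreement:.0%} of points")
print(export_text(tree, feature_names=list(data.feature_names)))
```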