Interpretable Machine Learning
187 papers with code • 1 benchmark • 4 datasets
The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.
Source: Assessing the Local Interpretability of Machine Learning Models
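As a minimal illustration of the idea of explaining a model's predictions (a sketch only, not drawn from any paper listed below): for a linear model, every prediction decomposes exactly into per-feature contributions, so the explanation is additive and can be read off directly. The function name and toy numbers are illustrative.

```python
# Sketch: additive explanation of a linear model's prediction.
# Each contribution is coefficient * feature value; the prediction
# is the bias plus the sum of contributions. Illustrative only.

def explain_linear_prediction(weights, bias, x):
    """Return (prediction, per-feature contributions) for a linear model."""
    contributions = [w * xi for w, xi in zip(weights, x)]
    prediction = bias + sum(contributions)
    return prediction, contributions

# Toy example with two features:
weights = [2.0, -1.0]
bias = 0.5
x = [3.0, 4.0]
pred, contribs = explain_linear_prediction(weights, bias, x)
# pred = 0.5 + 6.0 - 4.0 = 2.5; contribs = [6.0, -4.0]
```

Black-box models need more work (e.g., surrogate models or feature-attribution methods), but the goal is the same: attribute each decision to understandable pieces.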
Libraries
Use these libraries to find Interpretable Machine Learning models and implementations.
Latest papers with no code
Online Learning of Decision Trees with Thompson Sampling
Recent breakthroughs addressed this suboptimality issue in the batch setting, but no such work has considered the online setting with data arriving in a stream.
Cycle Life Prediction for Lithium-ion Batteries: Machine Learning and More
Prediction of battery cycle life and estimation of aging states are important to accelerate battery R&D and testing, and to further the understanding of how batteries degrade.
Comprehensible Artificial Intelligence on Knowledge Graphs: A survey
Thus, in this survey we make the case for Comprehensible Artificial Intelligence on Knowledge Graphs, consisting of Interpretable Machine Learning on Knowledge Graphs and Explainable Artificial Intelligence on Knowledge Graphs.
Interpretable Machine Learning for Weather and Climate Prediction: A Survey
Advanced machine learning models have recently achieved high predictive accuracy for weather and climate prediction.
Self-Attention Based Semantic Decomposition in Vector Symbolic Architectures
In this work, we introduce a new variant of the resonator network that uses self-attention-based update rules in the iterative search problem.
What makes a small-world network? Leveraging machine learning for the robust prediction and classification of networks
The ability to simulate realistic networks based on empirical data is an important task across scientific disciplines, from epidemiology to computer science.
A Concept-based Interpretable Model for the Diagnosis of Choroid Neoplasias using Multimodal Data
Diagnosing rare diseases presents a common challenge in clinical practice, necessitating the expertise of specialists for accurate identification.
Forecasting SEP Events During Solar Cycles 23 and 24 Using Interpretable Machine Learning
Prediction of Solar Energetic Particle (SEP) events garners increasing interest as space missions extend beyond Earth's protective magnetosphere.
LCEN: A Novel Feature Selection Algorithm for Nonlinear, Interpretable Machine Learning Models
Interpretable architectures can have advantages over black-box architectures, and interpretability is essential for the application of machine learning in critical settings, such as aviation or medicine.
Explaining Kernel Clustering via Decision Trees
Despite the growing popularity of explainable and interpretable machine learning, there is still surprisingly limited work on inherently interpretable clustering methods.