Interpretable Machine Learning
189 papers with code • 1 benchmark • 4 datasets
The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.
Source: Assessing the Local Interpretability of Machine Learning Models
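As a concrete illustration of this kind of post-hoc explanation, the sketch below computes permutation feature importance for a fitted model. It is a minimal example assuming scikit-learn and its bundled diabetes dataset; the model choice is arbitrary and not drawn from any paper listed below.

```python
# Minimal sketch of one common post-hoc explanation method:
# permutation feature importance. Assumes scikit-learn is installed;
# the dataset and model used here are illustrative only.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test score;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean, std in zip(X.columns, result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.4f} +/- {std:.4f}")
```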
Libraries
Use these libraries to find Interpretable Machine Learning models and implementations.
Latest papers
Modelling wildland fire burn severity in California using a spatial Super Learner approach
We develop a machine learning model to predict post-fire burn severity using pre-fire remotely sensed data.
Neural Network Pruning by Gradient Descent
The rapid increase in the parameters of deep learning models has led to significant costs, challenging computational efficiency and model interpretability.
LymphoML: An interpretable artificial intelligence-based method identifies morphologic features that correlate with lymphoma subtype
The accurate classification of lymphoma subtypes using hematoxylin and eosin (H&E)-stained tissue is complicated by the wide range of morphological features these cancers can exhibit.
An interpretable clustering approach to safety climate analysis: examining driver group distinction in safety climate perceptions
While existing data-driven safety climate studies have made remarkable progress, clustering employees by their safety climate perceptions is an innovative approach that has not been extensively explored in research.
Climate Change Impact on Agricultural Land Suitability: An Interpretable Machine Learning-Based Eurasia Case Study
This study is a pioneering effort to use machine learning methods to assess the impact of climate change on agricultural land suitability under various carbon emissions scenarios.
Hyperspectral Blind Unmixing using a Double Deep Image Prior
With the rise of machine learning, hyperspectral image (HSI) unmixing problems have been tackled using learning-based methods.
Interpreting and Correcting Medical Image Classification with PIP-Net
We conclude that part-prototype models are promising for medical applications due to their interpretability and potential for advanced model debugging.
A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI
This paper provides an in-depth analysis of using perturbations to evaluate attributions extracted from time series models.
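The perturbation idea analyzed in this paper can be sketched in a few lines: occlude the timesteps that an attribution method ranks as most important, then measure how much the model's prediction changes. Below is a minimal, framework-agnostic NumPy sketch; `model` and `attribution` are hypothetical placeholders, and replacing top-ranked values with zero is only one of several perturbation strategies studied in this line of work.

```python
import numpy as np

def perturbation_score(model, x, attribution, fraction=0.1, baseline=0.0):
    """Perturb the most-attributed timesteps of a univariate series `x`
    and return the change in model output. `model` maps a 1-D array to a
    scalar prediction; `attribution` holds one relevance score per timestep.
    Both are hypothetical placeholders for this sketch."""
    k = max(1, int(fraction * len(x)))
    top_idx = np.argsort(attribution)[-k:]      # most relevant timesteps
    x_perturbed = x.copy()
    x_perturbed[top_idx] = baseline             # occlude them
    return abs(model(x) - model(x_perturbed))   # faithful attributions -> large change

# Toy usage: a model that sums the series, with attributions equal to |x|.
x = np.sin(np.linspace(0, 6, 100))
print(perturbation_score(lambda s: s.sum(), x, np.abs(x)))
```

A faithful attribution method should produce a larger output change than perturbing randomly chosen timesteps, which is the kind of comparison such evaluations build on.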
Worth of knowledge in deep learning
Our model-agnostic framework can be applied to a variety of common network architectures, providing a comprehensive understanding of the role of prior knowledge in deep learning models.
Explainable Representation Learning of Small Quantum States
The insights from this study serve as a proof of concept for interpretable machine learning of quantum states.