
Interpretable Machine Learning

46 papers with code · Methodology

The goal of Interpretable Machine Learning is to allow oversight and understanding of machine-learned decisions. Much of the work in Interpretable Machine Learning has come in the form of devising methods to better explain the predictions of machine learning models.

Source: Assessing the Local Interpretability of Machine Learning Models

Benchmarks

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

ProtoAttend: Attention-Based Prototypical Learning

17 Feb 2019 google-research/google-research

We propose a novel inherently interpretable machine learning method that bases decisions on few relevant examples that we call prototypes.

DECISION MAKING · INTERPRETABLE MACHINE LEARNING
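
The snippet above describes predictions built from a few relevant database examples. The following is a schematic numpy sketch of that prototype-attention idea only, not the paper's full architecture; the embedding size and the toy data are illustrative assumptions.

```python
# Schematic sketch: a query's prediction is a convex combination of the labels
# of database samples, weighted by attention, so a few high-weight samples
# ("prototypes") explain the decision. All sizes/data below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, n_db, n_classes = 16, 100, 3

db_keys = rng.normal(size=(n_db, d))                              # encoded database samples
db_labels = np.eye(n_classes)[rng.integers(0, n_classes, n_db)]   # one-hot labels
query = rng.normal(size=d)                                        # encoded test sample

scores = db_keys @ query / np.sqrt(d)            # dot-product attention scores
weights = np.exp(scores - scores.max())
weights /= weights.sum()                         # softmax over the database

prediction = weights @ db_labels                 # weighted vote of the database labels
prototypes = np.argsort(weights)[-3:][::-1]      # the few most relevant examples
print(prediction, prototypes)
```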

Temporal Fusion Transformers for Interpretable Multi-horizon Time Series Forecasting

19 Dec 2019 google-research/google-research

Multi-horizon forecasting problems often contain a complex mix of inputs -- including static (i.e. time-invariant) covariates, known future inputs, and other exogenous time series that are only observed historically -- without any prior information on how they interact with the target.

INTERPRETABLE MACHINE LEARNING · TIME SERIES · TIME SERIES FORECASTING

Neural Additive Models: Interpretable Machine Learning with Neural Nets

29 Apr 2020 google-research/google-research

NAMs learn a linear combination of neural networks that each attend to a single input feature.

DECISION MAKING · INTERPRETABLE MACHINE LEARNING
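
A minimal PyTorch sketch of the additive structure described above, assuming PyTorch as the framework (the paper's reference code lives in google-research/google-research): each input feature is fed through its own small network, and the per-feature outputs are summed, so every feature's contribution can be inspected on its own.

```python
# Minimal Neural Additive Model sketch: one small MLP per feature, outputs summed.
import torch
import torch.nn as nn

class NAM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):                         # x: (batch, n_features)
        contributions = torch.cat(
            [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)], dim=1
        )                                         # one column per feature
        return contributions.sum(dim=1, keepdim=True) + self.bias, contributions

model = NAM(n_features=4)
y_hat, per_feature = model(torch.randn(8, 4))     # per_feature gives interpretable shares
```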

SmoothGrad: removing noise by adding noise

12 Jun 2017 slundberg/shap

Explaining the output of a deep network remains a challenge.

INTERPRETABLE MACHINE LEARNING
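
A sketch of the SmoothGrad idea named in the title above, assuming a PyTorch classifier (the linked repo, slundberg/shap, is a general explanation library rather than this paper's own code): average the input gradient over several noisy copies of the input to obtain a less noisy saliency map.

```python
# SmoothGrad sketch: saliency = mean input gradient over noise-perturbed inputs.
import torch

def smoothgrad(model, x, target_class, n_samples=25, noise_std=0.15):
    # x: a single input batch of shape (1, ...), not requiring grad itself.
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + noise_std * torch.randn_like(x)).requires_grad_(True)
        score = model(noisy)[0, target_class]     # scalar logit for the class of interest
        score.backward()
        grads += noisy.grad
    return grads / n_samples                      # averaged saliency map

# Illustrative call (model and image are assumed): smoothgrad(model, image, target_class=3)
```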

A Unified Approach to Interpreting Model Predictions

NeurIPS 2017 slundberg/shap

Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications.

FEATURE IMPORTANCE · INTERPRETABLE MACHINE LEARNING
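
A brief usage sketch of the linked slundberg/shap library, which implements the SHAP values introduced in this paper. TreeExplainer, shap_values, and summary_plot are real entry points of the package; the scikit-learn model and dataset here are illustrative stand-ins.

```python
# SHAP usage sketch: additive per-feature attributions for a tree ensemble.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)        # fast SHAP values for tree ensembles
shap_values = explainer.shap_values(X)       # one attribution per feature per sample
shap.summary_plot(shap_values, X)            # global view of feature importance
```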

Learning Important Features Through Propagating Activation Differences

ICML 2017 slundberg/shap

Here we present DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input.

INTERPRETABLE MACHINE LEARNING
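
A toy numpy illustration of the core idea in the DeepLIFT description above, not the full algorithm: attribute the difference between the output on an input and on a reference input back to the features. For a single linear unit, the contribution of feature i is w_i * (x_i - x_ref_i), and the contributions sum exactly to the output difference; DeepLIFT backpropagates such contributions through the whole network.

```python
# DeepLIFT-style attribution for one linear unit, relative to a reference input.
import numpy as np

w = np.array([0.8, -1.2, 0.3])        # illustrative weights
b = 0.5
x = np.array([1.0, 2.0, -1.0])        # input to explain
x_ref = np.zeros(3)                   # reference ("baseline") input

contributions = w * (x - x_ref)
delta_out = (w @ x + b) - (w @ x_ref + b)
assert np.isclose(contributions.sum(), delta_out)   # contributions account for the change
print(contributions)
```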

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

16 Feb 2016 marcotcr/lime

Despite widespread adoption, machine learning models remain mostly black boxes.

IMAGE CLASSIFICATION · INTERPRETABLE MACHINE LEARNING
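
A short usage sketch of the linked marcotcr/lime package: fit any classifier, then ask LIME for a local, linear explanation of a single prediction. LimeTabularExplainer and explain_instance are the package's actual entry points; the iris data and random forest are illustrative.

```python
# LIME usage sketch: explain one prediction of a black-box classifier.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(n_estimators=100).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data, feature_names=data.feature_names,
    class_names=data.target_names, mode="classification",
)
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())                  # (feature condition, local weight) pairs
```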

Understanding Neural Networks Through Deep Visualization

22 Jun 2015 yosinski/deep-visualization-toolbox

The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g. a live webcam stream).

INTERPRETABLE MACHINE LEARNING
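
The toolbox itself is built around Caffe; the following is a minimal PyTorch sketch of the same idea described above (an assumption, not the toolbox's API): register forward hooks to capture the activation map each convolutional layer produces for an input image.

```python
# Capture per-layer activations of a convnet with forward hooks (PyTorch sketch).
import torch
import torchvision.models as models

model = models.vgg16(weights=None).eval()     # untrained stand-in; real use loads weights
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()   # store this layer's activation map
    return hook

for name, module in model.features.named_children():
    module.register_forward_hook(make_hook(name))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))        # one forward pass fills `activations`

print({k: tuple(v.shape) for k, v in activations.items()})
```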

iNNvestigate neural networks!

13 Aug 2018 albermax/innvestigate

The presented library iNNvestigate addresses this by providing a common interface and out-of-the-box implementation for many analysis methods, including the reference implementation for PatternNet and PatternAttribution as well as for LRP-methods.

INTERPRETABLE MACHINE LEARNING
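
A brief usage sketch of the common interface mentioned above, assuming a Keras model: create_analyzer and analyze are the library's documented entry points, while the tiny model and random input below are placeholders. The library generally expects the model's pre-softmax outputs (see its documentation).

```python
# iNNvestigate usage sketch: build an analyzer for a Keras model and analyze an input.
import numpy as np
import tensorflow as tf
import innvestigate

# Tiny stand-in Keras model (illustrative); real use would load a trained network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),                 # pre-softmax scores, as the library expects
])

analyzer = innvestigate.create_analyzer("lrp.z", model)    # swap in any supported method
explanation = analyzer.analyze(np.random.rand(1, 28, 28, 1).astype("float32"))
print(explanation.shape)                       # relevance map with the input's shape
```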