1 code implementation • 15 Jul 2022 • Hugh Chen, Ian C. Covert, Scott M. Lundberg, Su-In Lee
Based on the various feature removal approaches, we describe the multiple types of Shapley value feature attributions and methods to calculate each one.
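As an illustration of the classical Shapley value formula these attribution methods build on, here is a minimal brute-force sketch (hypothetical helper names; it uses baseline replacement as the feature removal approach, which is only one of the removal approaches the paper surveys):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for the features of x under model f.
    A feature is "removed" by replacing it with its baseline value
    (baseline-replacement removal; other removal approaches exist)."""
    n = len(x)

    def value(subset):
        # Evaluate f with features outside `subset` set to the baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Weighted marginal contribution of feature i to coalition S.
                phi += w * (value(set(S) | {i}) - value(set(S)))
        phis.append(phi)
    return phis

# Toy linear model: each attribution should equal w_i * (x_i - baseline_i).
f = lambda z: 2 * z[0] + 3 * z[1] + 1.0
print(shapley_values(f, [1.0, 1.0], [0.0, 0.0]))  # → [2.0, 3.0]
```

The efficiency property of Shapley values guarantees the attributions sum to `f(x) - f(baseline)`; this exponential-time enumeration is only practical for a handful of features, which is why the methods above develop model-specific or approximate estimators.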
no code implementations • 30 Apr 2021 • Hugh Chen, Scott M. Lundberg, Su-In Lee
Local feature attribution methods are increasingly used to explain complex machine learning models.
2 code implementations • 11 May 2019 • Scott M. Lundberg, Gabriel Erion, Hugh Chen, Alex DeGrave, Jordan M. Prutkin, Bala Nair, Ronit Katz, Jonathan Himmelfarb, Nisha Bansal, Su-In Lee
3) A new set of tools for understanding global model structure based on combining many local explanations of each prediction.
8 code implementations • 12 Feb 2018 • Scott M. Lundberg, Gabriel G. Erion, Su-In Lee
A unified approach to explain the output of any machine learning model.
no code implementations • 2 Dec 2017 • Gabriel Erion, Hugh Chen, Scott M. Lundberg, Su-In Lee
We also provide a simple way to visualize the reason why a patient's risk is low or high by assigning weight to the patient's past blood oxygen values.
1 code implementation • 19 Jun 2017 • Scott M. Lundberg, Su-In Lee
Note that a newer expanded version of this paper is now available at: arXiv:1802. 03888 It is critical in many applications to understand what features are important for a model, and why individual predictions were made.