1 Jan 2021 • Yang Young Lu, Wenbo Guo, Xinyu Xing, William Noble
Saliency methods can make deep neural network predictions more interpretable by identifying a set of critical features in an input sample, such as pixels that contribute most strongly to a prediction made by an image classifier.
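The idea can be illustrated with a minimal sketch of gradient-based saliency. This is a generic illustration, not the method proposed in the paper: for a hypothetical linear classifier `f(x) = W @ x`, the gradient of the class-`c` score with respect to the input is just the row `W[c]`, so each feature's saliency is its absolute gradient magnitude.

```python
import numpy as np

# Hypothetical example: a linear classifier with random weights stands in
# for a trained model. For f(x) = W @ x, the gradient of the class-c score
# w.r.t. input feature i is W[c, i], so |W[c]| serves as a saliency map.
rng = np.random.default_rng(0)

n_features, n_classes = 8, 3
W = rng.normal(size=(n_classes, n_features))  # hypothetical trained weights
x = rng.normal(size=n_features)               # one input sample

c = int(np.argmax(W @ x))    # predicted class
saliency = np.abs(W[c])      # |d score_c / d x_i| for each feature i

# Rank features by how strongly they influence this prediction;
# for images, these indices would correspond to pixels.
top3 = np.argsort(saliency)[::-1][:3]
print("predicted class:", c)
print("top-3 critical features:", top3.tolist())
```

For deep networks the gradient is obtained by backpropagation rather than read off a weight matrix, but the output is the same kind of per-feature importance map.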