no code implementations • 26 Sep 2023 • Winston Chen, William Stafford Noble, Yang Young Lu
The complexity of deep neural networks (DNNs) makes them powerful but also makes them challenging to interpret, hindering their applicability in error-intolerant domains.
2 code implementations • bioRxiv 2023 • Melih Yilmaz, William E. Fondrie, Wout Bittremieux, Rowan Nelson, Varun Ananth, Sewoong Oh, William Stafford Noble
A fundamental challenge for any mass spectrometry-based proteomics experiment is the identification of the peptide that generated each acquired tandem mass spectrum.
1 code implementation • 3 Feb 2020 • Yang Lu, Wenbo Guo, Xinyu Xing, William Stafford Noble
Saliency methods can make deep neural network predictions more interpretable by identifying a set of critical features in an input sample, such as pixels that contribute most strongly to a prediction made by an image classifier.
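As a concrete illustration, here is a minimal sketch of one widely used saliency method, vanilla gradient saliency, written in PyTorch; `model` and `x` are placeholders for any differentiable classifier and input tensor, not the paper's specific setup.

```python
import torch

def gradient_saliency(model, x):
    """Vanilla gradient saliency: the magnitude of the gradient of the
    top predicted class score with respect to each input feature."""
    model.eval()
    x = x.detach().clone().requires_grad_(True)  # leaf tensor tracking grads
    scores = model(x)                            # forward pass, shape (1, n_classes)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()              # backprop the winning class score
    return x.grad.abs().squeeze(0)               # per-feature importance map
```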
no code implementations • 25 Sep 2019 • Yang Young Lu, Wenbo Guo, Xinyu Xing, William Stafford Noble
In this work, we propose a data-driven technique that uses distribution-preserving decoys to infer robust saliency scores in conjunction with a pre-trained convolutional neural network classifier and any off-the-shelf saliency method.
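A hedged sketch of the aggregation idea follows. The Gaussian perturbation here is only a stand-in for the paper's distribution-preserving decoy construction (plain noise is not distribution-preserving), and `saliency_fn` can be any off-the-shelf method, such as the gradient sketch above.

```python
import torch

def decoy_saliency(model, x, saliency_fn, n_decoys=20, noise_scale=0.05):
    """Aggregate an off-the-shelf saliency map over decoy copies of x.
    NOTE: Gaussian perturbation is a simplification; the paper builds
    distribution-preserving decoys, which plain noise is not."""
    maps = [saliency_fn(model, x + noise_scale * torch.randn_like(x))
            for _ in range(n_decoys)]
    stacked = torch.stack(maps)
    # Mean as the aggregated score; std flags features whose saliency is unstable.
    return stacked.mean(dim=0), stacked.std(dim=0)
```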
1 code implementation • 8 Jun 2019 • Jacob Schreiber, Jeffrey Bilmes, William Stafford Noble
This paper presents an explanation of submodular selection, an overview of the features in apricot, and applications to several data sets.
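A typical apricot workflow looks roughly like the following; the constructor arguments are recalled from apricot's documentation and may differ across versions, and the random data stands in for a real data set.

```python
import numpy as np
from apricot import FacilityLocationSelection  # pip install apricot-select

# 1,000 random points in 20 dimensions standing in for a real data set.
X = np.random.randn(1000, 20)

# Greedily select the 100 points that best "cover" the rest under a
# facility-location objective, a canonical submodular function.
selector = FacilityLocationSelection(100, metric='euclidean')
X_subset = selector.fit_transform(X)
print(X_subset.shape)  # (100, 20)
```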
no code implementations • NeurIPS 2018 • Wenruo Bai, William Stafford Noble, Jeff A. Bilmes
We study the problem of maximizing deep submodular functions (DSFs) subject to a matroid constraint.
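For context, the classical greedy baseline for monotone submodular maximization under a matroid constraint, which attains a 1/2-approximation and which DSF-specific algorithms aim to improve on, can be sketched as follows; `f` and `independent` are placeholder oracles, not the paper's algorithm.

```python
def greedy_matroid(f, ground_set, independent):
    """Greedy maximization of a monotone submodular set function f
    subject to a matroid constraint given as an independence oracle.
    Yields a 1/2-approximation for monotone submodular f."""
    S, candidates = set(), set(ground_set)
    while candidates:
        best, best_gain = None, 0.0
        for e in candidates:
            if independent(S | {e}):           # stay inside the matroid
                gain = f(S | {e}) - f(S)       # marginal gain of adding e
                if gain > best_gain:
                    best, best_gain = e, gain
        if best is None:                       # no feasible improving element
            break
        S.add(best)
        candidates.discard(best)
    return S

# Toy example: coverage objective under a uniform matroid (|S| <= 2).
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1}}
coverage = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_matroid(coverage, sets, lambda S: len(S) <= 2))  # e.g. {0, 2}
```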
1 code implementation • NeurIPS 2018 • Yang Young Lu, Yingying Fan, Jinchi Lv, William Stafford Noble
In this paper, we describe a method to increase the interpretability and reproducibility of DNNs by incorporating the idea of feature selection with a controlled error rate.
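The error-rate control is in the spirit of the knockoff filter; below is a sketch of the thresholding step, assuming per-feature knockoff statistics W_j (large positive values favor the real feature over its knockoff) have already been computed. Constructing knockoffs and pairing them with the DNN is the paper's contribution and is omitted here; the toy statistics are synthetic.

```python
import numpy as np

def knockoff_threshold(W, q=0.1):
    """Knockoff+ filter (Barber & Candes): the smallest threshold t whose
    estimated false discovery proportion is at most q."""
    for t in np.sort(np.abs(W[W != 0])):
        fdp = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp <= q:
            return t
    return np.inf  # no threshold achieves the target FDR; select nothing

# Toy statistics: 20 signal features with large positive W, 480 nulls.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.0, 500)
W[:20] += 4.0
selected = np.where(W >= knockoff_threshold(W, q=0.1))[0]
print(len(selected))  # roughly the 20 planted signal features
```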