Feature Importance
240 papers with code • 6 benchmarks • 5 datasets
Libraries
Use these libraries to find Feature Importance models and implementations.

Most implemented papers
Collection and Validation of Psychophysiological Data from Professional and Amateur Players: a Multimodal eSports Dataset
An important feature of the dataset is simultaneous data collection from five players, which facilitates the analysis of sensor data on a team level.
Feature Importance-aware Transferable Adversarial Attacks
More specifically, we obtain feature importance by introducing the aggregate gradient, which averages the gradients with respect to feature maps of the source model, computed on a batch of random transforms of the original clean image.
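The aggregate-gradient idea described above can be sketched in a toy setting: a tiny model with a feature extractor and a head, where we average the gradient of the output with respect to the feature vector over a batch of randomly perturbed copies of the input. This is a minimal NumPy illustration, not the paper's implementation; the model, the noise-based "transforms," and all names here are hypothetical, and the gradient is computed analytically for this toy head.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hypothetical feature-extractor weights
w2 = rng.normal(size=4)        # hypothetical head weights
x = np.array([0.5, -1.0, 2.0]) # "clean image" stand-in

def features(x):
    # Feature maps of the toy source model.
    return np.tanh(W1 @ x)

def head(h):
    # Nonlinear head, so the gradient w.r.t. features varies with the input.
    return np.dot(w2, h ** 2)

def grad_head_wrt_features(h):
    # Analytic gradient of head(h) with respect to the feature vector h.
    return 2.0 * w2 * h

# Aggregate gradient: average the feature-map gradients over a batch of
# random transforms of the clean input (here, small additive noise).
transforms = x + 0.1 * rng.normal(size=(8, 3))
aggregate = np.mean(
    [grad_head_wrt_features(features(xt)) for xt in transforms], axis=0
)
# `aggregate` now assigns one importance weight per feature map.
```

In the paper's setting the transforms are image augmentations and the gradients come from backpropagation through a deep network; the averaging step is the same.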
Label-Free Explainability for Unsupervised Models
Unsupervised black-box models are challenging to interpret.
Interpretable machine learning for time-to-event prediction in medicine and healthcare
Time-to-event prediction, e.g., cancer survival analysis or hospital length of stay, is a highly prominent machine learning task in medical and healthcare applications.
Interpretation of Neural Networks is Fragile
In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different interpretations.
Towards Automatic Concept-based Explanations
Interpretability has become an important topic of research as more machine learning (ML) models are deployed and widely used to make important decisions.
Sampling, Intervention, Prediction, Aggregation: A Generalized Framework for Model-Agnostic Interpretations
Model-agnostic interpretation techniques allow us to explain the behavior of any predictive model.
Feature Selection and Feature Extraction in Pattern Analysis: A Literature Review
Pattern analysis often requires a pre-processing stage for extracting or selecting features in order to help the classification, prediction, or clustering stage discriminate or represent the data in a better way.
Computationally Efficient Feature Significance and Importance for Machine Learning Models
We develop a simple and computationally efficient significance test for the features of a machine learning model.
Benchmarking Attribution Methods with Relative Feature Importance
Despite active development, quantitative evaluation of feature attribution methods remains difficult due to the lack of ground truth: we do not know which input features are in fact important to a model.