Feature Importance
248 papers with code • 6 benchmarks • 5 datasets
Most implemented papers
Feature Selection and Feature Extraction in Pattern Analysis: A Literature Review
Pattern analysis often requires a pre-processing stage for extracting or selecting features in order to help the classification, prediction, or clustering stage discriminate or represent the data in a better way.
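To make the selection/extraction distinction concrete, here is a minimal sketch (not from the paper) using scikit-learn, which is assumed to be available: feature *selection* keeps a subset of the original columns, while feature *extraction* maps them into a new, lower-dimensional space.

```python
# Illustrative sketch: selection keeps original columns,
# extraction builds new ones. scikit-learn is assumed here.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=4, random_state=0)

# Selection: keep the 4 columns with the highest ANOVA F-score.
X_sel = SelectKBest(f_classif, k=4).fit_transform(X, y)

# Extraction: project all 10 columns onto 4 principal components.
X_ext = PCA(n_components=4, random_state=0).fit_transform(X)

print(X_sel.shape, X_ext.shape)  # both (200, 4)
```

Either way, the downstream classifier sees 4 inputs instead of 10; the difference is whether those inputs are original measurements or derived combinations.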
Computationally Efficient Feature Significance and Importance for Machine Learning Models
We develop a simple and computationally efficient significance test for the features of a machine learning model.
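As a generic stand-in for this idea (not the paper's specific test), permutation importance measures how much a model's validation score drops when one feature is shuffled; a formal significance test would compare that drop against a null distribution. The sketch below assumes scikit-learn is available.

```python
# Hedged sketch: permutation-based feature importance on a toy
# regression task where only 2 of 5 features are informative.
from sklearn.datasets import make_regression
from sklearn.inspection import permutation_importance
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=5, n_informative=2,
                       noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Ridge().fit(X_tr, y_tr)

# Shuffle each feature 30 times and record the mean score drop.
r = permutation_importance(model, X_te, y_te, n_repeats=30,
                           random_state=0)
for j, (m, s) in enumerate(zip(r.importances_mean, r.importances_std)):
    print(f"feature {j}: drop {m:.3f} +/- {s:.3f}")
```

The uninformative features should show drops near zero, which is what a significance test would formalize.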
Benchmarking Attribution Methods with Relative Feature Importance
Despite active development, quantitative evaluation of feature attribution methods remains difficult due to the lack of ground truth: we do not know which input features are in fact important to a model.
Classification-Specific Parts for Improving Fine-Grained Visual Categorization
Fine-grained visual categorization is a classification task for distinguishing categories with high intra-class and small inter-class variance.
Self-attention for raw optical Satellite Time Series Classification
The amount of available Earth observation data has increased dramatically in recent years.
CXPlain: Causal Explanations for Model Interpretation under Uncertainty
Feature importance estimates that inform users about the degree to which given inputs influence the output of a predictive model are crucial for understanding, validating, and interpreting machine-learning models.
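The general idea behind error-based feature importance (sketched here as an illustration, not as CXPlain itself) is to score each input by how much the model's loss increases when that input is masked out. The helper below, including its name and `baseline` parameter, is hypothetical.

```python
# Minimal sketch of masking-based feature importance:
# importance(i) = loss with feature i masked - loss with all features.
import numpy as np

def masking_importance(predict, loss, x, y, baseline=0.0):
    """Score each feature of sample x by the loss increase when it
    is replaced with `baseline`. (Hypothetical helper.)"""
    full = loss(predict(x), y)
    scores = np.empty(x.size)
    for i in range(x.size):
        x_masked = x.copy()
        x_masked[i] = baseline
        scores[i] = loss(predict(x_masked), y) - full
    return scores

# Toy linear model: only the first two features matter.
w = np.array([2.0, -1.0, 0.0, 0.0])
predict = lambda x: float(x @ w)
loss = lambda p, t: (p - t) ** 2

x = np.array([1.0, 1.0, 1.0, 1.0])
y = predict(x)  # use the true output, so the unmasked loss is 0
print(masking_importance(predict, loss, x, y))  # [4. 1. 0. 0.]
```

Features with zero weight get zero importance, while the two informative features score in proportion to their effect on the squared error.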
Explanation by Progressive Exaggeration
As machine learning methods see greater adoption and implementation in high stakes applications such as medical image diagnosis, the need for model interpretability and explanation has become more critical.
Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI
The rise of deep learning in today's applications has created a growing need to explain a model's decisions beyond its prediction performance, in order to foster trust and accountability.
Learning to Faithfully Rationalize by Construction
In NLP this often entails extracting snippets of an input text "responsible for" corresponding model output; when such a snippet comprises tokens that indeed informed the model's prediction, it is a faithful explanation.
DBA: Distributed Backdoor Attacks against Federated Learning
We show that, compared to standard centralized backdoors, DBA is substantially more persistent and stealthy against FL on diverse datasets such as finance and image data.