Feature Importance

248 papers with code • 6 benchmarks • 5 datasets

Feature importance methods estimate how much each input feature contributes to a model's predictions, supporting model interpretation, debugging, and feature selection.
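A common model-agnostic way to estimate feature importance is permutation importance: shuffle one feature at a time and measure how much a fitted model's score degrades. The sketch below illustrates the idea with scikit-learn; the dataset and model choices are illustrative assumptions, not tied to any paper on this page.

```python
# Minimal permutation-importance sketch (illustrative dataset and model).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

rng = np.random.default_rng(0)
importances = []
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    rng.shuffle(X_perm[:, j])  # break the feature/target association
    importances.append(baseline - model.score(X_perm, y_test))

# Features whose shuffling hurts accuracy the most are the most important.
for j in np.argsort(importances)[::-1][:5]:
    print(f"feature {j}: drop in accuracy = {importances[j]:.4f}")
```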

Most implemented papers

Feature Selection and Feature Extraction in Pattern Analysis: A Literature Review

bghojogh/Feature-Extraction-Survey 7 May 2019

Pattern analysis often requires a pre-processing stage for extracting or selecting features, in order to help the subsequent classification, prediction, or clustering stage discriminate or represent the data more effectively.
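To make the selection/extraction distinction concrete, here is a minimal scikit-learn sketch: selection keeps a subset of the original columns, while extraction maps all columns into a new space. The specific dataset and methods (SelectKBest, PCA) are illustrative choices, not the survey's own code.

```python
# Feature selection (keep original columns) vs. extraction (new axes).
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)

# Selection: keep the 2 original features most associated with the labels.
X_sel = SelectKBest(score_func=f_classif, k=2).fit_transform(X, y)

# Extraction: project onto 2 new axes that combine all original features.
X_ext = PCA(n_components=2).fit_transform(X)

print(X.shape, X_sel.shape, X_ext.shape)  # (150, 4) (150, 2) (150, 2)
```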

Computationally Efficient Feature Significance and Importance for Machine Learning Models

fintechstanford/SFIT 23 May 2019

We develop a simple and computationally efficient significance test for the features of a machine learning model.
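As a rough illustration of feature-significance testing (a generic permutation-based test, not the SFIT statistic itself), one can scramble a feature and run a one-sided paired t-test on the per-sample increase in squared error:

```python
# Generic per-feature significance test: does permuting feature j
# significantly increase the model's per-sample squared error?
import numpy as np
from scipy import stats
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=5, n_informative=2,
                       random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Ridge().fit(X_tr, y_tr)

rng = np.random.default_rng(0)
base_err = (model.predict(X_te) - y_te) ** 2
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    rng.shuffle(X_perm[:, j])
    perm_err = (model.predict(X_perm) - y_te) ** 2
    # One-sided paired t-test: H1 = permuting feature j increases the error.
    t, p = stats.ttest_rel(perm_err, base_err, alternative="greater")
    print(f"feature {j}: p-value = {p:.3g}")
```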

Benchmarking Attribution Methods with Relative Feature Importance

google-research-datasets/bim 23 Jul 2019

Despite active development, quantitative evaluation of feature attribution methods remains difficult due to the lack of ground truth: we do not know which input features are in fact important to a model.
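One way to obtain ground truth is synthetic data where the truly relevant features are known by construction. The sketch below is a generic tabular stand-in for this evaluation setup (the BIM benchmark itself works on images): it scores how well an attribution method ranks the known-relevant features.

```python
# Benchmark an attribution method against known ground-truth relevance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score

k = 3
X, y = make_classification(n_samples=1000, n_features=10, n_informative=k,
                           n_redundant=0, shuffle=False, random_state=0)
truth = np.zeros(10)
truth[:k] = 1  # ground truth: only the first k features matter

model = GradientBoostingClassifier(random_state=0).fit(X, y)
scores = permutation_importance(model, X, y, random_state=0).importances_mean

# AUC of attribution scores against the ground-truth relevance labels.
print("ranking AUC:", roc_auc_score(truth, scores))
```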

Classification-Specific Parts for Improving Fine-Grained Visual Categorization

DiKorsch/l1_parts 16 Sep 2019

Fine-grained visual categorization is a classification task for distinguishing categories with high intra-class and small inter-class variance.

Self-attention for raw optical Satellite Time Series Classification

marccoru/crop-type-mapping 23 Oct 2019

The amount of available Earth observation data has increased dramatically in recent years.

CXPlain: Causal Explanations for Model Interpretation under Uncertainty

d909b/cxplain NeurIPS 2019

Feature importance estimates that inform users about the degree to which given inputs influence the output of a predictive model are crucial for understanding, validating, and interpreting machine-learning models.
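The masking intuition behind such estimates can be sketched directly: score each feature by how much the model's loss grows when that feature's information is removed. CXPlain goes further and trains a separate explanation model on such targets; the snippet below only computes the raw per-feature loss deltas, with dataset and model chosen for illustration.

```python
# Score each feature by the loss increase when it is masked to its mean.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

X, y = load_diabetes(return_X_y=True)
model = LinearRegression().fit(X, y)
base_loss = mean_squared_error(y, model.predict(X))

deltas = []
for j in range(X.shape[1]):
    X_masked = X.copy()
    X_masked[:, j] = X[:, j].mean()  # remove feature j's information
    deltas.append(mean_squared_error(y, model.predict(X_masked)) - base_loss)

importance = np.maximum(deltas, 0)
importance = importance / importance.sum()  # normalize to a distribution
print(np.round(importance, 3))
```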

Explanation by Progressive Exaggeration

batmanlab/Explanation_by_Progressive_Exaggeration ICLR 2020

As machine learning methods see greater adoption and implementation in high-stakes applications such as medical image diagnosis, the need for model interpretability and explanation has become more critical.

Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI

ahmedmagdiosman/clevr-xai 16 Mar 2020

The rise of deep learning in today's applications has entailed a growing need to explain a model's decisions beyond prediction performance, in order to foster trust and accountability.

Learning to Faithfully Rationalize by Construction

successar/FRESH ACL 2020

In NLP this often entails extracting snippets of an input text "responsible for" the corresponding model output; when such a snippet comprises tokens that indeed informed the model's prediction, it is a faithful explanation.
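A toy sketch of this extract-then-predict recipe: score tokens with some saliency method, keep the top-k as the rationale, and let the downstream predictor see only that snippet, so the explanation is faithful by construction. The saliency scores below are made up for illustration; FRESH derives them from a trained model and then trains an independent classifier on the extracted text.

```python
# Extract a rationale by thresholding token saliency scores.
import numpy as np

tokens = ["the", "movie", "was", "absolutely", "wonderful", "despite", "flaws"]
saliency = np.array([0.01, 0.10, 0.02, 0.30, 0.45, 0.05, 0.07])  # assumed

k = 2
keep = np.sort(np.argsort(saliency)[::-1][:k])  # top-k tokens, original order
rationale = [tokens[i] for i in keep]
print(rationale)  # only this snippet is shown to the downstream model
```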

DBA: Distributed Backdoor Attacks against Federated Learning

AI-secure/DBA ICLR 2020

We show that, compared to standard centralized backdoors, DBA is substantially more persistent and stealthy against federated learning (FL) on diverse datasets such as finance and image data.