Feature Importance

246 papers with code • 6 benchmarks • 5 datasets

Most implemented papers

FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction

shenweichen/DeepCTR 23 May 2019

In this paper, a new model named FiBiNET, short for Feature Importance and Bilinear feature Interaction NETwork, is proposed to dynamically learn feature importance and fine-grained feature interactions.

A Unified Approach to Interpreting Model Predictions

slundberg/shap NeurIPS 2017

Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications.
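The unified framework this paper proposes (SHAP) is grounded in Shapley values from cooperative game theory. As a rough illustration of the underlying idea, not of the shap library's API, the exact Shapley value of a feature averages its marginal contribution over all subsets of the other features; a minimal sketch with a hypothetical set-valued model `f`:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, n):
    """Exact Shapley values for a set function f over n features.

    f takes a frozenset of feature indices and returns the model's
    value when only those features are 'present'. This enumeration is
    exponential in n, which is why SHAP relies on approximations.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (f(s | {i}) - f(s))
    return phi

# Toy additive model: a coalition's value is the sum of fixed
# per-feature contributions, so Shapley values recover them exactly.
contrib = {0: 2.0, 1: -1.0, 2: 0.5}
f = lambda s: sum(contrib[j] for j in s)
print(shapley_values(f, 3))  # ≈ [2.0, -1.0, 0.5]
```

For an additive model the attributions equal the per-feature contributions, matching the paper's local accuracy property.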

FAT-DeepFFM: Field Attentive Deep Field-aware Factorization Machine

PaddlePaddle/PaddleRec 15 May 2019

Although some CTR models, such as the Attentional Factorization Machine (AFM), have been proposed to model the weights of second-order interaction features, we posit that evaluating feature importance before the explicit feature-interaction step is also important for CTR prediction: when the task has many input features, the model can learn to selectively highlight informative features and suppress less useful ones.

RISE: Randomized Input Sampling for Explanation of Black-box Models

eclique/RISE 19 Jun 2018

We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments.
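RISE estimates importance by querying the black-box model on randomly masked inputs and weighting each mask by the resulting score. The paper works on images with smoothed, upsampled masks; the following is only a 1-D toy sketch of the sampling idea, with a hypothetical scalar `score_fn`:

```python
import random

def rise_importance(score_fn, x, n_masks=2000, p=0.5, seed=0):
    """1-D sketch of RISE: estimate per-feature importance of a
    black-box score_fn by averaging its scores over random binary
    masks (each feature kept independently with probability p).
    """
    rng = random.Random(seed)
    n = len(x)
    sal = [0.0] * n
    for _ in range(n_masks):
        mask = [1 if rng.random() < p else 0 for _ in range(n)]
        s = score_fn([xi * mi for xi, mi in zip(x, mask)])
        for i in range(n):
            sal[i] += s * mask[i]
    # Normalize by the expected number of times each feature is kept.
    return [v / (n_masks * p) for v in sal]

# Toy black box that only looks at feature 2, so its estimated
# saliency should dominate the other features'.
score = lambda z: z[2]
print(rise_importance(score, [1.0, 1.0, 1.0]))
```

Because only model outputs are needed, this style of estimator applies to any black box, which is what the deletion/insertion metrics in the paper evaluate.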

Attention is not Explanation

successar/AttentionExplanation NAACL 2019

Attention mechanisms have seen wide adoption in neural NLP models.

Interpretable machine learning: definitions, methods, and applications

csinva/imodels 14 Jan 2019

Official code for using / reproducing ACD (ICLR 2019) from the paper "Hierarchical interpretations for neural network predictions" https://arxiv.org/abs/1806.05337

Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees

csinva/disentangled-attribution-curves 18 May 2019

Tree ensembles, such as random forests and AdaBoost, are ubiquitous machine learning models known for achieving strong predictive performance across a wide variety of domains.

Distributed and parallel time series feature extraction for industrial big data applications

blue-yonder/tsfresh 25 Oct 2016

This problem is especially hard to solve for time series classification and regression in industrial applications such as predictive maintenance or production line optimization, for which each label or regression target is associated with several time series and meta-information simultaneously.
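The approach implemented in tsfresh maps each raw time series to a fixed-length vector of summary features, which downstream models can then rank by importance. As a rough stand-in for the hundreds of features tsfresh computes (this is not its API), a minimal sketch:

```python
from statistics import mean, stdev

def extract_features(series):
    """Toy time-series feature extraction: map one raw series to a
    fixed-length dict of summary statistics. tsfresh computes many
    such features per series and parallelizes the work; these four
    are illustrative stand-ins only.
    """
    return {
        "mean": mean(series),
        "std": stdev(series) if len(series) > 1 else 0.0,
        "max": max(series),
        "abs_energy": sum(v * v for v in series),
    }

feats = extract_features([1.0, 2.0, 3.0, 4.0])
print(feats["mean"], feats["abs_energy"])  # 2.5 30.0
```

Extracting each series independently like this is what makes the computation embarrassingly parallel across series, which the paper exploits for industrial-scale data.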

A Benchmark for Interpretability Methods in Deep Neural Networks

google-research/google-research NeurIPS 2019

We propose an empirical measure of the approximate accuracy of feature importance estimates in deep neural networks.

A Debiased MDI Feature Importance Measure for Random Forests

shifwang/paper-debiased-feature-importance NeurIPS 2019

Based on the original definition of MDI by Breiman et al. for a single tree, we derive a tight non-asymptotic bound on the expected bias of MDI importance of noisy features, showing that deep trees have higher (expected) feature selection bias than shallow ones.
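Breiman's MDI importance of a feature is the sum, over the tree nodes that split on it, of the sample-weighted impurity decrease. A minimal sketch of that definition on a hand-built toy tree (the node encoding here is invented for illustration, not the paper's code):

```python
def mdi_importance(nodes, n_features, n_total):
    """Mean Decrease in Impurity (MDI) for a single tree.

    Each internal node is a dict with the split feature, its sample
    count and impurity, and its children's (count, impurity) pairs.
    A feature's MDI sums the weighted impurity decreases of the
    nodes splitting on it; noisy features can still accumulate
    spurious decreases at deep nodes, which is the bias the paper
    bounds and corrects.
    """
    imp = [0.0] * n_features
    for node in nodes:
        parent_term = node["n"] * node["impurity"]
        child_term = sum(cn * ci for cn, ci in node["children"])
        imp[node["feature"]] += (parent_term - child_term) / n_total
    return imp

# Toy tree: the root (100 samples, impurity 0.5) splits on feature 0
# into children of 60 and 40 samples; the 60-sample child then
# splits on feature 1.
nodes = [
    {"feature": 0, "n": 100, "impurity": 0.5,
     "children": [(60, 0.3), (40, 0.2)]},
    {"feature": 1, "n": 60, "impurity": 0.3,
     "children": [(30, 0.0), (30, 0.1)]},
]
print(mdi_importance(nodes, n_features=2, n_total=100))  # ≈ [0.24, 0.15]
```

Averaging this quantity over the trees of a forest gives the familiar impurity-based importances; the paper's contribution is quantifying and removing their bias toward deep, noisy splits.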