Computationally Efficient Feature Significance and Importance for Machine Learning Models

We develop a simple and computationally efficient significance test for the features of a machine learning model. Our forward-selection approach applies to any model specification, learning task, and variable type. The test is non-asymptotic, straightforward to implement, and does not require model refitting. It identifies the statistically significant features, as well as feature interactions of any order, in a hierarchical manner, and it generates a model-free notion of feature importance. Experimental and empirical results illustrate its performance.
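To make the abstract's idea concrete, below is a minimal, illustrative sketch of a forward-selection significance loop for single features: a model is fitted once, and each candidate feature is tested by comparing the fixed model's loss against its loss when that feature's column is permuted. The permutation test statistic, the 5% threshold, and the helper `permutation_pvalue` are assumptions for illustration only; they are not the paper's exact non-asymptotic test, and the sketch omits the paper's treatment of higher-order interactions.

```python
# Illustrative sketch (assumed test statistic, NOT the paper's procedure):
# forward selection of significant features without refitting the model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic data: the response depends on features 0 and 1 only.
n, d = 500, 5
X = rng.normal(size=(n, d))
y = 2.0 * X[:, 0] + X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=n)

# Fit the model once on all features; the test below never refits it.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def permutation_pvalue(model, X, y, feature, n_perm=200, rng=rng):
    """P-value for H0: `feature` is irrelevant, by checking whether
    permuting that feature's column increases the fixed model's loss."""
    base_loss = np.mean((y - model.predict(X)) ** 2)
    exceed = 0
    for _ in range(n_perm):
        Xp = X.copy()
        Xp[:, feature] = rng.permutation(Xp[:, feature])
        perm_loss = np.mean((y - model.predict(Xp)) ** 2)
        # Under H0, permuting the feature should not increase the loss.
        exceed += perm_loss <= base_loss
    return (1 + exceed) / (1 + n_perm)

# Greedy forward pass: accept features from most to least significant
# until the best remaining candidate fails the (assumed) 5% level.
pvals = {j: permutation_pvalue(model, X, y, j) for j in range(d)}
selected = []
for j in sorted(pvals, key=pvals.get):
    if pvals[j] > 0.05:
        break
    selected.append(j)

print("significant features (most to least):", selected)
```

Because the model is fitted only once and each test reuses its predictions, the per-feature cost is a handful of forward passes, which is the computational advantage the abstract emphasizes; the ordering of accepted features also yields a simple importance ranking.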
