Feature Selection
553 papers with code • 0 benchmarks • 1 dataset
Benchmarks
These leaderboards are used to track progress in feature selection
Libraries
Use these libraries to find feature selection models and implementations.

Most implemented papers
Feature Selection: A Data Perspective
To facilitate and promote the research in this community, we also present an open-source feature selection repository that consists of most of the popular feature selection algorithms (\url{http://featureselection.asu.edu/}).
Auditing Black-box Models for Indirect Influence
It is therefore hard to acquire a deeper understanding of model behavior, and in particular how different features influence the model prediction.
Benchmarking Relief-Based Feature Selection Methods for Bioinformatics Data Mining
Modern biomedical data mining requires feature selection methods that can (1) be applied to large-scale feature spaces (e.g. `omics' data), (2) function in noisy problems, (3) detect complex patterns of association (e.g. gene-gene interactions), (4) be flexibly adapted to various problem domains and data types (e.g. genetic variants, gene expression, and clinical data), and (5) be computationally tractable.
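Since several entries here build on the Relief family, a minimal sketch of the core Relief idea may help: for each sampled instance, compare it to its nearest neighbour of the same class (hit) and of the opposite class (miss), and reward features that differ more on the miss than on the hit. This is an illustrative simplification (binary classes, numeric features, no k-neighbour averaging as in ReliefF), not the benchmarked implementations.

```python
import random

def relief(X, y, n_iters=None):
    """Basic Relief weight estimation (binary classes, numeric features).

    Features that separate an instance from its nearest miss more than
    from its nearest hit accumulate positive weight.
    """
    n, d = len(X), len(X[0])
    if n_iters is None:
        n_iters = n
    w = [0.0] * d

    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    rng = random.Random(0)
    for _ in range(n_iters):
        i = rng.randrange(n)
        hit = min((j for j in range(n) if j != i and y[j] == y[i]),
                  key=lambda j: dist(X[i], X[j]))
        miss = min((j for j in range(n) if y[j] != y[i]),
                   key=lambda j: dist(X[i], X[j]))
        for f in range(d):
            w[f] += abs(X[i][f] - X[miss][f]) - abs(X[i][f] - X[hit][f])
    return [wf / n_iters for wf in w]

# Toy data: feature 0 separates the classes, feature 1 is pure noise.
_rng = random.Random(1)
X = [[c + _rng.uniform(-0.1, 0.1), _rng.random()] for c in [0, 1] * 10]
y = [0, 1] * 10
weights = relief(X, y)
```

On this toy data the informative feature receives a clearly larger weight than the noise feature, which is the behaviour the benchmark paper's criteria (3) and (5) probe at scale.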
DeepChess: End-to-End Deep Neural Network for Automatic Learning in Chess
We present an end-to-end learning method for chess, relying on deep neural networks.
How Important Is a Neuron?
Informally, the conductance of a hidden unit of a deep network is the \emph{flow} of attribution via this hidden unit.
Model Agnostic Supervised Local Explanations
Some of the most common forms of interpretability systems are example-based, local, and global explanations.
AudioMNIST: Exploring Explainable Artificial Intelligence for Audio Analysis on a Simple Benchmark
Explainable Artificial Intelligence (XAI) is targeted at understanding how models perform feature selection and derive their classification decisions.
IMMIGRATE: A Margin-based Feature Selection Method with Interaction Terms
Relief-based algorithms have often been claimed to uncover feature interactions.
Concrete Autoencoders for Differentiable Feature Selection and Reconstruction
We introduce the concrete autoencoder, an end-to-end differentiable method for global feature selection, which efficiently identifies a subset of the most informative features and simultaneously learns a neural network to reconstruct the input data from the selected features.
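A rough sketch of the mechanism behind such a selector layer, assuming the usual Gumbel-softmax (concrete) relaxation: each selector unit holds logits over the d input features and samples a relaxed one-hot weight vector, so at low temperature the unit effectively copies a single input feature while remaining differentiable. The function below is a hypothetical forward pass only, not the paper's implementation (no decoder, no temperature annealing).

```python
import numpy as np

def concrete_select(x, logits, temperature, rng):
    """Relaxed feature selection via Gumbel-softmax samples (sketch).

    Each row of `logits` parameterises one selector unit over the d
    input features; at low temperature the sampled weight row is nearly
    one-hot, so the unit approximately copies one input feature.
    """
    # Gumbel(0, 1) noise via the inverse-CDF trick.
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    weights = np.exp((logits + gumbel) / temperature)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ x, weights

# Two selector units over five features, strongly preferring features 1 and 3.
rng = np.random.default_rng(0)
logits = np.array([[0., 10., 0., 0., 0.],
                   [0., 0., 0., 10., 0.]])
x = np.arange(5.0)
selected, weights = concrete_select(x, logits, temperature=0.1, rng=rng)
```

With a low temperature, `selected` is close to `(x[1], x[3])`; in training, the logits themselves would be learned end to end together with the reconstruction network.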
A Feature Selection Based on Perturbation Theory
Consider a supervised dataset $D=[A\mid \textbf{b}]$, where $\textbf{b}$ is the outcome column, rows of $D$ correspond to observations, and columns of $A$ are the features of the dataset.
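A loose illustration of the perturbation idea in this setting, not the paper's algorithm: add small noise to one column of $A$ at a time and record how much the least-squares residual of $\min_x \lVert Ax - \textbf{b} \rVert$ changes. Columns the fit actually relies on shift the residual; columns with (near-)zero coefficients barely matter. All names below are hypothetical.

```python
import numpy as np

def perturbation_scores(A, b, eps=1e-2, seed=0):
    """Score features by how a small column perturbation disturbs the fit.

    Illustrative sketch only: re-solve the least-squares problem after
    perturbing each column of A and compare residual norms.
    """
    rng = np.random.default_rng(seed)

    def residual(M):
        x, *_ = np.linalg.lstsq(M, b, rcond=None)
        return np.linalg.norm(M @ x - b)

    base = residual(A)
    scores = []
    for j in range(A.shape[1]):
        P = A.copy()
        P[:, j] = P[:, j] + eps * rng.standard_normal(A.shape[0])
        scores.append(abs(residual(P) - base))
    return scores

# b depends only on the first of three candidate features.
rng = np.random.default_rng(42)
A = rng.standard_normal((50, 3))
b = A @ np.array([2.0, 0.0, 0.0])
scores = perturbation_scores(A, b)
```

Here the relevant column receives a strictly larger sensitivity score than the two irrelevant ones, since the solver can absorb noise on columns whose coefficients are zero.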