Feature Engineering
392 papers with code • 1 benchmark • 5 datasets
Feature engineering is the process of taking a dataset and constructing explanatory variables — features — that can be used to train a machine learning model for a prediction problem. Often, data is spread across multiple tables and must be gathered into a single table with rows containing the observations and features in the columns.
The traditional approach to feature engineering is to build features one at a time using domain knowledge, a tedious, time-consuming, and error-prone process known as manual feature engineering. The code for manual feature engineering is problem-dependent and must be re-written for each new dataset.
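As a minimal sketch of this process, the pandas snippet below hand-crafts aggregate features from a child table and gathers them into a single table with one row per observation. The table and column names (`clients`, `loans`, `loan_amount`) are hypothetical, chosen only for illustration.

```python
import pandas as pd

# Data spread across two tables: observations (clients) and related records (loans).
clients = pd.DataFrame({"client_id": [1, 2], "age": [34, 51]})
loans = pd.DataFrame({
    "client_id": [1, 1, 2, 2, 2],
    "loan_amount": [5000, 1200, 300, 800, 1500],
})

# Manual feature engineering: aggregate the child table per client,
# using domain knowledge to pick which statistics might be predictive.
loan_feats = (
    loans.groupby("client_id")["loan_amount"]
    .agg(loan_count="count", loan_total="sum", loan_mean="mean")
    .reset_index()
)

# Gather everything into one table: rows are observations, columns are features.
features = clients.merge(loan_feats, on="client_id", how="left")
print(features)
```

Each new dataset would need its own version of this code, with different joins and aggregations, which is why the manual approach is tedious and error-prone.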
Libraries
Use these libraries to find Feature Engineering models and implementations.
Most implemented papers
Knowledge-aware Graph Neural Networks with Label Smoothness Regularization for Recommender Systems
Here we propose Knowledge-aware Graph Neural Networks with Label Smoothness regularization (KGNN-LS) to provide better recommendations.
Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization
We explore how inverse optimal control (IOC) can be used to learn behaviors from demonstrations, with applications to torque control of high-dimensional robotic systems.
DeepSurv: Personalized Treatment Recommender System Using A Cox Proportional Hazards Deep Neural Network
We introduce DeepSurv, a Cox proportional hazards deep neural network and state-of-the-art survival method that models interactions between a patient's covariates and treatment effectiveness in order to provide personalized treatment recommendations.
Transfer Learning for Sequence Tagging with Hierarchical Recurrent Networks
Recent papers have shown that neural networks obtain state-of-the-art performance on several different sequence tagging tasks.
Neural Vector Spaces for Unsupervised Information Retrieval
We propose the Neural Vector Space Model (NVSM), a method that learns representations of documents in an unsupervised manner for news article retrieval.
SMILES2Vec: An Interpretable General-Purpose Deep Neural Network for Predicting Chemical Properties
Chemical databases store information in text representations, and the SMILES format is a universal standard used in many cheminformatics software packages.
Disfluency Detection using Auto-Correlational Neural Networks
In recent years, the natural language processing community has moved away from task-specific feature engineering, i.e., researchers discovering ad-hoc feature representations for various tasks, in favor of general-purpose methods that learn the input representation by themselves.
ML-Net: multi-label classification of biomedical texts with deep neural networks
Because a document may belong to several labels simultaneously, the multi-label text classification task is often considered more challenging than binary or multi-class text classification problems.
SAFE ML: Surrogate Assisted Feature Extraction for Model Learning
Complex black-box predictive models may achieve high accuracy, but their opacity causes problems such as lack of trust, lack of stability, and sensitivity to concept drift.
Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees
Tree ensembles, such as random forests and AdaBoost, are ubiquitous machine learning models known for achieving strong predictive performance across a wide variety of domains.