Feature Engineering
392 papers with code • 1 benchmark • 5 datasets
Feature engineering is the process of taking a dataset and constructing explanatory variables — features — that can be used to train a machine learning model for a prediction problem. Often, data is spread across multiple tables and must be gathered into a single table, with observations in the rows and features in the columns.
The traditional approach to feature engineering is to build features one at a time using domain knowledge, a tedious, time-consuming, and error-prone process known as manual feature engineering. The code for manual feature engineering is problem-dependent and must be re-written for each new dataset.
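As a minimal sketch of the gathering step described above — the table names, columns, and aggregates here are hypothetical, chosen only for illustration — a child table can be aggregated per entity and joined onto the parent table with pandas:

```python
import pandas as pd

# Hypothetical toy data: customers and their transactions live in separate tables.
customers = pd.DataFrame({"customer_id": [1, 2], "age": [34, 51]})
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "amount": [20.0, 35.0, 10.0, 5.0, 12.0],
})

# Manual feature engineering: aggregate the child table per customer,
# then join the new features onto a single training table.
agg = (
    transactions.groupby("customer_id")["amount"]
    .agg(total_spend="sum", mean_spend="mean", n_transactions="count")
    .reset_index()
)
features = customers.merge(agg, on="customer_id", how="left")
print(features)
```

The choice of aggregates (sum, mean, count) is exactly the kind of problem-dependent decision that manual feature engineering requires for each new dataset.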
Latest papers with no code
PIPNet3D: Interpretable Detection of Alzheimer in MRI Scans
Information from neuroimaging examinations (CT, MRI) is increasingly used to support diagnoses of dementia, e.g., Alzheimer's disease.
Thelxinoë: Recognizing Human Emotions Using Pupillometry and Machine Learning
In this study, we present a method for emotion recognition in Virtual Reality (VR) using pupillometry.
VCR-Graphormer: A Mini-batch Graph Transformer via Virtual Connections
Therefore, mini-batch training for graph transformers is a promising direction, but the limited samples in each mini-batch cannot support effective dense attention to encode informative representations.
Utilizing the LightGBM Algorithm for Operator User Credit Assessment Research
First, key features are extracted from the massive user-evaluation data provided by operators through data preprocessing and feature engineering, constructing a multi-dimensional feature set with statistical significance; then, linear regression, decision tree, LightGBM, and other machine learning algorithms are used to build multiple base models and identify the best one; finally, Averaging, Voting, Blending, Stacking, and other ensemble algorithms are combined to refine multiple fusion models, and the fusion model best suited to operator user evaluation is established.
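The base-model-plus-stacking pipeline described in that abstract can be sketched with scikit-learn. This is not the paper's implementation: the data is synthetic, and `GradientBoostingRegressor` stands in for LightGBM so the sketch needs only scikit-learn.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Hypothetical stand-in data for the operator user-evaluation features.
X, y = make_regression(n_samples=200, n_features=10, noise=0.1, random_state=0)

# Several base models, as in the abstract (GradientBoosting substitutes for LightGBM).
base_models = [
    ("linear", LinearRegression()),
    ("tree", DecisionTreeRegressor(max_depth=4, random_state=0)),
    ("gbm", GradientBoostingRegressor(random_state=0)),
]

# Stacking: a meta-learner is fit on the cross-validated
# predictions of the base models.
stack = StackingRegressor(estimators=base_models, final_estimator=LinearRegression())
stack.fit(X, y)
r2 = stack.score(X, y)
```

Averaging, voting, and blending differ mainly in how the base-model predictions are combined; stacking learns that combination from data.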
DreamSampler: Unifying Diffusion Sampling and Score Distillation for Image Manipulation
Reverse sampling and score-distillation have emerged as main workhorses in recent years for image manipulation using latent diffusion models (LDMs).
Automated data processing and feature engineering for deep learning and big data applications: a survey
In addition to automating specific data processing tasks, we discuss the use of AutoML methods and tools to simultaneously optimize all stages of the machine learning pipeline.
Scheduled Knowledge Acquisition on Lightweight Vector Symbolic Architectures for Brain-Computer Interfaces
Knowledge distillation is a popular method for improving the accuracy of a small model.
Uncertainty estimation in spatial interpolation of satellite precipitation with ensemble learning
This demonstrates the potential of stacking to improve probabilistic predictions in spatial interpolation and beyond.
The Impact of Frequency Bands on Acoustic Anomaly Detection of Machines using Deep Learning Based Model
In this paper, we propose a deep learning based model for Acoustic Anomaly Detection of Machines, the task of detecting abnormal machines by analysing machine sound.
Defect Detection in Tire X-Ray Images: Conventional Methods Meet Deep Structures
This paper introduces a robust approach for automated defect detection in tire X-ray images by harnessing traditional feature extraction methods such as Local Binary Pattern (LBP) and Gray Level Co-Occurrence Matrix (GLCM) features, as well as Fourier and Wavelet-based features, complemented by advanced machine learning techniques.