Model Selection
497 papers with code • 0 benchmarks • 1 dataset
Given a set of candidate models, the goal of Model Selection is to select the model that best approximates the observed data and captures its underlying regularities. Model Selection criteria are defined so that they strike a balance between the goodness of fit and the generalizability or complexity of the models.
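As a minimal illustration of such a criterion, the sketch below selects a polynomial degree with AIC, which trades a goodness-of-fit term against a parameter-count penalty. This is a hypothetical example of the general idea, not the method of any paper listed on this page; the data and degree range are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 60)
# Synthetic data whose true generating model is a degree-2 polynomial.
y = 1.0 - 2.0 * x + 3.0 * x**2 + 0.1 * rng.normal(size=x.size)

def aic(x, y, degree):
    """Akaike information criterion for a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    k = degree + 1                        # number of fitted parameters
    n = y.size
    return n * np.log(rss / n) + 2 * k    # fit term + complexity penalty

scores = {d: aic(x, y, d) for d in range(6)}
best = min(scores, key=scores.get)        # degree with the lowest AIC
```

Underfitting models (degrees 0 and 1) score poorly through the fit term, while needlessly complex ones pay the `2k` penalty; the selected model sits in between.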
Benchmarks
These leaderboards are used to track progress in Model Selection
Libraries
Use these libraries to find Model Selection models and implementations
Most implemented papers
Predictive Multiplicity in Classification
We apply our tools to measure predictive multiplicity in recidivism prediction problems.
Deep Learning Algorithms for Rotating Machinery Intelligent Diagnosis: An Open Source Benchmark Study
Second, we integrate all of the evaluation code into a code library, which we release to the public to support further development of this field.
Automatic Catalog of RRLyrae from $\sim$ 14 million VVV Light Curves: How far can we go with traditional machine-learning?
Finally, we show that the use of ensemble classifiers helps resolve the crucial model selection step, and that most errors in the identification of RRLs are related to low-quality observations of some sources or to the difficulty of resolving the RRL-C type given the data.
Statistical Inference of Minimally Complex Models
These are spin models, with interactions of arbitrary order, that are composed of independent components of minimal complexity (Beretta et al., 2018).
Cardea: An Open Automated Machine Learning Framework for Electronic Health Records
An estimated 180 papers focusing on deep learning and EHR were published between 2010 and 2018.
Laplace Redux -- Effortless Bayesian Deep Learning
Bayesian formulations of deep learning have been shown to have compelling theoretical properties and offer practical functional benefits, such as improved predictive uncertainty quantification and model selection.
VolcanoML: Speeding up End-to-End AutoML via Scalable Search Space Decomposition
End-to-end AutoML has attracted intensive interest from both academia and industry: it automatically searches for ML pipelines in a space induced by feature engineering, algorithm/model selection, and hyper-parameter tuning.
LEATHER: A Framework for Learning to Generate Human-like Text in Dialogue
From this insight, we propose a new algorithm and demonstrate empirically that it improves both the task success and the human-likeness of the generated text.
Bolasso: model consistent Lasso estimation through the bootstrap
For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection (i.e., variable selection).
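The core Bolasso procedure can be sketched in a few lines: fit the Lasso on bootstrap resamples of the data and keep only the variables selected in every replicate. The minimal ISTA solver below is a stand-in for any Lasso implementation, and the data, penalty value, and replicate count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimal Lasso solver via iterative soft-thresholding (ISTA);
    a stand-in for any off-the-shelf Lasso implementation."""
    n, p = X.shape
    w = np.zeros(p)
    step = n / (np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant of gradient
    for _ in range(n_iter):
        w = w - step * (X.T @ (X @ w - y) / n)                     # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lam * step, 0.0)   # soft-threshold
    return w

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:3] = [2.0, -1.5, 1.0]            # only the first 3 variables matter
y = X @ w_true + 0.1 * rng.normal(size=n)

supports = []
for _ in range(32):                       # bootstrap replicates
    idx = rng.integers(0, n, size=n)      # resample rows with replacement
    supports.append(np.abs(lasso_ista(X[idx], y[idx], lam=0.1)) > 1e-6)
selected = np.logical_and.reduce(supports)   # Bolasso: intersect the supports
```

Intersecting the supports discards variables that any single Lasso run picks up only by chance, which is what drives the model-consistency result.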
Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models
In this paper, we present StARS: a new stability-based method for choosing the regularization parameter in high dimensional inference for undirected graphs.
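The StARS criterion itself is simple to sketch: estimate the graph on many subsamples, record how often each edge is selected, and summarize instability as the average of 2θ(1−θ) over edges. The toy graph estimator below (thresholded partial correlations) is a deliberately simplified stand-in for the graphical lasso the paper actually uses; all data and parameter values are illustrative assumptions.

```python
import numpy as np

def edges(X, thresh):
    """Toy graph estimator: threshold absolute partial correlations.
    (A simplified stand-in for the graphical lasso used in the paper.)"""
    prec = np.linalg.inv(np.cov(X, rowvar=False) + 1e-3 * np.eye(X.shape[1]))
    d = np.sqrt(np.abs(np.diag(prec)))
    pcorr = np.abs(prec) / np.outer(d, d)
    np.fill_diagonal(pcorr, 0.0)
    return pcorr > thresh

def stars_instability(X, thresh, n_sub=20, frac=0.7, seed=0):
    """Average edge instability 2*theta*(1-theta) across subsamples."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    theta = np.zeros((p, p))
    for _ in range(n_sub):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        theta += edges(X[idx], thresh)
    theta /= n_sub                        # per-edge selection frequency
    xi = 2.0 * theta * (1.0 - theta)      # per-edge instability
    return xi.sum() / (p * (p - 1))       # average over off-diagonal pairs

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
X[:, 1] += X[:, 0]                        # one genuine conditional dependence
D = {t: stars_instability(X, t) for t in (0.05, 0.1, 0.2, 0.4)}
# StARS keeps the least-regularized graph whose (monotonized) instability
# stays below a small cutoff beta; the paper suggests beta around 0.05.
beta = 0.05
stable = [t for t, d in D.items() if d <= beta]
```

Heavy regularization (here, a high threshold) yields a sparse graph that is reproduced on every subsample, so its instability is near zero; StARS backs off from there only as far as stability permits.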