Model Selection

497 papers with code • 0 benchmarks • 1 dataset

Given a set of candidate models, the goal of Model Selection is to select the model that best approximates the observed data and captures its underlying regularities. Model Selection criteria are defined to strike a balance between goodness of fit and model complexity, which governs how well a model generalizes.
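As a concrete illustration of this trade-off, two classical criteria are AIC and BIC, which penalize a model's maximized log-likelihood by its number of parameters. The sketch below uses made-up log-likelihoods for three hypothetical candidate models; the numbers are not taken from any paper on this page:

```python
import math

def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2*lnL (lower is better)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k*ln(n) - 2*lnL (lower is better)."""
    return k * math.log(n) - 2 * log_likelihood

# Candidate models: (name, maximized log-likelihood, number of parameters).
# A richer model fits better (higher lnL) but pays a larger complexity penalty.
candidates = [
    ("linear", -120.4, 3),
    ("quadratic", -115.1, 4),
    ("cubic", -114.9, 5),
]
n = 100  # sample size

best_aic = min(candidates, key=lambda m: aic(m[1], m[2]))
best_bic = min(candidates, key=lambda m: bic(m[1], m[2], n))
print(best_aic[0], best_bic[0])  # both pick "quadratic" here
```

Here the cubic model's small gain in log-likelihood does not pay for its extra parameter, so both criteria settle on the quadratic model.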

Source: Kernel-based Information Criterion

Most implemented papers

Predictive Multiplicity in Classification

charliemarx/pmtools ICML 2020

We apply our tools to measure predictive multiplicity in recidivism prediction problems.

Deep Learning Algorithms for Rotating Machinery Intelligent Diagnosis: An Open Source Benchmark Study

ZhaoZhibin/DL-based-Intelligent-Diagnosis-Benchmark 6 Mar 2020

Second, we integrate the whole evaluation codes into a code library and release this code library to the public for better development of this field.

Automatic Catalog of RRLyrae from ∼14 million VVV Light Curves: How far can we go with traditional machine-learning?

carpyncho/carpyncho-py 1 May 2020

Finally, we show that the use of ensemble classifiers helps resolve the crucial model selection step, and that most errors in the identification of RRLs are related to low-quality observations of some sources or to the difficulty of resolving the RRL-C type given the data.

Statistical Inference of Minimally Complex Models

clelidm/MinCompSpin 2 Aug 2020

These are spin models, with interactions of arbitrary order, that are composed of independent components of minimal complexity (Beretta et al., 2018).

Cardea: An Open Automated Machine Learning Framework for Electronic Health Records

DAI-Lab/Cardea 1 Oct 2020

An estimated 180 papers focusing on deep learning and EHR were published between 2010 and 2018.

Laplace Redux -- Effortless Bayesian Deep Learning

AlexImmer/Laplace NeurIPS 2021

Bayesian formulations of deep learning have been shown to have compelling theoretical properties and offer practical functional benefits, such as improved predictive uncertainty quantification and model selection.

VolcanoML: Speeding up End-to-End AutoML via Scalable Search Space Decomposition

PKU-DAIR/mindware 19 Jul 2021

End-to-end AutoML has attracted intense interest from both academia and industry: it automatically searches for ML pipelines in a space induced by feature engineering, algorithm/model selection, and hyper-parameter tuning.

LEATHER: A Framework for Learning to Generate Human-like Text in Dialogue

anthonysicilia/leather-aacl2022 14 Oct 2022

From this insight, we propose a new algorithm, and empirically, we demonstrate our proposal improves both task-success and human-likeness of the generated text.

Bolasso: model consistent Lasso estimation through the bootstrap

dmolitor/bolasso 8 Apr 2008

For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection (i.e., variable selection).
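The core of the Bolasso procedure is running the Lasso on several bootstrap resamples and keeping only the variables selected in every resample. The sketch below illustrates just that intersection rule (it is not the authors' code); the per-resample supports are stand-ins for the nonzero-coefficient sets a Lasso fit would return:

```python
# Hypothetical supports returned by a Lasso fit on each bootstrap resample:
# the true variables {0, 2, 5} are picked every time, while spurious
# variables appear only on some resamples.
supports = [
    {0, 2, 5},       # bootstrap sample 1
    {0, 2, 5, 7},    # bootstrap sample 2 (7 is a spurious pick)
    {0, 2, 5, 3},    # bootstrap sample 3 (3 is a spurious pick)
]

# Bolasso estimate of the support: intersect across resamples.
bolasso_support = set.intersection(*supports)
print(sorted(bolasso_support))  # [0, 2, 5]
```

The intersection discards variables that are only selected by chance on individual resamples, which is what yields the paper's model-consistency result.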

Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models

zdk123/SpiecEasi NeurIPS 2010

In this paper, we present StARS: a new stability-based method for choosing the regularization parameter in high dimensional inference for undirected graphs.
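The StARS idea can be sketched with made-up numbers (this is not the SpiecEasi implementation): for each regularization value, estimate the graph on many subsamples, record how often each edge is selected, and measure instability as 2·θ·(1−θ) averaged over edges. One then keeps the least-regularized value whose monotonized instability stays below a threshold β:

```python
beta = 0.05  # instability threshold (the value suggested in the StARS paper)

# Regularization values sorted from most to least regularization, each with
# hypothetical per-edge selection frequencies across subsamples.
path = [
    (0.50, [0.0, 0.0, 1.0]),
    (0.30, [0.05, 0.0, 1.0]),
    (0.10, [0.2, 0.1, 0.9]),
    (0.05, [0.5, 0.4, 0.6]),
]

def instability(freqs):
    # 2*theta*(1-theta) is the variance of edge selection; 0 when an edge is
    # always or never selected, maximal at theta = 0.5.
    return sum(2 * t * (1 - t) for t in freqs) / len(freqs)

# Monotonize: instability is only allowed to grow as regularization decreases.
chosen, running_max = None, 0.0
for lam, freqs in path:
    running_max = max(running_max, instability(freqs))
    if running_max <= beta:
        chosen = lam  # least-regularized lambda still below the threshold
print(chosen)  # 0.3
```

In this toy path, the graph stays stable down to λ = 0.3, then edge selection becomes erratic, so StARS stops there rather than picking a denser, unstable graph.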