Model Selection
495 papers with code • 0 benchmarks • 1 dataset
Given a set of candidate models, the goal of Model Selection is to select the model that best approximates the observed data and captures its underlying regularities. Model Selection criteria are defined to strike a balance between the goodness of fit and the generalizability or complexity of the models.
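Two classical criteria of this kind are AIC and BIC, which penalize a model's maximized log-likelihood by its number of free parameters. A minimal stdlib-only sketch (the log-likelihood values and candidate names below are illustrative, not drawn from any real dataset):

```python
import math

def aic(log_likelihood: float, k: int) -> float:
    """Akaike Information Criterion: 2k - 2*ln(L_hat)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood: float, k: int, n: int) -> float:
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L_hat)."""
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical candidates: (name, maximized log-likelihood, parameter count).
candidates = [("linear", -120.0, 3), ("quadratic", -115.0, 4), ("cubic", -114.5, 5)]
n = 100  # sample size

# A lower criterion value indicates a better fit/complexity trade-off:
# the cubic model fits slightly better but pays a larger penalty.
best = min(candidates, key=lambda m: bic(m[1], m[2], n))
print(best[0])  # quadratic
```

Here BIC's stronger penalty (k·ln n vs. AIC's 2k) rejects the extra cubic parameter, which only improves the log-likelihood by 0.5.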
Benchmarks
These leaderboards are used to track progress in Model Selection
Libraries
Use these libraries to find Model Selection models and implementations.
Most implemented papers
Variational Bayesian Monte Carlo
We introduce Variational Bayesian Monte Carlo (VBMC), a novel sample-efficient inference framework.
Model Evaluation, Model Selection, and Algorithm Selection in Machine Learning
The correct use of model evaluation, model selection, and algorithm selection techniques is vital in academic machine learning research as well as in many industrial settings.
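The core workflow these techniques formalize is: hold data out, score each candidate on the held-out folds, and pick the best average score. A stdlib-only sketch of k-fold cross-validation, where the candidate "models" are simple constant predictors (mean vs. median), purely for illustration:

```python
import statistics

def k_fold_indices(n: int, k: int):
    """Yield (train, test) index lists for k roughly equal folds."""
    fold = n // k
    idx = list(range(n))
    for i in range(k):
        test = idx[i * fold:(i + 1) * fold] if i < k - 1 else idx[i * fold:]
        train = [j for j in idx if j not in test]
        yield train, test

def cv_error(y, fit, k=5):
    """Mean squared error of estimator `fit`, averaged over k held-out folds."""
    errors = []
    for train, test in k_fold_indices(len(y), k):
        pred = fit([y[j] for j in train])  # fit on training fold only
        errors.append(statistics.fmean((y[j] - pred) ** 2 for j in test))
    return statistics.fmean(errors)

# Toy data with one outlier (10.0), purely for demonstration.
y = [1.0, 2.0, 2.0, 3.0, 10.0, 2.0, 1.0, 3.0, 2.0, 2.5]
candidates = {"mean": statistics.fmean, "median": statistics.median}
best = min(candidates, key=lambda name: cv_error(y, candidates[name]))
print(best)  # median: more robust to the outlier under held-out MSE
```

The same select-by-held-out-score pattern generalizes to any estimator that exposes a fit/predict interface.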
Large Scale Correlation Clustering Optimization
This analogy allows us to suggest several new optimization algorithms, which exploit the intrinsic "model-selection" capability of the functional to automatically recover the underlying number of clusters.
Scikit-learn: Machine Learning in Python
Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems.
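For instance, scikit-learn's `model_selection` module provides cross-validated hyperparameter search out of the box. A minimal sketch, assuming scikit-learn and NumPy are installed; the regression data below is synthetic and only for demonstration:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# Synthetic linear data with small noise (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, 0.5, -2.0]) + rng.normal(scale=0.1, size=100)

# 5-fold cross-validated grid search over the regularization strength.
search = GridSearchCV(Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0]}, cv=5)
search.fit(X, y)
print(search.best_params_)
```

`GridSearchCV` scores every candidate `alpha` by cross-validation and refits the winner on the full data, so `search` can then be used directly as the selected model.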
HybridSVD: When Collaborative Information is Not Enough
We propose a new hybrid algorithm that allows incorporating both user and item side information within the standard collaborative filtering technique.
A comparison of methods for model selection when estimating individual treatment effects
Instead of relying on a single method, multiple models fit by a diverse set of algorithms should be evaluated against each other using an objective function learned from the validation set.
Automatic Gradient Boosting
Automatic machine learning performs predictive modeling with high-performing machine learning tools without human intervention.
Testing Conditional Independence in Supervised Learning Algorithms
We propose the conditional predictive impact (CPI), a consistent and unbiased estimator of the association between one or several features and a given outcome, conditional on a reduced feature set.
Forecasting with time series imaging
Recently, the use of time series features for forecast model averaging has been an emerging research focus in the forecasting community.
Interpretable multiclass classification by MDL-based rule lists
Interpretable classifiers have recently witnessed an increase in attention from the data mining community because they are inherently easier to understand and explain than their more complex counterparts.