Model Selection

495 papers with code • 0 benchmarks • 1 dataset

Given a set of candidate models, the goal of Model Selection is to choose the model that best approximates the observed data and captures its underlying regularities. Model Selection criteria are defined to strike a balance between goodness of fit and model complexity, and hence the generalizability of the models.

Source: Kernel-based Information Criterion
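
For instance, classical criteria such as AIC and BIC trade the maximized log-likelihood off against a penalty on the number of parameters. A minimal sketch of comparing candidates this way (the candidate models, likelihoods, and parameter counts below are made up for illustration):

```python
import numpy as np

def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k ln n - 2 ln L (lower is better)."""
    return k * np.log(n) - 2 * log_likelihood

# Hypothetical candidates: (name, maximized log-likelihood, number of parameters)
candidates = [("linear", -120.4, 3), ("quadratic", -115.1, 4), ("cubic", -114.9, 5)]
n = 100  # number of observations

best = min(candidates, key=lambda m: bic(m[1], m[2], n))
print("Selected by BIC:", best[0])
```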

Most implemented papers

BERTScore: Evaluating Text Generation with BERT

Tiiiger/bert_score ICLR 2020

We propose BERTScore, an automatic evaluation metric for text generation.
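
A minimal usage sketch with the authors' bert-score package (assuming `pip install bert-score`; check the repo for the interface of your installed version):

```python
from bert_score import score

candidates = ["the cat sat on the mat"]
references = ["a cat was sitting on the mat"]

# Returns per-sentence precision, recall, and F1 tensors.
P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(f"BERTScore F1: {F1.mean().item():.3f}")
```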

Population Based Training of Neural Networks

MattKleinsmith/pbt 27 Nov 2017

Neural networks dominate the modern machine learning landscape, but their training and success still suffer from sensitivity to empirical choices of hyperparameters such as model architecture, loss function, and optimisation algorithm.
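
A toy sketch of the exploit/explore loop behind Population Based Training (the `step`/`eval` methods on the population members are hypothetical; real implementations run workers asynchronously):

```python
import copy
import random

def pbt(population, steps=20, perturb=0.2):
    """population: list of dicts with 'model', 'hparams', 'score'."""
    for _ in range(steps):
        for member in population:
            member["model"].step(member["hparams"])   # partial training
            member["score"] = member["model"].eval()  # evaluate current weights
        population.sort(key=lambda m: m["score"], reverse=True)
        top = population[: len(population) // 4]
        bottom = population[-(len(population) // 4):]
        for loser in bottom:
            winner = random.choice(top)
            # exploit: copy a stronger member's weights and hyperparameters
            loser["model"] = copy.deepcopy(winner["model"])
            # explore: perturb the copied hyperparameters
            loser["hparams"] = {k: v * random.uniform(1 - perturb, 1 + perturb)
                                for k, v in winner["hparams"].items()}
    return max(population, key=lambda m: m["score"])
```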

In Search of Lost Domain Generalization

facebookresearch/DomainBed ICLR 2021

As a first step, we realize that model selection is non-trivial for domain generalization tasks.
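
One of the selection strategies the paper compares is training-domain validation, where hyperparameters are chosen on held-out splits of the training domains only. A rough sketch of that heuristic (the `split`, `train_model`, and `accuracy` helpers are hypothetical, not the DomainBed API):

```python
def select_by_training_domain_validation(configs, train_domains, train_model, accuracy):
    """Pick the config with the best average accuracy on held-out splits
    of the *training* domains; the test domain is never touched."""
    best_cfg, best_val = None, float("-inf")
    for cfg in configs:
        splits = [d.split(holdout_fraction=0.2) for d in train_domains]  # (train, val) pairs
        model = train_model(cfg, [tr for tr, _ in splits])
        val = sum(accuracy(model, va) for _, va in splits) / len(splits)
        if val > best_val:
            best_cfg, best_val = cfg, val
    return best_cfg
```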

Data Splits and Metrics for Method Benchmarking on Surgical Action Triplet Datasets

CAMMA-public/cholect45 11 Apr 2022

We also develop a metrics library, ivtmetrics, for model evaluation on surgical triplets.

Deep Domain Confusion: Maximizing for Domain Invariance

erlendd/ddan 10 Dec 2014

Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark.
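
Deep Domain Confusion adds a domain-confusion term, an MMD penalty between source and target activations, to the classification loss. A minimal PyTorch-style sketch of the linear-kernel variant (feature tensors are placeholders; the original work attaches an adaptation layer to a CNN):

```python
import torch

def mmd_linear(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """Squared distance between the mean source and target embeddings."""
    delta = source_feats.mean(dim=0) - target_feats.mean(dim=0)
    return torch.dot(delta, delta)

# Total objective: classification loss plus a weighted domain-confusion penalty, e.g.
# loss = ce_loss + lam * mmd_linear(f_src, f_tgt)
```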

metric-learn: Metric Learning Algorithms in Python

scikit-learn-contrib/metric-learn 13 Aug 2019

metric-learn is an open source Python package implementing supervised and weakly-supervised distance metric learning algorithms.
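
A minimal usage sketch with one of the package's scikit-learn-compatible estimators (NCA shown here; see the documentation for the full list of algorithms):

```python
from metric_learn import NCA
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

nca = NCA(max_iter=100)          # supervised metric learner with a fit/transform interface
nca.fit(X, y)
X_embedded = nca.transform(X)    # data mapped into the learned metric space
print(X_embedded.shape)
```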

Conditional Density Estimation Tools in Python and R with Applications to Photometric Redshifts and Likelihood-Free Cosmological Inference

tpospisi/rfcde 30 Aug 2019

We provide sample code in Python and R as well as examples of applications to photometric redshift estimation and likelihood-free cosmological inference via CDE.

Neural Vector Spaces for Unsupervised Information Retrieval

cvangysel/cuNVSM 9 Aug 2017

We propose the Neural Vector Space Model (NVSM), a method that learns representations of documents in an unsupervised manner for news article retrieval.

Learning Sparse Neural Networks through $L_0$ Regularization

AMLab-Amsterdam/L0_regularization 4 Dec 2017

We further propose the hard concrete distribution for the gates, which is obtained by "stretching" a binary concrete distribution and then transforming its samples with a hard-sigmoid.
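
A sketch of sampling a hard concrete gate via the stretch-and-rectify construction described in the paper, using the commonly reported constants gamma = -0.1, zeta = 1.1, beta = 2/3:

```python
import torch

def hard_concrete_sample(log_alpha, beta=2 / 3, gamma=-0.1, zeta=1.1):
    """Sample gates z in [0, 1] with point masses at exactly 0 and 1."""
    u = torch.rand_like(log_alpha)
    # binary concrete sample
    s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + log_alpha) / beta)
    s_bar = s * (zeta - gamma) + gamma    # "stretch" to the interval (gamma, zeta)
    return torch.clamp(s_bar, 0.0, 1.0)   # hard-sigmoid rectification
```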

Tune: A Research Platform for Distributed Model Selection and Training

ray-project/ray 13 Jul 2018

We show that this interface meets the requirements for a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation.
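
A minimal hyperparameter-search sketch using the classic `tune.run` interface from around the time of the paper (newer Ray releases expose an equivalent `Tuner` API, so adjust to your installed version):

```python
from ray import tune

def trainable(config):
    # Stand-in objective: report a score for each sampled configuration.
    score = (config["lr"] - 0.1) ** 2
    tune.report(loss=score)

analysis = tune.run(
    trainable,
    config={"lr": tune.grid_search([0.01, 0.05, 0.1, 0.5])},
)
print(analysis.get_best_config(metric="loss", mode="min"))
```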