Model Selection

495 papers with code • 0 benchmarks • 1 dataset

Given a set of candidate models, the goal of Model Selection is to select the model that best approximates the observed data and captures its underlying regularities. Model Selection criteria are defined so that they strike a balance between goodness of fit and the generalizability or complexity of the models.

Source: Kernel-based Information Criterion
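
To make the criterion idea concrete, here is a minimal sketch (not taken from the source) that compares hypothetical polynomial models with the Akaike Information Criterion (AIC), which rewards goodness of fit while penalizing the number of parameters. The data, model family, and degrees are assumptions made purely for illustration.

```python
# Minimal model-selection sketch: score candidate polynomial models with AIC
# and pick the one that balances fit quality against parameter count.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100)
# Synthetic data: the true generating model is quadratic plus noise.
y = 1.5 * x - 2.0 * x**2 + rng.normal(scale=0.1, size=x.size)

def aic(y_true, y_pred, n_params):
    """Gaussian AIC up to an additive constant: n * log(RSS / n) + 2k."""
    n = y_true.size
    rss = np.sum((y_true - y_pred) ** 2)
    return n * np.log(rss / n) + 2 * n_params

scores = {}
for degree in range(1, 6):
    coeffs = np.polyfit(x, y, degree)          # fit candidate model
    y_hat = np.polyval(coeffs, x)              # in-sample predictions
    scores[degree] = aic(y, y_hat, n_params=degree + 1)

best_degree = min(scores, key=scores.get)      # lowest AIC wins
print("AIC per degree:", scores)
print("Selected model: polynomial of degree", best_degree)
```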

Latest papers with no code

A Sentiment Analysis of Medical Text Based on Deep Learning

no code yet • 16 Apr 2024

One of the research directions in text sentiment analysis is sentiment analysis of medical texts, which holds great potential for application in clinical diagnosis.

On the Necessity of Collaboration in Online Model Selection with Decentralized Data

no code yet • 15 Apr 2024

We consider online model selection with decentralized data over $M$ clients, and study a fundamental problem: the necessity of collaboration.

Measuring Domain Shifts using Deep Learning Remote Photoplethysmography Model Similarity

no code yet • 12 Apr 2024

Domain shifts between the training data of deep learning models and the deployment context can result in severe performance issues for models that fail to generalize.

Dimension-free Relaxation Times of Informed MCMC Samplers on Discrete Spaces

no code yet • 5 Apr 2024

Convergence analysis of Markov chain Monte Carlo methods in high-dimensional statistical applications has attracted increasing attention.

A Methodology for Improving Accuracy of Embedded Spiking Neural Networks through Kernel Size Scaling

no code yet • 2 Apr 2024

Spiking Neural Networks (SNNs) can offer ultra-low power/energy consumption for machine learning-based applications due to their sparse spike-based operations.

Beyond One-Size-Fits-All: Multi-Domain, Multi-Task Framework for Embedding Model Selection

no code yet • 30 Mar 2024

This position paper proposes a systematic approach towards developing a framework to help select the most effective embedding models for natural language processing (NLP) tasks, addressing the challenge posed by the proliferation of both proprietary and open-source encoder models.

Individual Text Corpora Predict Openness, Interests, Knowledge and Level of Education

no code yet • 29 Mar 2024

For training and validation, we relied on 179 participants and held out a test sample of 35 participants.

Bayesian Nonparametrics: An Alternative to Deep Learning

no code yet • 29 Mar 2024

Bayesian nonparametric models offer a flexible and powerful framework for statistical model selection, enabling the adaptation of model complexity to the intricacies of diverse datasets.
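
As a hedged illustration of that idea (not code from the paper), the sketch below fits a Dirichlet-process Gaussian mixture to synthetic data with scikit-learn's BayesianGaussianMixture: the model is given a generous component budget and effectively prunes unneeded components by assigning them near-zero weights, adapting its complexity to the data.

```python
# Dirichlet-process mixture: complexity adapts to the data rather than being fixed.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Two well-separated clusters; the model is not told how many there are.
X = np.vstack([
    rng.normal(loc=-3.0, scale=0.5, size=(200, 2)),
    rng.normal(loc=+3.0, scale=0.5, size=(200, 2)),
])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                   # upper bound, not a fixed choice
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

# Components with negligible weight have been effectively switched off.
active = int(np.sum(dpgmm.weights_ > 0.01))
print(f"Effective number of components: {active} (out of 10 allowed)")
```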

EL-MLFFs: Ensemble Learning of Machine Learning Force Fields

no code yet • 26 Mar 2024

Machine learning force fields (MLFFs) have emerged as a promising approach to bridge the accuracy of quantum mechanical methods and the efficiency of classical force fields.

Carbon Intensity-Aware Adaptive Inference of DNNs

no code yet • 23 Mar 2024

DNN inference, known for its significant energy consumption and the resulting high carbon footprint, can be made more sustainable by adapting model size and accuracy to the varying carbon intensity throughout the day.
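
A minimal, purely hypothetical sketch of that scheme is shown below: the current grid carbon intensity is mapped to a model variant of appropriate size. The thresholds, variant names, and intensity values are illustrative assumptions, not details taken from the paper.

```python
# Carbon-intensity-aware model selection: trade accuracy for efficiency
# when the grid is carbon-intensive. All thresholds and names are hypothetical.
def select_model_variant(carbon_intensity_g_per_kwh: float) -> str:
    """Map current carbon intensity (gCO2/kWh) to a DNN variant."""
    if carbon_intensity_g_per_kwh < 150:
        return "large-model"    # clean grid: spend energy on higher accuracy
    elif carbon_intensity_g_per_kwh < 400:
        return "medium-model"
    else:
        return "small-model"    # carbon-intensive grid: favour efficiency

for intensity in (100, 250, 550):
    print(intensity, "gCO2/kWh ->", select_model_variant(intensity))
```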