Classifier calibration

14 papers with code • 1 benchmark • 1 dataset

Confidence calibration – the problem of predicting probability estimates representative of the true correctness likelihood – is important for classification models in many applications. Two common calibration metrics are the Expected Calibration Error (ECE), the sample-weighted average gap between confidence and accuracy over confidence bins, and the Maximum Calibration Error (MCE), the largest such gap over the bins.
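
A minimal sketch of how both metrics are typically computed from a model's predictions, assuming equal-width bins over the maximum softmax probability (function and variable names are illustrative, not taken from any particular library):

```python
import numpy as np

def calibration_errors(confidences, predictions, labels, n_bins=15):
    """Compute ECE and MCE using equal-width confidence bins.

    confidences: maximum softmax probability per sample, shape (N,)
    predictions: predicted class per sample, shape (N,)
    labels:      ground-truth class per sample, shape (N,)
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)

    ece, mce = 0.0, 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        bin_acc = correct[in_bin].mean()       # accuracy within the bin
        bin_conf = confidences[in_bin].mean()  # average confidence within the bin
        gap = abs(bin_acc - bin_conf)
        ece += in_bin.mean() * gap             # weight the gap by the fraction of samples in the bin
        mce = max(mce, gap)
    return ece, mce
```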

Most implemented papers

Packed-Ensembles for Efficient Uncertainty Estimation

ENSTA-U2IS-AI/torch-uncertainty • 17 Oct 2022

Deep Ensembles (DE) are a prominent approach for achieving excellent performance on key metrics such as accuracy, calibration, uncertainty estimation, and out-of-distribution detection.
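
For context, the Deep Ensembles baseline that Packed-Ensembles makes cheaper averages the softmax outputs of several independently trained networks. A minimal PyTorch-style sketch of that baseline (not the Packed-Ensembles method itself; torch-uncertainty ships its own implementation):

```python
import torch

@torch.no_grad()
def ensemble_predict(models, x):
    """Average the softmax probabilities of independently trained ensemble members."""
    probs = [torch.softmax(m(x), dim=-1) for m in models]  # one (batch, classes) tensor per member
    return torch.stack(probs).mean(dim=0)                  # averaged predictive distribution
```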

FedFA: Federated Learning with Feature Anchors to Align Features and Classifiers for Heterogeneous Data

TailinZhou/FedFA • 17 Nov 2022

FedFA introduces feature anchors that let client models be updated in a shared feature space with consistent classifiers during local training.

Expeditious Saliency-guided Mix-up through Random Gradient Thresholding

minhlong94/random-mixup • 9 Dec 2022

Mix-up training approaches have proven to be effective in improving the generalization ability of Deep Neural Networks.
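
For reference, a minimal sketch of the plain mix-up interpolation that saliency-guided variants such as this one build on (names are illustrative; the paper's contribution is how the mixing is guided, which is not shown here):

```python
import numpy as np
import torch

def mixup_batch(x, y_onehot, alpha=1.0):
    """Plain mix-up: convexly combine random pairs of inputs and their (float) one-hot labels."""
    lam = float(np.random.beta(alpha, alpha))  # mixing coefficient drawn from Beta(alpha, alpha)
    perm = torch.randperm(x.size(0))           # random pairing within the batch
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mixed, y_mixed
```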

No Fear of Classifier Biases: Neural Collapse Inspired Federated Learning with Synthetic and Fixed Classifier

zexilee/iccv-2023-fedetf • ICCV 2023

Recent advances in neural collapse have shown that, under perfect training scenarios, the classifiers and feature prototypes collapse into an optimal structure called a simplex equiangular tight frame (ETF).
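
A K-class simplex ETF is a set of K unit vectors whose pairwise inner products all equal -1/(K-1), i.e. the class directions are spread as far apart as possible. A small NumPy sketch of one standard construction (illustrative; not necessarily the construction used in the paper's code):

```python
import numpy as np

def simplex_etf(num_classes, dim):
    """One standard construction of a (dim x num_classes) simplex ETF:
    unit-norm columns with pairwise inner products equal to -1/(num_classes - 1)."""
    K = num_classes
    assert dim >= K, "this simple construction assumes dim >= num_classes"
    U, _ = np.linalg.qr(np.random.randn(dim, K))  # orthonormal columns via reduced QR
    # Subtract the mean column and rescale so every column has unit norm.
    return np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)
```

In the paper's setting, such a matrix can serve as a fixed, non-trainable classifier shared across clients, so local training only needs to align features to it.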