Classifier calibration
14 papers with code • 1 benchmark • 1 dataset
Confidence calibration – the problem of producing probability estimates that reflect the true likelihood of correctness – is important for classification models in many applications. The two most common calibration metrics are Expected Calibration Error (ECE) and Maximum Calibration Error (MCE): ECE is the bin-weighted average gap between confidence and accuracy, while MCE is the largest such gap over the bins.
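Both metrics can be sketched in a few lines: bin predictions by their confidence, compare each bin's mean confidence to its empirical accuracy, then either average the gaps weighted by bin size (ECE) or take the maximum gap (MCE). A minimal sketch, assuming equal-width bins and a `correct` indicator per prediction (function and argument names here are illustrative, not from any of the listed papers):

```python
import numpy as np

def calibration_errors(confidences, correct, n_bins=10):
    """Return (ECE, MCE) from per-example confidences and 0/1 correctness.

    ECE = sum over bins of (bin_size / n) * |accuracy - mean confidence|
    MCE = max over bins of |accuracy - mean confidence|
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece, mce = 0.0, 0.0
    for i in range(n_bins):
        lo, hi = edges[i], edges[i + 1]
        # half-open bins (lo, hi]; the first bin also includes 0.0
        mask = (confidences > lo) & (confidences <= hi)
        if i == 0:
            mask |= confidences == 0.0
        if not mask.any():
            continue
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += (mask.sum() / n) * gap
        mce = max(mce, gap)
    return ece, mce
```

For example, a model that is 95% confident on every prediction but always right has a single populated bin with a 0.05 gap, so both ECE and MCE come out to 0.05.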
Most implemented papers
Packed-Ensembles for Efficient Uncertainty Estimation
Deep Ensembles (DE) are a prominent approach for achieving excellent performance on key metrics such as accuracy, calibration, uncertainty estimation, and out-of-distribution detection.
FedFA: Federated Learning with Feature Anchors to Align Features and Classifiers for Heterogeneous Data
This enables client models to be updated in a shared feature space with consistent classifiers during local training.
Expeditious Saliency-guided Mix-up through Random Gradient Thresholding
Mix-up training approaches have proven to be effective in improving the generalization ability of Deep Neural Networks.
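The core mix-up operation is simple: draw a mixing coefficient from a Beta distribution and form convex combinations of pairs of inputs and their one-hot labels. A minimal NumPy sketch of vanilla mix-up (not the saliency-guided variant this paper proposes; names are illustrative):

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    """Mix each example with a randomly permuted partner.

    x'_i = lam * x_i + (1 - lam) * x_perm(i), and likewise for the labels.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))        # random partner for each example
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
```

Because the mixed labels remain valid probability distributions, training on them tends to produce softer, better-calibrated predictions than one-hot targets alone.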
No Fear of Classifier Biases: Neural Collapse Inspired Federated Learning with Synthetic and Fixed Classifier
Recent advances in neural collapse have shown that, under perfect training scenarios, the classifiers and feature prototypes collapse into an optimal structure called a simplex equiangular tight frame (ETF).