EXACT: How to Train Your Accuracy

19 May 2022 · Ivan Karpukhin, Stanislav Dereka, Sergey Kolesnikov

Classification tasks are usually evaluated in terms of accuracy. However, accuracy is discontinuous and cannot be directly optimized using gradient ascent. Popular methods instead minimize cross-entropy, hinge loss, or other surrogate losses, which can lead to suboptimal results. In this paper, we propose a new optimization framework that introduces stochasticity into a model's output and optimizes expected accuracy, i.e., the accuracy of the stochastic model. Extensive experiments on linear models and deep image classification show that the proposed optimization method is a powerful alternative to widely used classification losses.
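The core idea of smoothing accuracy through output stochasticity can be illustrated with a minimal sketch (this is an illustration of the general principle, not the authors' exact algorithm): perturb the model's scores with Gaussian noise, and the probability that the noisy argmax equals the true label becomes a smooth function of the scores. For two classes it even has a closed form via the Gaussian CDF; the noise scale `sigma` below is an assumed hyperparameter.

```python
import numpy as np
from math import erf, sqrt

def expected_accuracy_mc(logits, label, sigma=1.0, n_samples=20000, rng=None):
    """Monte Carlo estimate of expected accuracy under Gaussian output noise.

    Each logit is perturbed independently by N(0, sigma^2); expected accuracy
    is the fraction of noisy samples whose argmax equals the true label.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    logits = np.asarray(logits, dtype=float)
    noise = rng.normal(scale=sigma, size=(n_samples,) + logits.shape)
    preds = (logits[None] + noise).argmax(axis=-1)
    return float((preds == label).mean())

def expected_accuracy_binary(margin, sigma=1.0):
    """Closed form for two classes with independent noise on each logit.

    If margin = s_correct - s_other, the noise difference is N(0, 2*sigma^2),
    so P(correct) = Phi(margin / (sigma * sqrt(2))) -- smooth in the margin,
    hence amenable to gradient ascent, unlike the 0/1 accuracy itself.
    """
    return 0.5 * (1.0 + erf(margin / (2.0 * sigma)))
```

Note that at margin 0 the expected accuracy is exactly 0.5, and it approaches 1 smoothly as the margin grows, whereas plain accuracy would jump discontinuously.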


Results from the Paper


Ranked #2 on Image Classification on SVHN (Percentage correct metric)

Task                   Dataset    Model       Metric              Value   Global Rank
Image Classification   CIFAR-10   PyramidNet  Percentage correct  97.14   #85
Image Classification   CIFAR-100  PyramidNet  Percentage correct  85.6    #60
Image Classification   MNIST      PyramidNet  Accuracy            99.66   #14
Image Classification   SVHN       PyramidNet  Percentage correct  97.88   #2

Methods


No methods listed for this paper.