Snapshot Ensembles: Train 1, get M for free

Introduced by Huang et al. in Snapshot Ensembles: Train 1, get M for free

Training multiple deep neural networks carries a high overhead in training time, hardware, and computational resources, which often stands in the way of building deep ensembles. To overcome this barrier, Huang et al. proposed a method that, at the cost of training a single model, yields multiple constituent model snapshots that can be ensembled together into a strong learner. The core idea is to make the model converge to several local minima along its optimization path and to save the model parameters at each of these points; the paper achieves this with a cyclic cosine-annealing learning rate schedule that repeatedly anneals the learning rate toward zero, letting the model settle into a minimum, and then restarts it at a large value so the model escapes and travels to a different minimum. During training, a neural network traverses many such points; the lowest of all the local minima is the global minimum. The larger the model, the more parameters it has and the more local minima its loss surface contains. Each of these minima corresponds to a distinct set of weights and biases at which the model makes relatively few errors, so every such minimum can be treated as a weak but useful learner for the problem being solved. Multiple such snapshots of the weights and biases are recorded and later ensembled to obtain a better-generalized model that makes fewer mistakes than any single snapshot.

Source: Snapshot Ensembles: Train 1, get M for free
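
To make the procedure concrete, below is a minimal PyTorch sketch of snapshot training and snapshot ensembling, assuming a standard classification model and DataLoader. The function names (`cyclic_lr`, `train_snapshots`, `ensemble_predict`) and the default hyperparameters are illustrative choices, not taken from the authors' released code.

```python
import copy
import math

import torch
import torch.nn.functional as F


def cyclic_lr(alpha0, step, steps_per_cycle):
    """Cosine-annealed learning rate, restarted at the start of each cycle."""
    t = step % steps_per_cycle
    return alpha0 / 2 * (math.cos(math.pi * t / steps_per_cycle) + 1)


def train_snapshots(model, loader, epochs=300, n_cycles=6, alpha0=0.1):
    """Train for `epochs` epochs split into `n_cycles` cosine-annealing
    cycles; return one weight snapshot per cycle (hyperparameters here
    are illustrative defaults)."""
    optimizer = torch.optim.SGD(model.parameters(), lr=alpha0, momentum=0.9)
    steps_per_cycle = (epochs // n_cycles) * len(loader)
    snapshots, step = [], 0
    for epoch in range(epochs):
        for x, y in loader:
            # Update the learning rate per iteration along the cosine cycle.
            for group in optimizer.param_groups:
                group["lr"] = cyclic_lr(alpha0, step, steps_per_cycle)
            optimizer.zero_grad()
            F.cross_entropy(model(x), y).backward()
            optimizer.step()
            step += 1
        # The learning rate is near zero at the end of each cycle, where the
        # model has settled into a local minimum: record a snapshot there.
        if (epoch + 1) % (epochs // n_cycles) == 0:
            snapshots.append(copy.deepcopy(model.state_dict()))
    return snapshots


def ensemble_predict(model, snapshots, x, last_m=None):
    """Average the softmax outputs of the last `last_m` snapshots."""
    chosen = snapshots[-last_m:] if last_m else snapshots
    probs = []
    with torch.no_grad():
        for state in chosen:
            model.load_state_dict(state)
            model.eval()
            probs.append(F.softmax(model(x), dim=1))
    return torch.stack(probs).mean(dim=0)
```

At test time the paper averages the softmax outputs of the last m snapshots, the most converged ones, which is the recipe `ensemble_predict` follows.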

Tasks


Task                           Papers   Share
Ensemble Learning                   2  28.57%
Medical Image Classification        1  14.29%
Inference Attack                    1  14.29%
Membership Inference Attack         1  14.29%
Clustering                          1  14.29%
Clustering Ensemble                 1  14.29%

