Regularization

Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in deep learning. Below is a list of regularization strategies, with the year each was introduced and the number of papers that use it.
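To make the idea concrete before the list, here is a minimal NumPy sketch of two of the most common entries below: inverted dropout and weight decay as an L2 penalty. This is an illustrative sketch, not taken from any particular library; the function names and the choice of `p` and `lam` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, training=True):
    """Inverted dropout: zero each unit with probability p at train time,
    scaling survivors by 1/(1-p) so the expected activation is unchanged.
    At test time the input passes through untouched."""
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def l2_penalty(weights, lam=1e-4):
    """Weight decay as an L2 penalty added to the loss: (lam/2) * sum ||w||^2.
    lam is an illustrative hyperparameter, not a recommended value."""
    return 0.5 * lam * sum(np.sum(w ** 2) for w in weights)

# Train-time call zeroes roughly half the units and doubles the rest;
# the test-time call is the identity.
activations = dropout(np.ones((4, 8)), p=0.5, training=True)
penalty = l2_penalty([np.ones((2, 2))], lam=1e-4)
```

Note that the penalty-on-the-loss formulation and "decoupled" weight decay (shrinking the weights directly in the update step) coincide for plain SGD but differ for adaptive optimizers.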

Method                                  Year  Papers
Dropout                                 2014    3790
Weight Decay                            1943    1462
Label Smoothing                         1985    1265
Attention Dropout                       2018    1234
Entropy Regularization                  2016     147
Early Stopping                          1995     111
Variational Dropout                     2015      65
DropConnect                             2013      48
R1 Regularization                       2018      43
Embedding Dropout                       2015      31
L1 Regularization                       1986      30
Off-Diagonal Orthogonal Regularization  2018      29
Temporal Activation Regularization      2017      28
Activation Regularization               2017      28
DropBlock                               2018      21
Target Policy Smoothing                 2018      17
SpatialDropout                          2014      17
GAN Feature Matching                    2016      14
Stochastic Depth                        2016      12
Zoneout                                 2016       9
Path Length Regularization              2019       8
Orthogonal Regularization               2016       8
Manifold Mixup                          2018       7
Shake-Shake Regularization              2017       5
DropPath                                2016       5
ShakeDrop                               2018       3
Adaptive Dropout                        2013       3
ScheduledDropPath                       2017       2
Euclidean Norm Regularization           2019       2
Recurrent Dropout                       2016       2
Discriminative Regularization           2016       1
Auxiliary Batch Normalization           2019       1
SRN                                     2019       1
rnnDrop                                 2015       0
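As one more worked example from the list above, label smoothing replaces a hard one-hot target with a mixture of the one-hot vector and a uniform distribution over classes, which discourages the model from becoming overconfident. A minimal NumPy sketch, with `eps=0.1` as an assumed (commonly used, not prescribed) smoothing strength:

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """Label smoothing: mix one-hot targets with a uniform distribution.
    The true class gets 1 - eps + eps/K; every other class gets eps/K,
    so each target row still sums to 1."""
    one_hot = np.eye(num_classes)[labels]
    return one_hot * (1.0 - eps) + eps / num_classes

# With 4 classes and eps=0.1, the true class gets 0.925 and the rest 0.025 each.
targets = smooth_labels(np.array([2, 0]), num_classes=4, eps=0.1)
```

The smoothed targets are then used in place of the one-hot labels inside the usual cross-entropy loss.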