Regularization

Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below is a continuously updated list of regularization strategies.
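As a concrete illustration of trading training error for test error, here is a minimal sketch (an assumed example, not taken from any listed paper) of L2 weight decay on a linear model: the penalty λ‖w‖² is added to the training loss, which shrinks the weights and typically improves generalization.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_w = np.array([1.0, -2.0, 0.0, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=50)

def fit(lam, steps=2000, lr=0.05):
    """Gradient descent on MSE + lam * ||w||^2 (hypothetical helper)."""
    w = np.zeros(5)
    n = len(y)
    for _ in range(steps):
        # Data gradient plus the weight-decay term 2 * lam * w.
        grad = 2 * X.T @ (X @ w - y) / n + 2 * lam * w
        w -= lr * grad
    return w

w_plain = fit(lam=0.0)   # no regularization
w_decay = fit(lam=0.5)   # with weight decay

# The regularized solution has a smaller weight norm.
print(np.linalg.norm(w_decay) < np.linalg.norm(w_plain))
```

With a large enough λ the fit deliberately underperforms on the training set, which is the expense-of-training-error trade-off described above.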

| Method | Year | Papers |
| --- | --- | --- |
| Dropout | 2014 | 4318 |
| Weight Decay | 1943 | 1756 |
| Attention Dropout | 2018 | 1533 |
| Label Smoothing | 1985 | 1427 |
| Entropy Regularization | 2016 | 174 |
| Early Stopping | 1995 | 121 |
| Variational Dropout | 2015 | 74 |
| R1 Regularization | 2018 | 53 |
| DropConnect | 2013 | 52 |
| Off-Diagonal Orthogonal Regularization | 2018 | 38 |
| Embedding Dropout | 2015 | 36 |
| Temporal Activation Regularization | 2017 | 32 |
| Activation Regularization | 2017 | 32 |
| L1 Regularization | 1986 | 32 |
| DropBlock | 2018 | 26 |
| Target Policy Smoothing | 2018 | 18 |
| SpatialDropout | 2014 | 17 |
| GAN Feature Matching | 2016 | 14 |
| Path Length Regularization | 2019 | 13 |
| Zoneout | 2016 | 13 |
| Stochastic Depth | 2016 | 12 |
| Manifold Mixup | 2018 | 9 |
| Orthogonal Regularization | 2016 | 9 |
| Shake-Shake Regularization | 2017 | 5 |
| DropPath | 2016 | 5 |
| ShakeDrop | 2018 | 3 |
| Euclidean Norm Regularization | 2019 | 3 |
| Adaptive Dropout | 2013 | 3 |
| Auxiliary Batch Normalization | 2019 | 2 |
| ScheduledDropPath | 2017 | 2 |
| Recurrent Dropout | 2016 | 2 |
| Discriminative Regularization | 2016 | 1 |
| LayerDrop | 2019 | 1 |
| Fraternal Dropout | 2017 | 1 |
| SRN | 2019 | 1 |
| rnnDrop | 2015 | 0 |
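To make the most-cited entry in the list concrete, here is a minimal sketch (an assumed illustration, not the reference implementation) of inverted dropout: at training time each activation is zeroed with probability p and the survivors are rescaled by 1/(1 - p), so the expected activation matches test time, where dropout is simply disabled.

```python
import numpy as np

def dropout(x, p, rng, training=True):
    """Inverted dropout on an activation array (hypothetical helper)."""
    if not training or p == 0.0:
        return x                          # identity at test time
    mask = rng.random(x.shape) >= p       # keep each unit with prob. 1 - p
    return x * mask / (1.0 - p)           # rescale survivors to preserve the mean

rng = np.random.default_rng(0)
x = np.ones((4, 8))
out = dropout(x, p=0.5, rng=rng)
# Surviving units are scaled up to 2.0, dropped units are 0.0.
print(sorted(set(out.flatten())))
```

Most of the dropout variants in the table (SpatialDropout, DropBlock, Variational Dropout, Zoneout, ...) change *which* units share a mask or *how* the mask is sampled, but keep this same keep-and-rescale principle.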