Temporal Activation Regularization

Introduced by Merity et al. in Revisiting Activation Regularization for Language RNNs

Temporal Activation Regularization (TAR) is a type of slowness regularization for RNNs that penalizes large differences between the hidden states at adjacent timesteps, encouraging the hidden state to change slowly over time. Formally, we minimize:

$$\beta \, L_{2}\left(h_{t} - h_{t+1}\right)$$

where $L_{2}$ is the $L_{2}$ norm (i.e. $L_{2}(x) = \lVert x \rVert_{2}$), $h_{t}$ is the output of the RNN at timestep $t$, and $\beta$ is a scaling coefficient that controls the strength of the penalty.
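As a concrete illustration, here is a minimal PyTorch sketch of adding the TAR penalty to a training objective. The tensor shapes, the $\beta$ value, and the variable names are illustrative assumptions rather than details from the paper:

```python
import torch
import torch.nn as nn

# Illustrative shapes and hyperparameters (assumptions, not from the paper).
batch_size, seq_len, input_dim, hidden_dim = 32, 35, 100, 256
beta = 1.0  # TAR scaling coefficient; a tunable hyperparameter

rnn = nn.LSTM(input_dim, hidden_dim, batch_first=True)
x = torch.randn(batch_size, seq_len, input_dim)

output, _ = rnn(x)  # output has shape (batch, seq_len, hidden_dim)

# Penalize differences between consecutive outputs h_t and h_{t+1}.
# The L2 penalty is computed here as a mean of squared differences,
# as is common in practice (e.g., in the AWD-LSTM training code).
tar_loss = beta * (output[:, 1:] - output[:, :-1]).pow(2).mean()

# total_loss = task_loss + tar_loss  # added to the main training objective
```

TAR is typically combined with plain activation regularization (AR), which instead penalizes the magnitude $L_{2}(h_{t})$ of the activations themselves.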

Source: Revisiting Activation Regularization for Language RNNs

Tasks


| Task | Papers | Share |
|------|--------|-------|
| Language Modelling | 20 | 18.02% |
| General Classification | 14 | 12.61% |
| Text Classification | 13 | 11.71% |
| Classification | 8 | 7.21% |
| Sentiment Analysis | 8 | 7.21% |
| Language Identification | 4 | 3.60% |
| Translation | 4 | 3.60% |
| Hate Speech Detection | 3 | 2.70% |
| Sentence | 3 | 2.70% |

Categories

Regularization