Non-monotonically Triggered ASGD

Introduced by Merity et al. in Regularizing and Optimizing LSTM Language Models

NT-ASGD, or Non-monotonically Triggered ASGD, is an averaged stochastic gradient descent technique.

In regular ASGD, we take steps identical to regular SGD, but instead of returning the last iterate as the solution, we return $\frac{1}{K-T+1}\sum_{i=T}^{K}w_{i}$, where $K$ is the total number of iterations and $T < K$ is a user-specified averaging trigger.
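
A minimal sketch of that averaging scheme in NumPy. The function name, `grad_fn` (a stand-in for a stochastic minibatch gradient), the learning rate, and the step counts are illustrative assumptions, not part of the paper:

```python
import numpy as np

def asgd(grad_fn, w0, lr=0.1, K=1000, T=500):
    """Take plain SGD steps, but return the uniform average of the
    iterates w_T, ..., w_K instead of the final iterate."""
    w = np.asarray(w0, dtype=float).copy()
    running_sum = np.zeros_like(w)
    for k in range(K + 1):
        if k >= T:
            running_sum += w          # accumulate w_T, ..., w_K
        w = w - lr * grad_fn(w)       # ordinary SGD update
    return running_sum / (K - T + 1)  # average over K - T + 1 iterates
```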

NT-ASGD has a non-monotonic criterion that conservatively triggers the averaging when the validation metric fails to improve for multiple cycles. Given that the choice of triggering is irreversible, this conservatism ensures that the randomness of training does not play a major role in the decision.
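
A sketch of that trigger, following Algorithm 1 of the paper: averaging starts once the newest validation value (lower is better, e.g. perplexity) is worse than the best value recorded more than $n$ checks ago. The helper name is illustrative; the paper sets the non-monotone interval to $n = 5$.

```python
def should_trigger(val_history, n=5):
    """Non-monotonic trigger: `val_history` holds the validation metric
    at each evaluation so far (lower is better). Returns True once the
    newest value fails to beat the best value from more than n checks ago."""
    t = len(val_history) - 1
    if t <= n:
        return False                  # too few checks to judge yet
    return val_history[-1] > min(val_history[: t - n])
```

Once the trigger fires, the decision is permanent and training proceeds with iterate averaging for the remainder of the run; in the authors' released AWD-LSTM code this is implemented by switching the optimizer to `torch.optim.ASGD`.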

Source: Regularizing and Optimizing LSTM Language Models
