Normalization

Weight Normalization is a normalization method for training neural networks. It is inspired by batch normalization, but it is a deterministic method that does not share batch normalization's property of adding noise to the gradients. It reparameterizes each $k$-dimensional weight vector $\textbf{w}$ in terms of a parameter vector $\textbf{v}$ and a scalar parameter $g$, and performs stochastic gradient descent with respect to those parameters instead. Weight vectors are expressed in terms of the new parameters using:

$$ \textbf{w} = \frac{g}{\Vert\textbf{v}\Vert}\textbf{v}$$

where $\textbf{v}$ is a $k$-dimensional vector, $g$ is a scalar, and $\Vert\textbf{v}\Vert$ denotes the Euclidean norm of $\textbf{v}$. This reparameterization has the effect of fixing the Euclidean norm of the weight vector $\textbf{w}$: we now have $\Vert\textbf{w}\Vert = g$, independent of the parameters $\textbf{v}$.
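
As a quick illustration of the reparameterization, here is a minimal NumPy sketch (the vector size, seed, and the helper name `weight_norm` are illustrative choices, not taken from the paper):

```python
import numpy as np

def weight_norm(v, g):
    """Reparameterized weight: w = g * v / ||v||, so that ||w|| equals g."""
    return g * v / np.linalg.norm(v)

# Illustrative values: a 5-dimensional direction vector v and a scalar scale g.
rng = np.random.default_rng(0)
v = rng.standard_normal(5)
g = 2.0

w = weight_norm(v, g)
print(np.linalg.norm(w))  # ~2.0: the norm of w equals g, independent of v
```

In practice, deep learning frameworks ship this as a built-in; for example, PyTorch's `torch.nn.utils.weight_norm` applies the same $\textbf{v}$/$g$ reparameterization to a module's weight parameter and lets autograd handle the gradients with respect to $\textbf{v}$ and $g$.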

Source: Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks

Tasks

Task                         Papers   Share
Speech Synthesis                 12   12.37%
Quantization                      6    6.19%
Image Classification              6    6.19%
Image Generation                  5    5.15%
General Classification            4    4.12%
Decoder                           3    3.09%
Model Compression                 3    3.09%
BIG-bench Machine Learning        3    3.09%
Translation                       3    3.09%
