
Instance-Level Meta Normalization

Introduced by Jia et al. in Instance-Level Meta Normalization

Instance-Level Meta Normalization (ILM-Norm) is a normalization method that addresses the learning-to-normalize problem: it learns to predict its normalization parameters through both the feature feed-forward and the gradient back-propagation paths. An auto-encoder predicts the weight $\omega$ and bias $\beta$ used as rescaling parameters for recovering the distribution of the feature-map tensor $x$. Instead of taking the entire tensor $x$ as input, the auto-encoder takes only the mean $\mu$ and variance $\sigma^2$ of $x$, which characterize its statistics.

Source: Instance-Level Meta Normalization
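
The snippet below is a minimal PyTorch-style sketch of this idea, not the authors' reference implementation: the module name `ILMNormSketch`, the hidden size, and the `tanh` activation are illustrative assumptions. Per instance and channel, it computes the mean and variance of $x$, feeds them through a small auto-encoder to predict $\omega$ and $\beta$, and uses those to rescale the standardized features.

```python
import torch
import torch.nn as nn


class ILMNormSketch(nn.Module):
    """Minimal sketch of an ILM-Norm-style layer (illustrative, not the paper's exact design).

    The per-instance, per-channel statistics (mean, variance) are passed through a
    small auto-encoder that predicts the rescaling parameters omega (weight) and
    beta (bias) used to recover the feature distribution.
    """

    def __init__(self, num_channels: int, hidden_dim: int = 16, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        # Auto-encoder over the channel-wise statistics; `hidden_dim` is an
        # illustrative choice, not the paper's setting.
        self.encoder = nn.Linear(2 * num_channels, hidden_dim)
        self.decoder_omega = nn.Linear(hidden_dim, num_channels)
        self.decoder_beta = nn.Linear(hidden_dim, num_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W)
        n, c = x.shape[:2]
        mu = x.mean(dim=(2, 3))                   # per-instance, per-channel mean (N, C)
        var = x.var(dim=(2, 3), unbiased=False)   # per-instance, per-channel variance (N, C)

        # Predict the rescaling parameters from the statistics, not the full tensor.
        stats = torch.cat([mu, var], dim=1)       # (N, 2C)
        code = torch.tanh(self.encoder(stats))    # latent code of the auto-encoder
        omega = self.decoder_omega(code)          # (N, C) predicted weight
        beta = self.decoder_beta(code)            # (N, C) predicted bias

        # Standardize x with its own statistics, then rescale with the predictions.
        x_hat = (x - mu.view(n, c, 1, 1)) / torch.sqrt(var.view(n, c, 1, 1) + self.eps)
        return omega.view(n, c, 1, 1) * x_hat + beta.view(n, c, 1, 1)


# Usage: output has the same shape as the input.
layer = ILMNormSketch(num_channels=64)
out = layer(torch.randn(8, 64, 32, 32))
```

Feeding only the statistics into the auto-encoder, rather than the full feature tensor, keeps the prediction network small while still making the rescaling parameters instance-dependent.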

Categories

Normalization