Normalization

Conditional Instance Normalization

Introduced by Dumoulin et al. in A Learned Representation For Artistic Style

Conditional Instance Normalization is a normalization technique in which all convolutional weights of a style transfer network are shared across many styles. The goal of the procedure is to transform a layer's activations $x$ into a normalized activation $z$ specific to painting style $s$. Building on instance normalization, the $\gamma$ and $\beta$ parameters are augmented into $N \times C$ matrices, where $N$ is the number of styles being modeled and $C$ is the number of output feature maps. Conditioning on a style is achieved as follows:

$$ z = \gamma_{s}\left(\frac{x - \mu}{\sigma}\right) + \beta_{s}$$

where $\mu$ and $\sigma$ are the mean and standard deviation of $x$ taken across the spatial axes, and $\gamma_{s}$ and $\beta_{s}$ are obtained by selecting the row corresponding to $s$ in the $\gamma$ and $\beta$ matrices. An added benefit of this approach is that a single image can be stylized into $N$ painting styles with one feed-forward pass of the network using a batch size of $N$.
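The following is a minimal sketch of the idea in PyTorch. The class name `ConditionalInstanceNorm2d`, the parameter names, and the small `eps` added for numerical stability are assumptions for illustration, not taken from the paper; the $\gamma$ and $\beta$ matrices are stored as $N \times C$ parameters and a row is selected per style index.

```python
# Hedged sketch of Conditional Instance Normalization, assuming PyTorch and
# NCHW activations. Names (ConditionalInstanceNorm2d, gamma, beta) and the
# eps term are illustrative assumptions, not from the original paper.
import torch
import torch.nn as nn


class ConditionalInstanceNorm2d(nn.Module):
    """Instance normalization whose affine parameters are selected per style."""

    def __init__(self, num_styles: int, num_features: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        # gamma and beta are N x C matrices: one scale/shift row per style.
        self.gamma = nn.Parameter(torch.ones(num_styles, num_features))
        self.beta = nn.Parameter(torch.zeros(num_styles, num_features))

    def forward(self, x: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) activations; style: (B,) integer style indices.
        mu = x.mean(dim=(2, 3), keepdim=True)                     # spatial mean
        sigma = x.std(dim=(2, 3), keepdim=True, unbiased=False)   # spatial std
        normalized = (x - mu) / (sigma + self.eps)
        # Select the row of gamma/beta for each sample's style, then broadcast.
        gamma_s = self.gamma[style].unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        beta_s = self.beta[style].unsqueeze(-1).unsqueeze(-1)
        return gamma_s * normalized + beta_s


# Usage: stylize one image into all N styles in a single forward pass by
# replicating it across the batch and passing style indices 0..N-1.
if __name__ == "__main__":
    num_styles, channels = 4, 16
    cin = ConditionalInstanceNorm2d(num_styles, channels)
    x = torch.randn(1, channels, 32, 32).repeat(num_styles, 1, 1, 1)
    styles = torch.arange(num_styles)
    z = cin(x, styles)
    print(z.shape)  # torch.Size([4, 16, 32, 32])
```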

Source: A Learned Representation For Artistic Style

Papers


Tasks

Task               Papers   Share
Image Restoration  1        50.00%
Style Transfer     1        50.00%

Components


No components found.

Categories