Skip Connection Blocks

Reversible Residual Block

Introduced by Gomez et al. in The Reversible Residual Network: Backpropagation Without Storing Activations

Reversible Residual Blocks are skip-connection blocks that learn reversible residual functions with reference to the layer inputs. They were proposed as part of the RevNet CNN architecture. The units in each layer are partitioned into two groups, denoted $x_{1}$ and $x_{2}$; the authors find that partitioning the channels works best. Each reversible block takes inputs $\left(x_{1}, x_{2}\right)$ and produces outputs $\left(y_{1}, y_{2}\right)$ according to the following additive coupling rules, inspired by the transformation in NICE (non-linear independent components estimation), with residual functions $F$ and $G$ analogous to those in standard ResNets:

$$y_{1} = x_{1} + F\left(x_{2}\right)$$ $$y_{2} = x_{2} + G\left(y_{1}\right)$$
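
A minimal PyTorch sketch of this forward coupling is shown below. The class name `ReversibleBlock`, the channel partition via `torch.chunk`, and the single-convolution choices for $F$ and $G$ are illustrative assumptions, not the paper's exact implementation:

```python
import torch
import torch.nn as nn


class ReversibleBlock(nn.Module):
    """Sketch of an additive-coupling reversible block (assumed structure).

    F and G are illustrative single-conv residual functions; in RevNets they
    would be full residual sub-networks.
    """

    def __init__(self, channels_per_half):
        super().__init__()
        self.F = nn.Sequential(
            nn.Conv2d(channels_per_half, channels_per_half, 3, padding=1),
            nn.ReLU(),
        )
        self.G = nn.Sequential(
            nn.Conv2d(channels_per_half, channels_per_half, 3, padding=1),
            nn.ReLU(),
        )

    def forward(self, x):
        # Partition the channels into two halves (x1, x2), as described above.
        x1, x2 = torch.chunk(x, 2, dim=1)
        # Additive coupling: y1 = x1 + F(x2), then y2 = x2 + G(y1).
        y1 = x1 + self.F(x2)
        y2 = x2 + self.G(y1)
        return torch.cat([y1, y2], dim=1)
```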

Each layer’s activations can be reconstructed from the next layer’s activations as follows:

$$x_{2} = y_{2} - G\left(y_{1}\right)$$ $$x_{1} = y_{1} - F\left(x_{2}\right)$$
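
The sketch below illustrates this inversion and checks the round trip numerically. The helper names `couple`/`uncouple` and the choice of `tanh`/`relu` as stand-ins for $F$ and $G$ are assumptions for illustration only; any deterministic $F$ and $G$ work:

```python
import torch


def couple(x1, x2, F, G):
    # Forward additive coupling: y1 = x1 + F(x2), y2 = x2 + G(y1).
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2


def uncouple(y1, y2, F, G):
    # Inverse: recover x2 first (it only needs y1), then x1.
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2


# Round-trip check with arbitrary deterministic residual functions.
F, G = torch.tanh, torch.relu
x1 = torch.randn(2, 16, 8, 8)
x2 = torch.randn(2, 16, 8, 8)
y1, y2 = couple(x1, x2, F, G)
r1, r2 = uncouple(y1, y2, F, G)
assert torch.allclose(r1, x1, atol=1e-6)
assert torch.allclose(r2, x2, atol=1e-6)
```

This exact invertibility is what lets RevNets recompute activations during the backward pass instead of storing them.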

Source: The Reversible Residual Network: Backpropagation Without Storing Activations
