Residual Network

Introduced by He et al. in Deep Residual Learning for Image Recognition

Residual Networks, or ResNets, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. These residual blocks are stacked on top of each other to form the network: a ResNet-50, for example, has fifty layers built from such blocks.

Formally, denoting the desired underlying mapping as $\mathcal{H}(x)$, we let the stacked nonlinear layers fit another mapping of $\mathcal{F}(x):=\mathcal{H}(x)-x$. The original mapping is recast into $\mathcal{F}(x)+x$.
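As a concrete illustration, here is a minimal sketch of one such residual block in PyTorch (a sketch under assumptions: this is the basic two-convolution variant, the channel count is illustrative, and the identity shortcut assumes the input and output shapes match; it is not the paper's reference implementation):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic residual block: the stacked layers learn the residual
    F(x) = H(x) - x, and the identity shortcut adds x back, so the
    block as a whole computes H(x) = F(x) + x."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                              # shortcut carries the input unchanged
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))           # F(x): the learned residual
        out = out + identity                      # recast the mapping as F(x) + x
        return self.relu(out)

# Usage: stacking blocks of this kind forms the body of a ResNet.
x = torch.randn(1, 64, 56, 56)  # hypothetical feature map (N, C, H, W)
body = nn.Sequential(ResidualBlock(64), ResidualBlock(64))
print(body(x).shape)  # torch.Size([1, 64, 56, 56])
```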

There is empirical evidence that networks of this type are easier to optimize and can gain accuracy from considerably increased depth.

Source: Deep Residual Learning for Image Recognition

Tasks


| Task | Papers | Share |
| --- | --- | --- |
| Image Classification | 65 | 10.12% |
| Self-Supervised Learning | 53 | 8.26% |
| Classification | 27 | 4.21% |
| Semantic Segmentation | 22 | 3.43% |
| Object Detection | 16 | 2.49% |
| Quantization | 13 | 2.02% |
| Denoising | 8 | 1.25% |
| Federated Learning | 7 | 1.09% |
| Autonomous Driving | 7 | 1.09% |