ResNet

Last updated on Feb 14, 2021

resnet18

Parameters 12 Million
FLOPs 2 Billion
File Size 44.66 MB
Training Data ImageNet

Architecture 1x1 Convolution, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax
ID resnet18
Crop Pct 0.875
Image Size 224
Interpolation bilinear
resnet26

Parameters 16 Million
FLOPs 3 Billion
File Size 61.16 MB
Training Data ImageNet

Architecture 1x1 Convolution, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax
ID resnet26
Crop Pct 0.875
Image Size 224
Interpolation bicubic
resnet34

Parameters 22 Million
FLOPs 5 Billion
File Size 83.25 MB
Training Data ImageNet

Architecture 1x1 Convolution, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax
ID resnet34
Crop Pct 0.875
Image Size 224
Interpolation bilinear
resnet50

Parameters 26 Million
FLOPs 5 Billion
File Size 97.74 MB
Training Data ImageNet

Architecture 1x1 Convolution, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax
ID resnet50
Crop Pct 0.875
Image Size 224
Interpolation bicubic
resnetblur50

Parameters 26 Million
FLOPs 7 Billion
File Size 97.74 MB
Training Data ImageNet

Architecture 1x1 Convolution, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax, Blur Pooling
ID resnetblur50
Crop Pct 0.875
Image Size 224
Interpolation bicubic
tv_resnet101

Parameters 45 Million
FLOPs 10 Billion
File Size 170.45 MB
Training Data ImageNet

Training Techniques SGD with Momentum, Weight Decay
Architecture 1x1 Convolution, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax
ID tv_resnet101
LR 0.1
Epochs 90
Crop Pct 0.875
LR Gamma 0.1
Momentum 0.9
Batch Size 32
Image Size 224
LR Step Size 30
Weight Decay 0.0001
Interpolation bilinear
tv_resnet152

Parameters 60 Million
FLOPs 15 Billion
File Size 230.34 MB
Training Data ImageNet

Training Techniques SGD with Momentum, Weight Decay
Architecture 1x1 Convolution, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax
ID tv_resnet152
LR 0.1
Epochs 90
Crop Pct 0.875
LR Gamma 0.1
Momentum 0.9
Batch Size 32
Image Size 224
LR Step Size 30
Weight Decay 0.0001
Interpolation bilinear
tv_resnet34

Parameters 22 Million
FLOPs 5 Billion
File Size 83.26 MB
Training Data ImageNet

Training Techniques SGD with Momentum, Weight Decay
Architecture 1x1 Convolution, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax
ID tv_resnet34
LR 0.1
Epochs 90
Crop Pct 0.875
LR Gamma 0.1
Momentum 0.9
Batch Size 32
Image Size 224
LR Step Size 30
Weight Decay 0.0001
Interpolation bilinear
tv_resnet50

Parameters 26 Million
FLOPs 5 Billion
File Size 97.75 MB
Training Data ImageNet

Training Techniques SGD with Momentum, Weight Decay
Architecture 1x1 Convolution, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax
ID tv_resnet50
LR 0.1
Epochs 90
Crop Pct 0.875
LR Gamma 0.1
Momentum 0.9
Batch Size 32
Image Size 224
LR Step Size 30
Weight Decay 0.0001
Interpolation bilinear

Summary

Residual Networks, or ResNets, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Rather than hoping that each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. Residual blocks are stacked on top of each other to form the network: a ResNet-50, for example, has fifty layers built from these blocks.
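The residual formulation, output = F(x) + x, can be sketched minimally in pure Python. Here a toy function stands in for the stacked weight layers (conv, batch norm, ReLU in a real ResNet); the names are illustrative, not timm's implementation:

```python
def residual_block(x, f):
    """Apply a residual block: output = f(x) plus the identity shortcut.

    f stands in for the learned residual function; the elementwise
    addition is the residual (skip) connection.
    """
    return [fi + xi for fi, xi in zip(f(x), x)]

# Toy "residual function": in a real network this is learned.
double = lambda x: [2.0 * v for v in x]

print(residual_block([1.0, 2.0, 3.0], double))  # [3.0, 6.0, 9.0]: 2x + x
```

Note that if f learns to output zeros, the block reduces to the identity mapping, which is exactly what makes very deep stacks trainable.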

How do I load this model?

To load a pretrained model:

```python
import timm

# Create the model with pretrained ImageNet weights and
# switch to inference mode.
m = timm.create_model('resnet18', pretrained=True)
m.eval()
```

Replace the model name with the variant you want to use, e.g. resnet18. You can find the IDs in the model summaries at the top of this page.
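The Crop Pct and Image Size fields above describe the standard evaluation preprocessing: the shorter image side is resized to image_size / crop_pct, then a centered image_size x image_size patch is cropped. A minimal sketch of that arithmetic (pure Python; in practice timm builds the actual transform pipeline from the model's config):

```python
import math

def eval_sizes(image_size, crop_pct):
    """Return (resize_size, crop_size) for center-crop evaluation.

    The shorter side is resized to image_size / crop_pct, then a
    centered image_size x image_size patch is cropped from it.
    """
    resize_size = int(math.floor(image_size / crop_pct))
    return resize_size, image_size

print(eval_sizes(224, 0.875))  # (256, 224): resize to 256, crop 224
```

For the models on this page (image size 224, crop pct 0.875) this gives the familiar resize-to-256, crop-to-224 evaluation protocol.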

How do I train this model?

You can follow the timm recipe scripts to train a model from scratch.
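The tv_* variants above list a classic step schedule: LR 0.1, LR gamma 0.1, LR step size 30, over 90 epochs. Assuming the standard "multiply the learning rate by gamma every step_size epochs" semantics, the decay rule is a one-liner:

```python
def step_lr(base_lr, gamma, step_size, epoch):
    """Learning rate at a given epoch under a step decay schedule."""
    return base_lr * gamma ** (epoch // step_size)

# Schedule from the tv_resnet* cards: 0.1 for epochs 0-29,
# 0.01 for 30-59, 0.001 for 60-89.
for epoch in (0, 30, 60):
    print(epoch, step_lr(0.1, 0.1, 30, epoch))
```

This is the same schedule implemented by a step LR scheduler in most deep learning frameworks.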

Citation

@article{DBLP:journals/corr/HeZRS15,
  author    = {Kaiming He and
               Xiangyu Zhang and
               Shaoqing Ren and
               Jian Sun},
  title     = {Deep Residual Learning for Image Recognition},
  journal   = {CoRR},
  volume    = {abs/1512.03385},
  year      = {2015},
  url       = {http://arxiv.org/abs/1512.03385},
  archivePrefix = {arXiv},
  eprint    = {1512.03385},
  timestamp = {Wed, 17 Apr 2019 17:23:45 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/HeZRS15.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Results

Image Classification on ImageNet

| Model | Top-1 Accuracy | Top-5 Accuracy |
|---|---|---|
| resnetblur50 | 79.29% | 94.64% |
| resnet50 | 79.04% | 94.39% |
| tv_resnet152 | 78.32% | 94.05% |
| tv_resnet101 | 77.37% | 93.56% |
| tv_resnet50 | 76.16% | 92.88% |
| resnet26 | 75.29% | 92.57% |
| resnet34 | 75.11% | 92.28% |
| tv_resnet34 | 73.30% | 91.42% |
| resnet18 | 69.74% | 89.09% |
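The Top-1 and Top-5 columns count a prediction as correct when the true label is, respectively, the single highest-scoring class or among the five highest. A minimal top-k accuracy sketch over raw score lists (illustrative only, not timm's evaluation code):

```python
def topk_accuracy(scores, labels, k=5):
    """Fraction of samples whose true label is among the k highest scores.

    scores: one list of per-class scores per sample; labels: true indices.
    """
    hits = 0
    for row, label in zip(scores, labels):
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
labels = [1, 2]
print(topk_accuracy(scores, labels, k=1))  # 0.5: only the first sample is top-1 correct
print(topk_accuracy(scores, labels, k=3))  # 1.0: both labels fall in the top 3
```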