ID | Architecture | Training Techniques |
---|---|---|
resnet18 | 1x1 Convolution, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax | |
resnet26 | 1x1 Convolution, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax | |
resnet34 | 1x1 Convolution, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax | |
resnet50 | 1x1 Convolution, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax | |
resnetblur50 | 1x1 Convolution, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax, Blur Pooling | |
tv_resnet101 | 1x1 Convolution, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax | SGD with Momentum, Weight Decay |
tv_resnet152 | 1x1 Convolution, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax | SGD with Momentum, Weight Decay |
tv_resnet34 | 1x1 Convolution, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax | SGD with Momentum, Weight Decay |
tv_resnet50 | 1x1 Convolution, Bottleneck Residual Block, Batch Normalization, Convolution, Global Average Pooling, Residual Block, Residual Connection, ReLU, Max Pooling, Softmax | SGD with Momentum, Weight Decay |
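The model IDs above can also be listed programmatically from timm's model registry (a small sketch; the wildcard patterns shown are just examples that match the variants on this page):

import timm

# List ResNet variants that ship with pretrained weights
print(timm.list_models('resnet*', pretrained=True))
print(timm.list_models('tv_resnet*', pretrained=True))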
Residual Networks, or ResNets, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping H(x), residual nets let these layers fit a residual mapping F(x) := H(x) - x, so the original mapping is recovered as F(x) + x via a shortcut connection. Residual blocks are stacked on top of each other to form the network: e.g., a ResNet-50 has fifty layers using these blocks.
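As a rough illustration of a basic residual block (a minimal PyTorch sketch, not timm's actual implementation; the class name is invented for this example):

import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    """Illustrative residual block: out = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        identity = x                           # shortcut (residual) connection
        out = F.relu(self.bn1(self.conv1(x)))  # first conv-BN-ReLU
        out = self.bn2(self.conv2(out))        # second conv-BN, no activation yet
        return F.relu(out + identity)          # add shortcut, then activate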
To load a pretrained model:
import timm

# Create a ResNet-18 with pretrained ImageNet weights
m = timm.create_model('resnet18', pretrained=True)
m.eval()  # switch to inference mode (affects batch norm and dropout behavior)
Replace the model name with the variant you want to use, e.g. resnet18. You can find the IDs in the model summaries at the top of this page.
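To then run inference, you can build the matching preprocessing pipeline from the model's pretrained data config; this continues the snippet above (a sketch: the image path is a placeholder):

import torch
from PIL import Image
from timm.data import resolve_data_config, create_transform

config = resolve_data_config({}, model=m)    # input size, mean/std, interpolation
transform = create_transform(**config)

img = Image.open('dog.jpg').convert('RGB')   # placeholder image path
with torch.no_grad():
    logits = m(transform(img).unsqueeze(0))  # add a batch dimension
probs = logits.softmax(dim=-1)
top5_prob, top5_idx = probs.topk(5)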
You can follow the timm recipe scripts for training a new model from scratch.
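For example, a run with the repository's train.py script might look like the following (these flags exist in the timm training scripts, but the values here are illustrative and not the exact recipe behind these weights):

python train.py /path/to/imagenet --model resnet50 --epochs 100 --batch-size 256 \
    --lr 0.1 --momentum 0.9 --weight-decay 1e-4 --sched cosine --amp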
@article{DBLP:journals/corr/HeZRS15,
  author        = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
  title         = {Deep Residual Learning for Image Recognition},
  journal       = {CoRR},
  volume        = {abs/1512.03385},
  year          = {2015},
  url           = {http://arxiv.org/abs/1512.03385},
  archivePrefix = {arXiv},
  eprint        = {1512.03385},
  timestamp     = {Wed, 17 Apr 2019 17:23:45 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/HeZRS15.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
Model | Top-1 Accuracy | Top-5 Accuracy |
---|---|---|
resnetblur50 | 79.29% | 94.64% |
resnet50 | 79.04% | 94.39% |
tv_resnet152 | 78.32% | 94.05% |
tv_resnet101 | 77.37% | 93.56% |
tv_resnet50 | 76.16% | 92.88% |
resnet26 | 75.29% | 92.57% |
resnet34 | 75.11% | 92.28% |
tv_resnet34 | 73.30% | 91.42% |
resnet18 | 69.74% | 89.09% |