Training Techniques | SGD with Momentum, Weight Decay |
---|---|
Architecture | Batch Normalization, Convolution, DPN Block, Dense Connections, Global Average Pooling, Max Pooling, Softmax |
IDs | dpn68, dpn68b, dpn92, dpn98, dpn107, dpn131 |
A Dual Path Network (DPN) is a convolutional neural network that introduces a new topology of internal connection paths. The intuition is that ResNet enables feature re-use while DenseNet enables the exploration of new features, and both are important for learning good representations. To enjoy the benefits of both path topologies, Dual Path Networks share common features while keeping the flexibility to explore new features through a dual path architecture.
The principal building block is a DPN Block.
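The dual-path idea above can be sketched in a few lines of NumPy. This is a conceptual illustration only, not the actual DPN Block: a single linear map stands in for the block's shared convolution stack, and the function names and shapes are made up for the example. It shows how one shared transformation feeds both a residual path (element-wise addition, ResNet-style re-use) and a dense path (channel concatenation, DenseNet-style growth).

```python
import numpy as np

def dpn_block(x_res, x_dense, weight, d):
    """Conceptual dual-path combination (shapes only, no convolutions).

    x_res:   residual-path features, shape (c,)
    x_dense: dense-path features, shape (k,)
    weight:  linear map standing in for the shared conv stack,
             shape (c + d, c + k)
    d:       number of new channels appended to the dense path
    """
    # Both paths pass through the same shared transformation.
    shared_in = np.concatenate([x_res, x_dense])
    out = weight @ shared_in
    c = x_res.shape[0]
    # First c output channels update the residual path by addition...
    new_res = x_res + out[:c]
    # ...the remaining d channels grow the dense path by concatenation.
    new_dense = np.concatenate([x_dense, out[c:]])
    return new_res, new_dense
```

Stacking such blocks keeps a fixed-width residual path (feature re-use) alongside a dense path that widens by `d` channels per block (new-feature exploration).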
To load a pretrained model:

```python
import timm
m = timm.create_model('dpn68', pretrained=True)
m.eval()
```
Replace the model name with the variant you want to use, e.g. `dpn68`. You can find the IDs in the model summary at the top of this page.
You can follow the timm recipe scripts for training a new model from scratch.
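As a rough sketch of what such a run looks like, the command below invokes timm's `train.py` with SGD with Momentum and Weight Decay, the techniques listed in the summary above. The dataset path and all hyperparameter values here are illustrative placeholders, not the published recipe.

```shell
# Illustrative only: path and hyperparameters are placeholders,
# not the recipe used for the pretrained weights.
python train.py /path/to/imagenet \
    --model dpn68 \
    --opt sgd --momentum 0.9 --weight-decay 1e-4 \
    --lr 0.1 --batch-size 256 --epochs 100
```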
```bibtex
@misc{chen2017dual,
    title={Dual Path Networks},
    author={Yunpeng Chen and Jianan Li and Huaxin Xiao and Xiaojie Jin and Shuicheng Yan and Jiashi Feng},
    year={2017},
    eprint={1707.01629},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
MODEL | TOP 1 ACCURACY | TOP 5 ACCURACY |
---|---|---|
dpn107 | 80.16% | 94.91% |
dpn92 | 79.99% | 94.84% |
dpn131 | 79.83% | 94.71% |
dpn98 | 79.65% | 94.61% |
dpn68b | 79.21% | 94.42% |
dpn68 | 76.31% | 92.97% |