TF EfficientNet Lite

Last updated on Feb 14, 2021

tf_efficientnet_lite0

Parameters: 5 Million
FLOPs: 488 Million
File Size: 17.95 MB
Training Data: ImageNet
Architecture: 1x1 Convolution, Average Pooling, Convolution, Dense Connections, Dropout, Inverted Residual Block, Batch Normalization, ReLU6
ID: tf_efficientnet_lite0
Crop Pct: 0.875
Image Size: 224
Interpolation: bicubic
tf_efficientnet_lite1

Parameters: 5 Million
FLOPs: 774 Million
File Size: 20.92 MB
Training Data: ImageNet
Architecture: 1x1 Convolution, Average Pooling, Convolution, Dense Connections, Dropout, Inverted Residual Block, Batch Normalization, ReLU6
ID: tf_efficientnet_lite1
Crop Pct: 0.882
Image Size: 240
Interpolation: bicubic
tf_efficientnet_lite2

Parameters: 6 Million
FLOPs: 1 Billion
File Size: 23.52 MB
Training Data: ImageNet
Architecture: 1x1 Convolution, Average Pooling, Convolution, Dense Connections, Dropout, Inverted Residual Block, Batch Normalization, ReLU6
ID: tf_efficientnet_lite2
Crop Pct: 0.89
Image Size: 260
Interpolation: bicubic
tf_efficientnet_lite3

Parameters: 8 Million
FLOPs: 2 Billion
File Size: 31.63 MB
Training Data: ImageNet
Architecture: 1x1 Convolution, Average Pooling, Convolution, Dense Connections, Dropout, Inverted Residual Block, Batch Normalization, ReLU6
ID: tf_efficientnet_lite3
Crop Pct: 0.904
Image Size: 300
Interpolation: bilinear
tf_efficientnet_lite4

Parameters: 13 Million
FLOPs: 5 Billion
File Size: 50.12 MB
Training Data: ImageNet
Architecture: 1x1 Convolution, Average Pooling, Convolution, Dense Connections, Dropout, Inverted Residual Block, Batch Normalization, ReLU6
ID: tf_efficientnet_lite4
Crop Pct: 0.92
Image Size: 380
Interpolation: bilinear

Summary

EfficientNet is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a compound coefficient. Unlike conventional practice, which scales these factors arbitrarily, the EfficientNet scaling method uses a set of fixed scaling coefficients. For example, to use $2^N$ times more computational resources, we can simply increase the network depth by $\alpha^N$, the width by $\beta^N$, and the image size by $\gamma^N$, where $\alpha, \beta, \gamma$ are constant coefficients determined by a small grid search on the original small model. EfficientNet uses a compound coefficient $\phi$ to scale network width, depth, and resolution in this principled way.

The compound scaling method is justified by the intuition that if the input image is bigger, then the network needs more layers to increase the receptive field and more channels to capture more fine-grained patterns on the bigger image.
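
As a concrete illustration, here is a minimal sketch of the compound scaling rule using the coefficients reported in the EfficientNet paper ($\alpha = 1.2$, $\beta = 1.1$, $\gamma = 1.15$, found under the constraint $\alpha \cdot \beta^2 \cdot \gamma^2 \approx 2$ so that each unit increase in $\phi$ roughly doubles FLOPs). The printed resolutions are raw formula outputs; the released models round them to convenient sizes.

# Compound scaling: a unit increase in the compound coefficient phi scales
# depth by alpha, width by beta, and resolution by gamma. The paper's grid
# search found alpha=1.2, beta=1.1, gamma=1.15, constrained so that
# alpha * beta**2 * gamma**2 ~= 2, i.e. each step in phi roughly doubles FLOPs.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi, base_depth=1.0, base_width=1.0, base_resolution=224):
    depth = base_depth * ALPHA ** phi            # more layers
    width = base_width * BETA ** phi             # more channels per layer
    resolution = base_resolution * GAMMA ** phi  # larger input images
    return depth, width, resolution

for phi in range(5):
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution ~{r:.0f}")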

The base EfficientNet-B0 network is based on the inverted bottleneck residual blocks of MobileNetV2, in addition to squeeze-and-excitation blocks.
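
For reference, below is a minimal PyTorch sketch of a MobileNetV2-style inverted residual block: a 1x1 convolution expands the channels, a 3x3 depthwise convolution filters them, and a linear 1x1 convolution projects back down, with a skip connection when input and output shapes match. The ReLU6 activations match the Lite variants described below; this is an illustration, not timm's actual implementation.

import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """MobileNetV2-style inverted bottleneck: narrow -> wide -> narrow."""
    def __init__(self, in_ch, out_ch, stride=1, expand_ratio=6):
        super().__init__()
        mid_ch = in_ch * expand_ratio
        # Residual connection only when input and output shapes match.
        self.use_residual = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # 1x1 expansion convolution
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU6(inplace=True),
            # 3x3 depthwise convolution (one filter per channel)
            nn.Conv2d(mid_ch, mid_ch, 3, stride, padding=1, groups=mid_ch, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU6(inplace=True),
            # 1x1 projection back down (no activation: linear bottleneck)
            nn.Conv2d(mid_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out

x = torch.randn(1, 16, 32, 32)
print(InvertedResidual(16, 16)(x).shape)  # torch.Size([1, 16, 32, 32])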

EfficientNet-Lite makes EfficientNet more suitable for mobile devices by replacing the Swish activation with the quantization-friendly ReLU6 and removing the squeeze-and-excitation blocks, which are not well supported on mobile accelerators.

How do I load this model?

To load a pretrained model:

import timm

# Create the model with pretrained ImageNet weights and put it in eval mode.
m = timm.create_model('tf_efficientnet_lite0', pretrained=True)
m.eval()

Replace the model name with the variant you want to use, e.g. tf_efficientnet_lite0. You can find the IDs in the model summaries at the top of this page.
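
To run inference, inputs must be preprocessed with the variant's own image size, crop percentage, and interpolation, as listed in the summaries above. Below is a minimal sketch that uses timm's data helpers to derive the matching transform from the model's pretrained config; the image path is a placeholder.

import timm
import torch
from PIL import Image
from timm.data import resolve_data_config, create_transform

# timm.list_models('tf_efficientnet_lite*') lists all available Lite variant IDs.
m = timm.create_model('tf_efficientnet_lite0', pretrained=True)
m.eval()

# Build the preprocessing pipeline (resize, center crop, interpolation,
# normalization) from the model's pretrained config.
config = resolve_data_config({}, model=m)
transform = create_transform(**config)

img = Image.open('dog.jpg').convert('RGB')  # placeholder image path
x = transform(img).unsqueeze(0)             # add a batch dimension

with torch.no_grad():
    probs = m(x).softmax(dim=-1)
print(probs.topk(5))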

How do I train this model?

You can follow the timm recipe scripts for training a new model from scratch.

Citation

@misc{tan2020efficientnet,
      title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks}, 
      author={Mingxing Tan and Quoc V. Le},
      year={2020},
      eprint={1905.11946},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Results

Image Classification on ImageNet

Model                  Top-1 Accuracy  Top-5 Accuracy
tf_efficientnet_lite4  81.54%          95.66%
tf_efficientnet_lite3  79.83%          94.91%
tf_efficientnet_lite2  77.48%          93.75%
tf_efficientnet_lite1  76.67%          93.24%
tf_efficientnet_lite0  74.83%          92.17%