ColabNAS: Obtaining lightweight task-specific convolutional neural networks following Occam's razor

15 Dec 2022  ·  Andrea Mattia Garavagno, Daniele Leonardis, Antonio Frisoli

The current trend of applying transfer learning from convolutional neural networks (CNNs) trained on large datasets can be overkill when the target application is a custom, well-delimited problem with enough data to train a network from scratch. On the other hand, training custom, lighter CNNs requires expertise in the from-scratch case, and/or high-end resources in the case of hardware-aware neural architecture search (HW NAS), limiting access to the technology for non-habitual NN developers. For this reason, we present ColabNAS, an affordable HW NAS technique for producing lightweight task-specific CNNs. Its novel derivative-free search strategy, inspired by Occam's razor, obtains state-of-the-art results on the Visual Wake Words dataset, a standard TinyML benchmark, in just 3.1 GPU hours using free online GPU services such as Google Colaboratory and Kaggle Kernel.
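The abstract does not spell out the search procedure, but the flavor of a derivative-free, Occam's-razor-guided NAS can be illustrated. The sketch below is an assumption-laden illustration, not ColabNAS itself: the capacity knobs (`num_blocks`, `base_filters`), the stopping tolerance `tol`, and the candidate architecture are all hypothetical choices made for this example. It grows a small CNN's capacity step by step and keeps the smallest model whose validation accuracy stops improving, using only short trainings (no gradients over the architecture).

```python
# Hedged sketch of an Occam's-razor-style, derivative-free NAS loop.
# Illustration only -- NOT the paper's exact algorithm: it grows a candidate
# CNN's capacity and stops as soon as the extra capacity no longer pays for
# itself in validation accuracy, keeping the smallest adequate model.
import tensorflow as tf


def build_candidate(num_blocks, base_filters, input_shape=(96, 96, 3), num_classes=2):
    """Build a small CNN whose capacity is controlled by two integers."""
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for i in range(num_blocks):
        # Double the filter count at each downsampling stage (a common pattern).
        x = tf.keras.layers.Conv2D(base_filters * 2**i, 3,
                                   padding="same", activation="relu")(x)
        x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)


def occam_search(train_ds, val_ds, max_blocks=5, epochs=3, tol=0.005):
    """Derivative-free search: grow capacity until accuracy stops improving."""
    best_model, best_acc = None, 0.0
    for num_blocks in range(1, max_blocks + 1):
        model = build_candidate(num_blocks, base_filters=8)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(train_ds, epochs=epochs, verbose=0)  # short proxy training
        _, acc = model.evaluate(val_ds, verbose=0)
        if acc <= best_acc + tol:  # Occam's razor: extra capacity didn't help
            break
        best_model, best_acc = model, acc
    return best_model, best_acc
```

Given `tf.data` pipelines yielding (image, integer-label) batches, `occam_search(train_ds, val_ds)` returns the smallest candidate found, and `best_model.count_params()` reports its size; an actual HW NAS would additionally reject candidates exceeding the target hardware's memory or latency budget.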


Datasets

Visual Wake Words

Results from the Paper

Task: Hardware Aware Neural Architecture Search
Dataset: Visual Wake Words
Model: ColabNAS

Metric        Metric Value   Global Rank
FLOPs         4,059,567      #1
Params        12,355         #1
Accuracy (%)  78             #1
