Intra-layer Neural Architecture Search

1 Jan 2021 · Dong Kai Wang, Nam Sung Kim

We propose an efficient neural architecture search (NAS) algorithm with a flexible search space that encompasses layer operations down to individual weights. This work addresses NAS challenges in a search space of weight connections within layers, specifically the large number of architecture variations compared to a high-level search space with predetermined layer types. Our algorithm continuously evolves network architecture by adding new candidate parameters (weights and biases) using a first-order estimation based on their gradients at 0. Training is decoupled into alternating steps: adjusting network weights while holding the architecture constant, and adjusting the network architecture while holding the weights constant. We explore additional applications by extending this method to multi-task learning with shared parameters. On the CIFAR-10 dataset, our evolved network achieves an accuracy of 97.42% with 5M parameters, and 93.75% with 500K parameters. On the ImageNet dataset, we achieve 76.6% top-1 and 92.5% top-5 accuracy with a search restriction of 8.5M parameters.
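The abstract describes two mechanisms: scoring candidate (inactive) weights by the magnitude of the loss gradient evaluated at 0, and alternating between weight-update and architecture-update steps. The following is a minimal sketch of these two ideas, not the authors' implementation; it assumes a masked linear layer in PyTorch, and the names `MaskedLinear`, `grow`, and `train` are hypothetical, chosen only for illustration.

```python
# Hedged sketch of the two ideas in the abstract (assumed PyTorch implementation):
# (1) candidate weights sit at exactly 0, so their gradient is a first-order
#     estimate of how much activating them would change the loss;
# (2) training alternates weight steps (mask fixed) and architecture steps
#     (weights fixed, mask grows).
from itertools import cycle

import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedLinear(nn.Module):
    """Linear layer whose connectivity is a binary mask over individual weights."""

    def __init__(self, in_features, out_features, init_density=0.1):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_uniform_(self.weight)
        # Start with a sparse random subset of active connections.
        mask = (torch.rand(out_features, in_features) < init_density).float()
        self.register_buffer("mask", mask)
        with torch.no_grad():
            self.weight.mul_(self.mask)  # candidate (inactive) weights are exactly 0

    def forward(self, x):
        # Inactive weights are numerically 0, so using the full weight matrix is
        # equivalent to a masked forward pass, but it lets gradients reach the
        # candidate weights so they can be scored at 0.
        return F.linear(x, self.weight, self.bias)

    @torch.no_grad()
    def grow(self, k):
        """Architecture step: activate the k candidates with the largest |grad| at 0."""
        if self.weight.grad is None:
            return
        score = self.weight.grad.abs() * (1.0 - self.mask)  # only inactive entries
        k = min(k, int((score > 0).sum()))
        if k == 0:
            return
        idx = torch.topk(score.view(-1), k).indices
        self.mask.view(-1)[idx] = 1.0  # newly activated weights start training from 0


def train(model, loader, steps=1000, grow_every=100, grow_k=64, lr=0.1):
    """Alternate weight steps (architecture fixed) and architecture steps (weights fixed)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    data = cycle(loader)
    for step in range(steps):
        x, y = next(data)
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()  # gradients reach both active and candidate weights
        if step % grow_every == grow_every - 1:
            # Architecture step: weights are not updated, connectivity changes.
            for m in model.modules():
                if isinstance(m, MaskedLinear):
                    m.grow(grow_k)
        else:
            # Weight step: connectivity is fixed, only active weights move.
            opt.step()
            with torch.no_grad():
                for m in model.modules():
                    if isinstance(m, MaskedLinear):
                        m.weight.mul_(m.mask)  # keep candidates pinned at 0
```

In this sketch, keeping candidate weights pinned at exactly 0 is what makes the gradient a first-order estimate of the loss change from activating them, which is the role the abstract assigns to "gradients at 0"; the growth schedule, scoring granularity, and layer types here are assumptions, not details from the paper.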
