CATE: Computation-aware Neural Architecture Encoding with Transformers

14 Feb 2021 · Shen Yan, Kaiqiang Song, Fei Liu, Mi Zhang

Recent works (White et al., 2020a; Yan et al., 2020) demonstrate the importance of architecture encodings in Neural Architecture Search (NAS). These encodings capture either the structure or the computation information of neural architectures. Compared to structure-aware encodings, computation-aware encodings map architectures with similar accuracies to the same region, which improves downstream architecture search performance (Zhang et al., 2019; White et al., 2020a). In this work, we introduce a Computation-Aware Transformer-based Encoding method called CATE. Unlike existing computation-aware encodings based on fixed transformations (e.g., path encoding), CATE employs a pairwise pre-training scheme to learn computation-aware encodings using Transformers with cross-attention. Such learned encodings contain dense and contextualized computation information of neural architectures. We compare CATE with eleven encodings under three major encoding-dependent NAS subroutines in both small and large search spaces. Our experiments show that CATE is beneficial to the downstream search, especially in the large search space. Moreover, experiments outside the training search space demonstrate its superior generalization ability beyond the search space on which it was trained. Our code is available at: https://github.com/MSU-MLSys-Lab/CATE.
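To make the pairwise pre-training idea concrete, below is a minimal sketch (not the authors' implementation; see the linked repository for the real code) of encoding a pair of architectures with cross-attention in PyTorch. It assumes architectures are flattened into sequences of operation IDs, that pairs are formed from computationally similar architectures, and that a masked-operation prediction objective drives pre-training; the module name, hyperparameters, and the toy data are illustrative assumptions.

```python
# Sketch: pairwise pre-training with cross-attention over operation sequences.
# Assumptions: each architecture is a sequence of operation IDs; pairs are drawn
# from computationally similar architectures; masked operations are reconstructed.
import torch
import torch.nn as nn

class PairwiseCrossAttentionEncoder(nn.Module):
    def __init__(self, num_ops, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(num_ops + 1, d_model)  # +1 for a [MASK] token
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.self_encoder = nn.TransformerEncoder(layer, num_layers)
        self.cross_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.head = nn.Linear(d_model, num_ops)  # predict the masked operations

    def forward(self, ops_a, ops_b):
        # Encode each architecture in the pair independently ...
        h_a = self.self_encoder(self.embed(ops_a))
        h_b = self.self_encoder(self.embed(ops_b))
        # ... then let each one attend to its computationally similar partner.
        z_a, _ = self.cross_attn(h_a, h_b, h_b)
        z_b, _ = self.cross_attn(h_b, h_a, h_a)
        return self.head(z_a), self.head(z_b), z_a, z_b

if __name__ == "__main__":
    # Toy usage: two architectures, each a sequence of 7 operation IDs.
    num_ops, mask_id = 5, 5
    model = PairwiseCrossAttentionEncoder(num_ops)
    ops_a = torch.randint(0, num_ops, (1, 7))
    ops_b = torch.randint(0, num_ops, (1, 7))
    targets = ops_a.clone()
    ops_a[0, 2] = mask_id  # mask one operation to be reconstructed
    logits_a, _, _, _ = model(ops_a, ops_b)
    loss = nn.functional.cross_entropy(logits_a.view(-1, num_ops), targets.view(-1))
    loss.backward()
    print(loss.item())
```

After pre-training with an objective of this kind, the contextualized encodings (z_a, z_b above) would serve as the architecture representations consumed by downstream NAS subroutines.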

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Neural Architecture Search | CIFAR-10 | CATE | Top-1 Error Rate | 2.46% | #15 |
| Neural Architecture Search | CIFAR-10 | CATE | Search Time (GPU days) | 10.3 | #28 |
| Neural Architecture Search | CIFAR-10 | CATE | Parameters (M) | 4.1 | #4 |
| Neural Architecture Search | CIFAR-10 Image Classification | CATE | Percentage error | 2.46 | #11 |
| Neural Architecture Search | CIFAR-10 Image Classification | CATE | Params (M) | 4.1 | #3 |
| Neural Architecture Search | CIFAR-10 Image Classification | CATE | Search Time (GPU days) | 10.3 | #2 |
