Neural Architecture Search

774 papers with code • 26 benchmarks • 27 datasets

Neural architecture search (NAS) is a technique for automating the design of artificial neural networks (ANNs), a widely used class of models in machine learning. NAS takes the process of a human manually tweaking a neural network and learning what works well, and automates this task to discover more complex architectures.

(Image credit: Neural Architecture Search with Reinforcement Learning)
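
The basic loop can be illustrated with the simplest search strategy, random search: repeatedly sample a candidate architecture from a search space, train and evaluate it, and keep the best one found. Below is a minimal sketch of that loop; `SEARCH_SPACE`, `sample_architecture`, and `train_and_evaluate` are hypothetical placeholders, not taken from any particular paper.

```python
import random

# Hypothetical search space: depth, width, and kernel size choices.
SEARCH_SPACE = {
    "num_layers": [4, 8, 12],
    "width": [32, 64, 128],
    "kernel_size": [3, 5, 7],
}

def sample_architecture():
    """Draw one random configuration from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def train_and_evaluate(arch):
    """Placeholder: train the candidate and return validation accuracy."""
    raise NotImplementedError

def random_search(num_trials=20):
    """Keep the best architecture seen across num_trials random samples."""
    best_arch, best_acc = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture()
        acc = train_and_evaluate(arch)
        if acc > best_acc:
            best_arch, best_acc = arch, acc
    return best_arch, best_acc
```

More sophisticated NAS methods, several of which appear below, replace the random sampler with a learned controller or a differentiable relaxation.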

Most implemented papers

Proximal Policy Optimization Algorithms

labmlai/annotated_deep_learning_paper_implementations 20 Jul 2017

We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent.
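
PPO appears on this page because RL-based NAS methods commonly use it to train their architecture-sampling controllers. A minimal PyTorch sketch of the paper's clipped surrogate objective follows; variable names are illustrative.

```python
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """PPO clipped surrogate objective (returned negated, for minimization).

    log_probs_new: log pi_theta(a|s) under the current policy
    log_probs_old: log-probs from the policy that sampled the data (detached)
    advantages:    advantage estimates A_t (e.g. from GAE)
    """
    ratio = torch.exp(log_probs_new - log_probs_old)  # r_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    # Pessimistic bound: elementwise minimum of the two terms, averaged.
    return -torch.min(unclipped, clipped).mean()
```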

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

tensorflow/tpu ICML 2019

Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available.
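
The paper's central idea is compound scaling: grow depth, width, and input resolution together with a single coefficient phi, using base constants found by a small grid search. A minimal sketch using the constants reported in the paper (which satisfy alpha * beta^2 * gamma^2 ~= 2, so FLOPs roughly double per increment of phi):

```python
# Base scaling constants from the EfficientNet paper's grid search.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scaling(phi):
    """Return (depth, width, resolution) multipliers for coefficient phi."""
    depth_mult = ALPHA ** phi       # more layers
    width_mult = BETA ** phi        # more channels
    resolution_mult = GAMMA ** phi  # larger input images
    return depth_mult, width_mult, resolution_mult

# phi = 1 roughly corresponds to EfficientNet-B1.
print(compound_scaling(1))  # (1.2, 1.1, 1.15)
```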

Searching for MobileNetV3

tensorflow/models ICCV 2019

We achieve new state of the art results for mobile classification, detection and segmentation.

DARTS: Differentiable Architecture Search

quark0/darts ICLR 2019

This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner.
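
The key device is a continuous relaxation: each edge in the searched cell computes a softmax-weighted mixture of all candidate operations, so the architecture parameters can be optimized by gradient descent jointly with the network weights. A minimal PyTorch sketch; the candidate set and channel count are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Softmax-weighted mixture over candidate operations on one edge."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        # Architecture parameters alpha, learned by gradient descent.
        self.alpha = nn.Parameter(torch.zeros(len(ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Hypothetical candidate operations for one edge (16 channels, shape-preserving).
ops = [
    nn.Conv2d(16, 16, 3, padding=1),
    nn.Conv2d(16, 16, 5, padding=2),
    nn.Identity(),
]
edge = MixedOp(ops)
```

After search, the discrete architecture is recovered by keeping the operation with the largest alpha on each edge.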

Efficient Neural Architecture Search via Parameter Sharing

google-research/google-research 9 Feb 2018

The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set.
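
A minimal sketch of such a policy-gradient (REINFORCE) update with a moving-average baseline for variance reduction; the function signature is illustrative, not ENAS's actual API.

```python
import torch

def reinforce_update(log_prob_sum, reward, baseline, optimizer, decay=0.95):
    """One REINFORCE step for a controller that sampled an architecture.

    log_prob_sum: sum of log-probs of the sampled decisions (a torch tensor)
    reward:       validation accuracy of the sampled subgraph
    baseline:     moving-average baseline, updated and returned
    """
    baseline = decay * baseline + (1 - decay) * reward
    loss = -(reward - baseline) * log_prob_sum  # maximize expected reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return baseline
```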

MnasNet: Platform-Aware Neural Architecture Search for Mobile

tensorflow/tpu CVPR 2019

In this paper, we propose an automated mobile neural architecture search (MNAS) approach, which explicitly incorporates model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency.
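
The paper folds measured latency into the search reward rather than treating it as a hard constraint. A sketch of the soft-constraint reward from the paper, using w = -0.07 as in its single-exponent variant:

```python
def mnas_reward(accuracy, latency_ms, target_ms, w=-0.07):
    """Latency-aware reward: accuracy * (latency / target) ** w.

    With w < 0, models slower than the target are penalized and faster
    ones mildly rewarded, so the search optimizes an accuracy/latency
    trade-off rather than accuracy alone.
    """
    return accuracy * (latency_ms / target_ms) ** w

print(mnas_reward(0.75, 90.0, 80.0))  # slower than target -> reward < 0.75
```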

ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware

MIT-HAN-LAB/ProxylessNAS ICLR 2019

We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level of regular training while still allowing a large candidate set.
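
The memory saving comes from binarized path gates: instead of summing every candidate operation as in DARTS, only one sampled path is active per forward pass, so only that path's activations occupy GPU memory. A simplified sketch in that spirit; the paper's gradient estimator for the architecture parameters is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizedMixedOp(nn.Module):
    """Sample a single candidate path per step instead of mixing all paths."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))

    def forward(self, x):
        probs = F.softmax(self.alpha, dim=0)
        idx = torch.multinomial(probs, 1).item()  # binary gate: one active path
        return self.ops[idx](x)
```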

EfficientNetV2: Smaller Models and Faster Training

google/automl 1 Apr 2021

By pretraining on the same ImageNet21k, our EfficientNetV2 achieves 87.3% top-1 accuracy on ImageNet ILSVRC2012, outperforming the recent ViT by 2.0% accuracy while training 5x-11x faster using the same computing resources.

Progressive Neural Architecture Search

tensorflow/models ECCV 2018

We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms.

Learning Transferable Architectures for Scalable Image Recognition

tensorflow/models CVPR 2018

In our experiments, we search for the best convolutional layer (or "cell") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of it, each with their own parameters, to design a convolutional architecture named the "NASNet architecture".
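
The transfer recipe is structural: search for a cell once on a small dataset, then build larger models by stacking repeated copies of it. A minimal sketch, where `cell_fn` is a hypothetical constructor for the searched cell; each stacked copy shares the cell's structure but has its own weights.

```python
import torch.nn as nn

def build_nasnet_style_model(cell_fn, num_cells, channels, num_classes=1000):
    """Stack repeated copies of a searched cell behind a conv stem."""
    cells = [cell_fn(channels) for _ in range(num_cells)]
    return nn.Sequential(
        nn.Conv2d(3, channels, 3, stride=2, padding=1),  # stem
        *cells,                                          # repeated searched cell
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(channels, num_classes),
    )
```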