We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms.
In our experiments, we search for the best convolutional layer (or "cell") on the CIFAR-10 dataset and then apply it to ImageNet by stacking multiple copies of this cell, each with its own parameters, to form a convolutional architecture we name the "NASNet architecture".
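The stacking idea can be sketched in a few lines. This is an illustrative toy, not the actual NASNet code: the `Cell` class below is a hypothetical stand-in whose single `scale` value plays the role of a cell's parameters, showing how one searched structure is copied repeatedly while each copy keeps independent parameters.

```python
import random

# Toy sketch (not the NASNet implementation): one cell *structure* is
# reused, but each stacked copy holds its own independent parameters.
class Cell:
    def __init__(self, rng):
        # each copy draws its own parameter; the structure is identical
        self.scale = rng.uniform(0.9, 1.1)

    def __call__(self, x):
        return self.scale * x

def stack_cells(num_cells, seed=0):
    rng = random.Random(seed)
    return [Cell(rng) for _ in range(num_cells)]

def forward(cells, x):
    for cell in cells:
        x = cell(x)
    return x

net = stack_cells(num_cells=6)
assert len(net) == 6
assert len({c.scale for c in net}) == 6  # copies do NOT share parameters
```

Scaling to a larger dataset then amounts to stacking more copies, without re-running the search.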
MobileDets also outperform MobileNetV2+SSDLite by 1.9 mAP on mobile CPUs, 3.7 mAP on EdgeTPUs and 3.4 mAP on DSPs while running equally fast.
We present MorphNet, an approach to automate the design of neural network structures.
This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner.
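The core of the differentiable formulation can be sketched as follows. This is a minimal caricature of the DARTS-style relaxation, with illustrative stand-in operations: each edge outputs a softmax-weighted mixture of candidate operations, so the discrete choice of operation becomes a continuous architecture weight that plain gradient descent can optimize.

```python
import math

# Minimal sketch of a differentiable architecture relaxation: the
# candidate ops below are hypothetical stand-ins, not real conv layers.
def softmax(alphas):
    m = max(alphas)
    exps = [math.exp(a - m) for a in alphas]
    total = sum(exps)
    return [e / total for e in exps]

def mixed_op(x, alphas, ops):
    # softmax over architecture weights -> continuous mixture of ops
    weights = softmax(alphas)
    return sum(w * op(x) for w, op in zip(weights, ops))

ops = [lambda x: x,        # identity / skip connection
       lambda x: 2.0 * x,  # stand-in for a parametric op
       lambda x: 0.0]      # zero op (effectively prunes the edge)

# uniform architecture weights -> plain average of the candidate outputs
y = mixed_op(3.0, [0.0, 0.0, 0.0], ops)
assert abs(y - 3.0) < 1e-9  # (3 + 6 + 0) / 3 = 3
```

After search, the mixture is discretized by keeping the operation with the largest weight on each edge.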
The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set.
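The policy-gradient update can be illustrated with a toy controller. This is not the paper's implementation: the two "subgraphs" and their validation rewards are made up, and for determinism the sketch applies the expected REINFORCE gradient rather than a sampled one. It shows the mechanism the sentence describes: probability mass shifts toward the subgraph with the higher expected reward.

```python
import math

# Toy policy-gradient controller: a categorical policy over two
# hypothetical subgraphs, updated with the expected REINFORCE gradient.
def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def expected_reinforce_step(logits, rewards, lr=0.5):
    probs = softmax(logits)
    updated = []
    for j, logit in enumerate(logits):
        # E[r_i * d log pi(i) / d logit_j] over subgraphs i sampled from pi
        grad = sum(probs[i] * rewards[i] * ((1.0 if i == j else 0.0) - probs[j])
                   for i in range(len(logits)))
        updated.append(logit + lr * grad)
    return updated

rewards = [0.2, 0.9]  # hypothetical validation rewards per subgraph
logits = [0.0, 0.0]
for _ in range(200):
    logits = expected_reinforce_step(logits, rewards)

probs = softmax(logits)
assert probs[1] > 0.9  # controller now strongly prefers the better subgraph
```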
Recent works have highlighted the strength of the Transformer architecture on sequence tasks while, at the same time, neural architecture search (NAS) has begun to outperform human-designed models.
As the field of data science continues to grow, there will be an ever-increasing demand for tools that make machine learning accessible to non-experts.
In this paper, we propose a novel framework enabling Bayesian optimization to guide the network morphism for efficient neural architecture search.
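The interplay can be caricatured in a few lines: network morphisms propose candidate architectures near ones already trained, and a surrogate-plus-acquisition rule decides which candidate to train next. Everything below is a hypothetical stand-in, not the paper's framework: architectures are reduced to layer counts, the objective is a toy function in place of validation accuracy, and the acquisition is a crude nearest-neighbour score with a fixed exploration bonus.

```python
# Caricature of Bayesian-optimization-guided network morphism.
def true_score(depth):
    # hidden objective standing in for validation accuracy; peaks at depth 7
    return -(depth - 7) ** 2

def morphisms(depth):
    # morph a trained network into slightly deeper/shallower candidates
    return [d for d in (depth - 1, depth + 1) if d >= 1]

observed = {3: true_score(3)}  # seed: one small trained network
for _ in range(20):
    candidates = {c for d in observed for c in morphisms(d)
                  if c not in observed}

    def acquisition(c):
        # toy surrogate: score of the nearest evaluated architecture,
        # plus a fixed exploration bonus for being unevaluated
        nearest = min(observed, key=lambda d: abs(d - c))
        return observed[nearest] + 1.0

    next_depth = max(candidates, key=acquisition)
    observed[next_depth] = true_score(next_depth)  # "train and evaluate"

best = max(observed, key=observed.get)
assert best == 7  # the search homes in on the best depth
```

The point of the sketch is the division of labour: morphisms keep each new candidate cheap to train by reusing weights of a nearby network, while the acquisition rule spends the training budget where the surrogate predicts improvement.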