no code implementations • 19 Jan 2020 • Hui Zhu, Zhulin An, Kaiqiang Xu, Xiaolong Hu, Yongjun Xu
Existing approaches that improve the performance of convolutional neural networks by optimizing local architectures or deepening the networks tend to increase model size significantly.
no code implementations • 4 Sep 2019 • Hui Zhu, Zhulin An, Chuanguang Yang, Xiaolong Hu, Kaiqiang Xu, Yongjun Xu
In this paper, we propose a method for efficient automatic architecture search that targets the widths of networks rather than the connections of the neural architecture.
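The abstract does not give the search procedure itself; as a minimal, hypothetical sketch of what a width-oriented (rather than connection-oriented) search looks like, the snippet below randomly samples per-layer channel widths and keeps the largest configuration under a parameter budget. The function names, candidate widths, and the budget-based scoring proxy are all illustrative assumptions, not the paper's actual algorithm.

```python
import random

def parameter_count(widths, in_channels=3):
    """Rough parameter count of a plain stack of 3x3 convolutions
    with the given per-layer output widths (weights only)."""
    total, prev = 0, in_channels
    for w in widths:
        total += prev * w * 3 * 3
        prev = w
    return total

def random_width_search(num_layers=4, candidates=(16, 32, 64, 128),
                        budget=50_000, trials=100, seed=0):
    """Sample width configurations and keep the largest one that fits
    the budget -- a stand-in for a learned accuracy/cost proxy."""
    rng = random.Random(seed)
    best, best_params = None, -1
    for _ in range(trials):
        widths = [rng.choice(candidates) for _ in range(num_layers)]
        p = parameter_count(widths)
        if p <= budget and p > best_params:
            best, best_params = widths, p
    return best, best_params

widths, params = random_width_search()
```

The point of the sketch is only that the search space is a tuple of widths, which is far smaller than a space of inter-layer connections.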
1 code implementation • 26 Aug 2019 • Chuanguang Yang, Zhulin An, Hui Zhu, Xiaolong Hu, Kun Zhang, Kaiqiang Xu, Chao Li, Yongjun Xu
We propose a simple yet effective method to reduce the redundancy of DenseNet by substantially decreasing the number of stacked modules: we replace the original bottleneck with our SMG module, which is augmented with a local residual connection.
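The abstract only names the ingredients, so the following is a hedged sketch of the "local residual" idea: a module's output is its internal transform plus the input itself, so the input is reused rather than recomputed. The SMG internals here are a placeholder linear-plus-tanh map; the real module's design is in the paper.

```python
import numpy as np

def smg_like_module(x, weight):
    """Apply a placeholder transform and add a local residual.

    x      : (batch, channels) activations
    weight : (channels, channels) placeholder parameters
    """
    transformed = np.tanh(x @ weight)  # stand-in for the SMG transform
    return transformed + x             # local residual: reuse the input

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))
w = rng.standard_normal((8, 8)) * 0.1
y = smg_like_module(x, w)
```

The residual term means the module only has to learn a correction to its input, which is what makes stacking fewer modules plausible.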
Ranked #60 on Image Classification on CIFAR-10
1 code implementation • 10 May 2019 • Hui Zhu, Zhulin An, Chuanguang Yang, Kaiqiang Xu, Erhu Zhao, Yongjun Xu
The latest algorithms for automatic neural architecture search perform remarkably well, but they are largely directionless in the search space and computationally expensive in training every intermediate architecture.