no code implementations • 27 Dec 2023 • Guansong Lu, Yuanfan Guo, Jianhua Han, Minzhe Niu, Yihan Zeng, Songcen Xu, Zeyi Huang, Zhao Zhong, Wei Zhang, Hang Xu
Current large-scale diffusion models represent a giant leap forward in conditional image synthesis, capable of interpreting diverse cues like text, human poses, and edges.
no code implementations • 21 Nov 2022 • Zezhou Zhu, Yucong Zhou, Zhao Zhong
Vector Quantization (VQ) is an appealing model compression method for obtaining a tiny model with little loss in accuracy.
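To make the idea concrete, here is a minimal, hypothetical sketch of weight vector quantization (not this paper's specific method): sub-vectors of a weight matrix are clustered with k-means, so only a small codebook plus per-sub-vector indices need to be stored. The `block` and `codebook_size` values are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_weights(weight, codebook_size=256, block=4):
    # Split the weight matrix into length-`block` sub-vectors and cluster
    # them; storage drops to one codebook + one index per sub-vector.
    flat = weight.reshape(-1, block)
    km = KMeans(n_clusters=codebook_size, n_init=4).fit(flat)
    return km.cluster_centers_, km.predict(flat)

def dequantize(codebook, indices, shape):
    # Rebuild an approximate weight matrix by codebook lookup.
    return codebook[indices].reshape(shape)

w = np.random.randn(512, 512).astype(np.float32)
codebook, indices = quantize_weights(w)
w_hat = dequantize(codebook, indices, w.shape)
print("reconstruction MSE:", float(((w - w_hat) ** 2).mean()))
```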
1 code implementation • ICCV 2023 • Yulin Wang, Yang Yue, Rui Lu, Tianjiao Liu, Zhao Zhong, Shiji Song, Gao Huang
It is also effective for self-supervised learning (e.g., MAE).
no code implementations • 8 Jul 2021 • Yikang Zhang, Zhuo Chen, Zhao Zhong
Our method achieves state-of-the-art performance on ImageNet: 80.7% top-1 accuracy with 194M FLOPs.
Ranked #577 on Image Classification on ImageNet
no code implementations • ICCV 2021 • Yucong Zhou, Zezhou Zhu, Zhao Zhong
It can learn specialized activation functions and achieves SOTA performance on large-scale datasets like ImageNet and COCO.
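As a rough illustration of a learnable activation (not this paper's exact formulation), the sketch below mixes fixed basis functions with trainable coefficients so each layer can specialize its nonlinearity; the choice of bases is an assumption.

```python
import torch
import torch.nn as nn

class LearnableActivation(nn.Module):
    # Toy learnable activation: a trainable linear mix of fixed bases.
    # The specific bases (ReLU, tanh, identity) are illustrative only.
    def __init__(self):
        super().__init__()
        self.coef = nn.Parameter(torch.tensor([1.0, 0.0, 0.0]))

    def forward(self, x):
        return (self.coef[0] * torch.relu(x)
                + self.coef[1] * torch.tanh(x)
                + self.coef[2] * x)
```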
no code implementations • 29 Mar 2021 • Yucong Zhou, Yunxiao Sun, Zhao Zhong
Based on this discovery, we propose a new training method called FixNorm, which discards weight decay and directly controls the two mechanisms.
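A minimal sketch of the idea, assuming the quantity being controlled is the weight norm (the `target_norm` value and the rescaling rule are assumptions, not the paper's algorithm): instead of applying weight decay, each weight tensor is rescaled to a fixed norm after every optimizer step.

```python
import torch

def fixnorm_step(optimizer, params, target_norm=10.0):
    # Take an optimizer step with no weight decay, then pin each
    # weight tensor's L2 norm to a fixed value (hypothetical rule).
    optimizer.step()
    with torch.no_grad():
        for p in params:
            n = p.norm()
            if n > 0:
                p.mul_(target_norm / n)
```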
no code implementations • NeurIPS 2020 • Yikang Zhang, Jian Zhang, Zhao Zhong
Neural network architecture design mostly focuses on new convolutional operators or special topological structures of network blocks; little attention is paid to the configuration of stacking blocks, which we call the Block Stacking Style (BSS).
no code implementations • 22 Apr 2020 • Yikang Zhang, Jian Zhang, Qiang Wang, Zhao Zhong
On the one hand, we can reduce the computation cost substantially while maintaining performance.
no code implementations • ICLR 2020 • Xin-Yu Zhang, Qiang Wang, Jian Zhang, Zhao Zhong
The augmentation policy network attempts to increase the training loss of a target network by generating adversarial augmentation policies, while the target network learns more robust features from these harder examples to improve generalization (see the sketch after this entry).
Ranked #594 on Image Classification on ImageNet
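The min-max interplay can be sketched as follows (an illustrative loop, not the paper's exact algorithm; `policy_net.sample` and `policy_net.apply` are hypothetical interfaces): the target network minimizes its loss on augmented data, while the policy is rewarded, REINFORCE-style, for augmentations that raise that loss.

```python
import torch

def adversarial_augment_step(policy_net, target_net, images, labels,
                             opt_target, opt_policy, criterion):
    aug_params, log_prob = policy_net.sample()        # hypothetical interface
    augmented = policy_net.apply(images, aug_params)  # hypothetical interface

    # Target network: minimize loss on the adversarially augmented batch.
    loss = criterion(target_net(augmented), labels)
    opt_target.zero_grad(); loss.backward(); opt_target.step()

    # Policy network: maximize the target loss via a policy gradient,
    # so it learns to propose harder augmentations.
    policy_loss = -log_prob * loss.detach()
    opt_policy.zero_grad(); policy_loss.backward(); opt_policy.step()
```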
no code implementations • 24 Dec 2019 • Muyuan Fang, Qiang Wang, Zhao Zhong
Automatic neural architecture search techniques are becoming increasingly important in machine learning.
no code implementations • 25 Sep 2019 • Kane Zhang, Jian Zhang, Qiang Wang, Zhao Zhong
To verify its scalability, we also apply DyNet to a segmentation task; the results show that DyNet reduces FLOPs by 69.3% while maintaining the mean IoU.
1 code implementation • CVPR 2019 • Minghao Guo, Zhao Zhong, Wei Wu, Dahua Lin, Junjie Yan
Motivated by the fact that human-designed networks have elegant topologies and fast inference speed, we propose a mirror stimuli function, inspired by biological cognition theory, to extract the abstract topological knowledge of an expert human-designed network (ResNeXt).
no code implementations • NeurIPS 2018 • Chen Lin, Zhao Zhong, Wei Wu, Junjie Yan
Inspired by the relevant concept in the neuroscience literature, we propose Synaptic Pruning: a data-driven method to prune connections between input and output feature maps with a newly proposed class of parameters called Synaptic Strength.
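A toy rendering of the idea (the gating scheme and pruning threshold are assumptions, not the paper's exact parameterization): one learnable "strength" scalar per input/output channel pair scales that connection's contribution, and small-magnitude strengths are zeroed out.

```python
import torch
import torch.nn as nn

class SynapticGate(nn.Module):
    # One learnable scalar per (output-channel, input-channel) connection;
    # it scales the corresponding slice of a conv weight and can be pruned.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.strength = nn.Parameter(torch.ones(out_ch, in_ch))

    def forward(self, conv_weight):
        # conv_weight: (out_ch, in_ch, k, k)
        return conv_weight * self.strength[:, :, None, None]

    @torch.no_grad()
    def prune(self, threshold=1e-2):
        # Zero out weak connections by strength magnitude (illustrative rule).
        self.strength.mul_((self.strength.abs() > threshold).float())
```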
2 code implementations • 16 Aug 2018 • Zhao Zhong, Zichen Yang, Boyang Deng, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu
The block-wise generation brings unique advantages: (1) it yields state-of-the-art results in comparison to hand-crafted networks on image classification; in particular, the best network generated by BlockQNN achieves a 2.35% top-1 error rate on CIFAR-10.
1 code implementation • CVPR 2018 • Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu
Convolutional neural networks have achieved remarkable success in computer vision.