1 code implementation • 1 Feb 2023 • Zeyu Zhu, Fanrong Li, Zitao Mo, Qinghao Hu, Gang Li, Zejian Liu, Xiaoyao Liang, Jian Cheng
Through an in-depth analysis of the topology of GNNs, we observe that the graph topology leads to significant differences between nodes, and that most nodes in a graph have a small aggregation value.
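A minimal sketch of that observation, under the assumption that a node's "aggregation value" is proportional to its degree (the number of neighbors it must aggregate) and that real graphs have heavy-tailed degree distributions; the synthetic graph and threshold below are illustrative, not the paper's datasets or definition:

```python
import numpy as np

# Toy heavy-tailed graph (illustrative; not the paper's benchmarks).
rng = np.random.default_rng(0)
num_nodes = 1000
deg = np.minimum(rng.zipf(2.0, size=num_nodes), num_nodes - 1)

# In aggregation-based GNN layers, a node's aggregation workload
# grows with its (in-)degree, so most nodes do very little work.
threshold = 8  # hypothetical cut-off for a "small" aggregation value
print(f"{np.mean(deg <= threshold):.1%} of nodes aggregate "
      f"<= {threshold} neighbors")
```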
no code implementations • 30 Sep 2022 • Nanyang Ye, Jingbiao Mei, Zhicheng Fang, Yuwen Zhang, Ziqing Zhang, Huaying Wu, Xiaoyao Liang
For the design of the neural architecture search space, instead of searching over the entire feasible space, we first systematically explore the weight-drifting tolerance of different neural network components, such as dropout, normalization, the number of layers, and activation functions; among these, dropout is found to improve the network's robustness to weight drifting.
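A hedged sketch of what exploring weight-drifting tolerance might look like: inject Gaussian drift into a trained model's weights and measure accuracy degradation, comparing architectures with and without dropout. The noise model, drift magnitudes, and the user-supplied evaluation function are assumptions for illustration, not the paper's exact protocol:

```python
import copy
import torch
import torch.nn as nn

def drift_weights(model: nn.Module, sigma: float) -> nn.Module:
    """Return a copy of the model with Gaussian drift added to its weights."""
    drifted = copy.deepcopy(model)
    with torch.no_grad():
        for p in drifted.parameters():
            p.add_(torch.randn_like(p) * sigma)
    return drifted

def tolerance_curve(model, eval_fn, sigmas=(0.0, 0.01, 0.05, 0.1)):
    """Accuracy under increasing weight drift; eval_fn is user-supplied."""
    return {s: eval_fn(drift_weights(model, s)) for s in sigmas}

# Hypothetical comparison: a component with dropout vs. one without.
net_with_dropout = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                                 nn.Dropout(0.5), nn.Linear(256, 10))
net_plain = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                          nn.Linear(256, 10))
# tolerance_curve(net_with_dropout, my_eval)  # evaluation loop left to the reader
```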
no code implementations • 11 Mar 2022 • Zhuoran Song, Yihong Xu, Han Li, Naifeng Jing, Xiaoyao Liang, Li Jiang
The training phase of a deep neural network (DNN) consumes enormous processing time and energy.
1 code implementation • 9 Mar 2022 • Zhuoran Song, Yihong Xu, Zhezhi He, Li Jiang, Naifeng Jing, Xiaoyao Liang
We explore the sparsity in Vision Transformers (ViTs) and observe that a subset of informative patches and attention heads is sufficient for accurate image recognition.
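A minimal sketch of patch-level sparsity, assuming patches are scored by the attention they receive from the [CLS] token and only the top-k are kept; the scoring rule and the value of k are illustrative assumptions, not necessarily the paper's selection method:

```python
import torch

def keep_topk_patches(tokens: torch.Tensor, cls_attn: torch.Tensor, k: int):
    """tokens: (B, N, D) patch tokens; cls_attn: (B, N) attention from [CLS]."""
    idx = cls_attn.topk(k, dim=1).indices              # (B, k) most informative
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
    return tokens.gather(1, idx)                       # (B, k, D) kept patches

B, N, D = 2, 196, 768
tokens = torch.randn(B, N, D)
cls_attn = torch.rand(B, N)
pruned = keep_topk_patches(tokens, cls_attn, k=98)     # drop half the patches
print(pruned.shape)  # torch.Size([2, 98, 768])
```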
1 code implementation • 15 Dec 2021 • Yu Gong, Zhihan Xu, Zhezhi He, Weifeng Zhang, Xiaobing Tu, Xiaoyao Liang, Li Jiang
From the software perspective, we mathematically and systematically model the latency and resource utilization of the proposed heterogeneous accelerator across varying system design configurations.
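A toy analytical model in the same spirit, estimating latency as the slower of compute-bound and memory-bound time for a given configuration; this roofline-style formula and the parameter names are assumptions for illustration, not the paper's actual model:

```python
from dataclasses import dataclass

@dataclass
class Config:
    macs: float          # multiply-accumulates per inference
    bytes_moved: float   # off-chip traffic per inference
    peak_macs_s: float   # accelerator peak throughput (MAC/s)
    bandwidth: float     # memory bandwidth (bytes/s)

def latency_s(c: Config) -> float:
    """Roofline-style estimate: the slower of compute time and memory time."""
    return max(c.macs / c.peak_macs_s, c.bytes_moved / c.bandwidth)

cfg = Config(macs=4e9, bytes_moved=50e6, peak_macs_s=2e12, bandwidth=25e9)
print(f"estimated latency: {latency_s(cfg) * 1e3:.2f} ms")
```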
no code implementations • 2 Mar 2021 • Fangxin Liu, Wenbo Zhao, Yilong Zhao, Zongwu Wang, Tao Yang, Zhezhi He, Naifeng Jing, Xiaoyao Liang, Li Jiang
However, it is challenging for crossbar architectures to exploit the sparsity in DNNs.
1 code implementation • ICCV 2021 • Fangxin Liu, Wenbo Zhao, Zhezhi He, Yanzhi Wang, Zongwu Wang, Changzhi Dai, Xiaoyao Liang, Li Jiang
Model quantization has emerged as a mandatory technique for efficient inference with advanced deep neural networks (DNNs).
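For context, a minimal sketch of generic symmetric uniform quantization (a textbook baseline, not the specific scheme this paper proposes):

```python
import torch

def quantize_symmetric(x: torch.Tensor, num_bits: int = 8):
    """Symmetric uniform quantization: x ≈ scale * q, with q an integer."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().max() / qmax
    q = torch.clamp(torch.round(x / scale), -qmax, qmax)
    return q.to(torch.int8), scale

w = torch.randn(256, 256)
q, scale = quantize_symmetric(w, num_bits=8)
print("max abs error:", (w - q.float() * scale).abs().max().item())
```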
1 code implementation • 19 Oct 2018 • Haiyue Song, Chengwen Xu, Qiang Xu, Zhuoran Song, Naifeng Jing, Xiaoyao Liang, Li Jiang
We thus propose a novel approximate computing architecture with a Multiclass-Classifier and Multiple Approximators (MCMA).
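A hedged sketch of the MCMA idea: a small multiclass classifier routes each input either to one of several region-specialized approximators or back to exact computation when no approximator is trusted. The target function, class boundaries, and approximators below are all illustrative assumptions:

```python
import numpy as np

def exact_fn(x):
    return np.sin(x) * np.exp(-0.1 * x)      # expensive "exact" function

# Hypothetical approximators, each specialized to an input region.
approximators = [
    lambda x: x - x**3 / 6,                   # small-x Taylor approximation
    lambda x: np.sin(x) * (1 - 0.1 * x),      # mid-range linearized decay
]

def classify(x):
    """Multiclass-classifier stub: pick an approximator, or -1 for exact."""
    if abs(x) < 0.5:
        return 0
    if x < 4.0:
        return 1
    return -1                                 # fall back to exact computation

def mcma(x):
    k = classify(x)
    return exact_fn(x) if k < 0 else approximators[k](x)

for x in (0.2, 2.0, 8.0):
    print(x, mcma(x), exact_fn(x))
```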
2 code implementations • 27 Jul 2018 • Zhenghao Peng, Xuyang Chen, Chengwen Xu, Naifeng Jing, Xiaoyao Liang, Cewu Lu, Li Jiang
To guarantee approximation quality, existing works deploy two neural networks (NNs), e.g., an approximator and a predictor.
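A minimal sketch of that two-NN setup: the predictor decides whether an input is safe to approximate; if so, the approximator replaces the exact function, otherwise the exact path runs. The tiny MLPs, the stand-in exact function, and the confidence threshold are assumptions for illustration:

```python
import torch
import torch.nn as nn

approximator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
predictor = nn.Sequential(nn.Linear(4, 16), nn.ReLU(),
                          nn.Linear(16, 1), nn.Sigmoid())

def exact_fn(x: torch.Tensor) -> torch.Tensor:
    return (x ** 2).sum(dim=-1, keepdim=True)   # stand-in exact computation

def approximate_call(x: torch.Tensor, threshold: float = 0.5):
    """Use the approximator only where the predictor deems it safe."""
    safe = predictor(x) > threshold              # (B, 1) boolean mask
    return torch.where(safe, approximator(x), exact_fn(x))

x = torch.randn(8, 4)
print(approximate_call(x).shape)  # torch.Size([8, 1])
```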
no code implementations • 23 May 2018 • Zhuoran Song, Ru Wang, Dongyu Ru, Hongru Huang, Zhenghao Peng, Jing Ke, Xiaoyao Liang, Li Jiang
In this paper, we propose Approximate Random Dropout, which replaces the conventional random dropout of neurons and synapses with regular, predefined patterns to eliminate unnecessary computation and data access.
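A hedged sketch of the idea of trading irregular random masks for a regular, predefined pattern: because the dropped positions are known in advance, the corresponding rows of the weight matrix can simply be skipped. The strided pattern family below is an illustrative assumption, not necessarily the paper's pattern design:

```python
import torch

def regular_dropout_mask(num_units: int, drop_every: int, offset: int):
    """Predefined regular pattern: drop every `drop_every`-th unit.
    Known-in-advance zeros let hardware skip the matching computations."""
    mask = torch.ones(num_units)
    mask[offset::drop_every] = 0.0
    return mask

x = torch.randn(32, 256)                 # batch of activations
for step in range(4):                    # rotate the offset instead of an RNG
    mask = regular_dropout_mask(256, drop_every=2, offset=step % 2)
    out = x * mask                       # structured, predictable sparsity
```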