no code implementations • 11 Mar 2022 • Zhuoran Song, Yihong Xu, Han Li, Naifeng Jing, Xiaoyao Liang, Li Jiang
The training phase of a deep neural network (DNN) consumes enormous processing time and energy.
1 code implementation • 9 Mar 2022 • Zhuoran Song, Yihong Xu, Zhezhi He, Li Jiang, Naifeng Jing, Xiaoyao Liang
We explore the sparsity in ViT and observe that informative patches and heads are sufficient for accurate image recognition.
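The idea of keeping only informative patches can be sketched as a simple top-k selection by importance score. This is an illustrative stand-in, not the paper's actual method: the scoring function and keep ratio are assumptions.

```python
import numpy as np

def prune_patches(patch_scores, keep_ratio=0.5):
    """Keep only the most informative patches by score.

    patch_scores: 1-D array with one importance score per patch
    (e.g. derived from attention weights -- an assumption here).
    Returns indices of the kept patches, highest-scoring first.
    """
    k = max(1, int(len(patch_scores) * keep_ratio))
    return np.argsort(patch_scores)[::-1][:k]

scores = np.array([0.05, 0.40, 0.10, 0.30, 0.15])
print(prune_patches(scores, keep_ratio=0.4))  # indices of the two highest-scoring patches
```

The same selection could be applied per attention head, keeping only heads whose scores pass the threshold.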
no code implementations • 2 Mar 2021 • Fangxin Liu, Wenbo Zhao, Yilong Zhao, Zongwu Wang, Tao Yang, Zhezhi He, Naifeng Jing, Xiaoyao Liang, Li Jiang
However, it is challenging for crossbar architectures to exploit the sparsity in DNNs.
1 code implementation • 19 Oct 2018 • Haiyue Song, Chengwen Xu, Qiang Xu, Zhuoran Song, Naifeng Jing, Xiaoyao Liang, Li Jiang
We thus propose a novel approximate computing architecture with a Multiclass-Classifier and Multiple Approximators (MCMA).
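A minimal sketch of the MCMA idea, under stated assumptions: a multiclass classifier routes each input either to one of several cheap approximators or back to the exact computation when no approximator is trusted. The classifier rule, the approximators, and the thresholds below are all illustrative placeholders, not the paper's design.

```python
import numpy as np

def exact_fn(x):
    # Placeholder for the expensive exact computation
    return np.sin(x)

# Two hypothetical approximators of increasing cost/accuracy
approximators = [
    lambda x: x,               # crude: sin(x) ~ x near 0
    lambda x: x - x**3 / 6.0,  # 3rd-order Taylor, wider valid range
]

def classify(x):
    """Toy multiclass router: return an approximator index,
    or -1 to fall back to the exact function."""
    if abs(x) < 0.3:
        return 0
    if abs(x) < 1.0:
        return 1
    return -1

def mcma(x):
    cls = classify(x)
    return exact_fn(x) if cls < 0 else approximators[cls](x)
```

Routing inputs among multiple specialized approximators, rather than forcing one network to cover the whole input space, is the core of the architecture described.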
2 code implementations • 27 Jul 2018 • Zhenghao Peng, Xuyang Chen, Chengwen Xu, Naifeng Jing, Xiaoyao Liang, Cewu Lu, Li Jiang
To guarantee the approximation quality, existing works deploy two neural networks (NNs), e.g., an approximator and a predictor.
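The approximator/predictor pairing can be sketched as follows: the predictor estimates, per input, whether the approximator's output would meet the quality bound; rejected inputs are recomputed exactly. All functions and thresholds here are illustrative assumptions, not the networks used in the paper.

```python
import numpy as np

def exact_fn(x):
    # Placeholder for the expensive exact computation
    return np.exp(x)

def approximator(x):
    # Cheap stand-in approximator: 2nd-order Taylor expansion of exp(x)
    return 1.0 + x + 0.5 * x * x

def predictor(x):
    """Stand-in predictor: accept the approximation only near 0,
    where the Taylor error is small (threshold is an assumption)."""
    return abs(x) < 0.5

def approximate_compute(x):
    # Use the cheap approximator when the predictor accepts,
    # otherwise fall back to the exact computation.
    return approximator(x) if predictor(x) else exact_fn(x)
```

The predictor thus acts as a quality gate: only inputs it accepts take the approximate fast path.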