TMA: Tera-MACs/W Neural Hardware Inference Accelerator with a Multiplier-less Massive Parallel Processor

8 Sep 2019 · Hyunbin Park, Dohyun Kim, Shiho Kim

Computationally intensive inference tasks of deep neural networks have driven the development of new accelerator architectures that reduce power consumption as well as latency. The key figure of merit for hardware inference accelerators is the number of multiply-and-accumulate operations per watt (MACs/W), and the state of the art remains at several hundred Giga-MACs/W. We propose a Tera-MACs/W neural hardware inference accelerator (TMA) with 8-bit activations and scalable integer weights of less than 1 byte. The architecture's main feature is a configurable neural processing element for matrix-vector operations. The proposed neural processing element contains a Multiplier-less Massive Parallel Processor that works without any multiplications, which makes it attractive for energy-efficient, high-performance neural network applications. We benchmark our system's latency, power, and performance using AlexNet trained on ImageNet. Finally, we compare our accelerator's throughput and power consumption to prior works. The proposed accelerator outperforms the state of the art in energy and area, achieving 2.3 TMACs/W at 1.0 V in 65 nm CMOS technology.
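The paper does not publish code, but the multiplier-less principle it describes can be sketched in software: an integer weight times an 8-bit activation reduces to shifts and adds over the weight's set bits, which is what a bit-level, multiplier-free MAC datapath exploits in hardware. Below is a minimal Python sketch of that decomposition; the function names `shift_add_mac` and `mac_vector` are illustrative only and are not from the paper, and the sketch makes no claim about the TMA's actual RTL or dataflow.

```python
def shift_add_mac(acc: int, activation: int, weight: int) -> int:
    """Accumulate activation * weight using only shifts and adds (no '*')."""
    negative = weight < 0
    w = -weight if negative else weight
    partial = 0
    shift = 0
    while w:
        if w & 1:                       # weight bit set: add shifted activation
            partial += activation << shift
        w >>= 1
        shift += 1
    return acc - partial if negative else acc + partial

def mac_vector(activations, weights) -> int:
    """Dot product of 8-bit activations with sub-byte integer weights."""
    acc = 0
    for a, w in zip(activations, weights):
        acc = shift_add_mac(acc, a, w)
    return acc

if __name__ == "__main__":
    acts = [12, 250, 7, 99]             # 8-bit activations
    wts = [3, -5, 1, 0]                 # integer weights under 1 byte
    assert mac_vector(acts, wts) == sum(a * w for a, w in zip(acts, wts))
    print(mac_vector(acts, wts))        # -1207
```

Because a sub-byte weight has at most a handful of set bits, each MAC costs only a few shift-add steps, which is why removing the multiplier can pay off in energy per operation when replicated across a massively parallel array.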


Categories


Distributed, Parallel, and Cluster Computing
Hardware Architecture
Signal Processing

Datasets

ImageNet