Robust Processing-In-Memory Neural Networks via Noise-Aware Normalization

7 Jul 2020  ·  Li-Huang Tsai, Shih-Chieh Chang, Yu-Ting Chen, Jia-Yu Pan, Wei Wei, Da-Cheng Juan

Analog computing hardware, such as Processing-in-memory (PIM) accelerators, has received growing attention for accelerating neural network computation. However, PIM accelerators often suffer from intrinsic noise in their physical components, making it challenging for neural network models to achieve the same performance as on digital hardware. Previous work on mitigating intrinsic noise assumed knowledge of the noise model and required retraining the neural networks accordingly. In this paper, we propose a noise-agnostic method that achieves robust neural network performance under any noise setting. Our key observation is that the performance degradation stems from noise-induced distribution shifts in the network activations. To track these shifts and calibrate the biased distributions, we propose a "noise-aware" batch normalization layer that aligns the activation distributions under the variational noise inherent in analog environments. Our method is simple, easy to implement, general across noise settings, and requires no retraining of the models. We conduct experiments on several computer vision tasks, including classification, object detection, and semantic segmentation. The results demonstrate the effectiveness of our method, which achieves robust performance under a wide range of noise settings and is more reliable than existing methods. We believe this simple yet general method can facilitate the adoption of analog computing devices for neural networks.
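To make the idea concrete, below is a minimal PyTorch sketch of recalibrating batch normalization statistics under simulated analog noise. The additive-Gaussian noise model, the NoisyLinear layer, and the calibrate_bn helper are illustrative assumptions for exposition, not the authors' exact implementation; in practice, calibration only needs forward passes through the noisy hardware, with no gradients or retraining.

import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer with additive Gaussian output noise, standing in for
    the intrinsic noise of an analog PIM device (assumed noise model)."""
    def __init__(self, in_features, out_features, noise_std=0.1):
        super().__init__(in_features, out_features)
        self.noise_std = noise_std

    def forward(self, x):
        out = super().forward(x)
        # Perturb the pre-activation, mimicking analog computation noise.
        return out + self.noise_std * torch.randn_like(out)

@torch.no_grad()
def calibrate_bn(model, loader, device="cpu"):
    """Refresh BatchNorm running statistics by forwarding calibration
    batches through the noisy model; all weights stay frozen."""
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.train()  # accumulate fresh mean/var from noisy activations
    for x, _ in loader:
        model(x.to(device))
    model.eval()
    return model

In this sketch, one would replace the model's layers with noisy counterparts (or run directly on the analog device) and call calibrate_bn(model, calibration_loader) before evaluation. Only the BatchNorm statistics change, so the cost is a handful of forward passes rather than a full retraining run.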
