Search Results for author: Weng-Fai Wong

Found 10 papers, 3 papers with code

HyperSNN: A new efficient and robust deep learning model for resource constrained control applications

no code implementations • 16 Aug 2023 • Zhanglu Yan, Shida Wang, Kaiwen Tang, Weng-Fai Wong

In light of the increasing adoption of edge computing in areas such as intelligent furniture, robotics, and smart homes, this paper introduces HyperSNN, an innovative method for control tasks that uses spiking neural networks (SNNs) in combination with hyperdimensional computing.

Acrobot • Edge-computing +1

DeepFire2: A Convolutional Spiking Neural Network Accelerator on FPGAs

no code implementations • 9 May 2023 • Myat Thu Linn Aung, Daniel Gerlinghoff, Chuping Qu, Liwei Yang, Tian Huang, Rick Siow Mong Goh, Tao Luo, Weng-Fai Wong

Brain-inspired spiking neural networks (SNNs) replace the multiply-accumulate operations of traditional neural networks with integrate-and-fire neurons, with the goal of achieving greater energy efficiency.
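To make the idea in the abstract concrete, here is a minimal NumPy sketch of a leaky integrate-and-fire layer; the shapes, leak factor, and threshold are illustrative assumptions, not DeepFire2's actual design:

```python
import numpy as np

def lif_layer(spikes, weights, v, threshold=1.0, leak=0.9):
    """One timestep of a leaky integrate-and-fire layer.

    Inputs are binary spikes, so the usual dense multiply-accumulate
    degenerates into selectively accumulating weight columns.
    """
    v = leak * v + weights @ spikes            # integrate (spikes are 0/1)
    out = (v >= threshold).astype(np.float64)  # fire when threshold is crossed
    v = np.where(out == 1.0, 0.0, v)           # reset fired neurons
    return out, v

# toy usage: 4 input spikes driving 3 neurons
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 4))
v = np.zeros(3)
s_in = np.array([1.0, 0.0, 1.0, 0.0])
out, v = lif_layer(s_in, w, v)
```

Because the membrane potential `v` carries state across timesteps, a real SNN simulation would call this layer once per timestep over the whole spike train.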

Efficient Hyperdimensional Computing

1 code implementation • 26 Jan 2023 • Zhanglu Yan, Shida Wang, Kaiwen Tang, Weng-Fai Wong

Hyperdimensional computing (HDC) is a classification method that uses high-dimensional binary vectors and the majority rule.

Image Classification
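The abstract above names the two HDC ingredients: high-dimensional binary vectors and the majority rule. A minimal sketch of how they yield a classifier (random data, dimensionality, and noise rates are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def majority_bundle(hvs):
    """Combine binary hypervectors with the per-dimension majority rule."""
    return (hvs.sum(axis=0) > hvs.shape[0] / 2).astype(np.uint8)

def hamming_classify(query, prototypes):
    """Return the index of the prototype nearest in Hamming distance."""
    return int(np.argmin([(query != p).sum() for p in prototypes]))

rng = np.random.default_rng(7)
D = 10000                                          # hypervector dimensionality
base = rng.integers(0, 2, (2, D), dtype=np.uint8)  # two class "ideal" vectors

def noisy(hv, p):
    """Flip each bit of hv independently with probability p."""
    return hv ^ (rng.random(D) < p).astype(np.uint8)

# class prototypes: majority-bundle a few noisy training samples per class
protos = [majority_bundle(np.stack([noisy(b, 0.1) for _ in range(5)]))
          for b in base]
pred = hamming_classify(noisy(base[1], 0.2), protos)  # a noisy class-1 query
```

The high dimensionality is what makes this work: random vectors in {0,1}^10000 are nearly orthogonal (Hamming distance ≈ D/2), so even a heavily corrupted query stays far closer to its own class prototype than to any other.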

Desire Backpropagation: A Lightweight Training Algorithm for Multi-Layer Spiking Neural Networks based on Spike-Timing-Dependent Plasticity

1 code implementation • 10 Nov 2022 • Daniel Gerlinghoff, Tao Luo, Rick Siow Mong Goh, Weng-Fai Wong

Spiking neural networks (SNNs) are a viable alternative to conventional artificial neural networks when resource efficiency and computational complexity are of importance.

Low Latency Conversion of Artificial Neural Network Models to Rate-encoded Spiking Neural Networks

no code implementations • 27 Oct 2022 • Zhanglu Yan, Jun Zhou, Weng-Fai Wong

The maximum number of spikes in this time window is also the latency of the network in performing a single inference, and it determines the overall energy efficiency of the model.
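The coupling between the time window, latency, and spike count can be seen in a minimal rate-encoding sketch (Bernoulli spike trains; the values and window length are illustrative, not the paper's conversion scheme):

```python
import numpy as np

def rate_encode(activations, T, rng):
    """Encode activations in [0, 1] as Bernoulli spike trains of length T.

    A value a produces roughly a*T spikes over T timesteps, so the window
    length T is simultaneously the inference latency and an upper bound on
    the spikes (and hence energy) spent per neuron.
    """
    return (rng.random((T, activations.size)) < activations).astype(np.uint8)

rng = np.random.default_rng(1)
a = np.array([0.1, 0.5, 0.9])   # ANN activations to convert
T = 1000                        # time window = latency in timesteps
spikes = rate_encode(a, T, rng)
rates = spikes.mean(axis=0)     # empirical firing rates, ≈ a for large T
```

Shrinking T lowers latency and energy but coarsens the rate code: with T timesteps a neuron can only represent T + 1 distinct rates, which is why low-latency conversion is the hard part.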

Optimizing for In-memory Deep Learning with Emerging Memory Technology

no code implementations • 1 Dec 2021 • Zhehui Wang, Tao Luo, Rick Siow Mong Goh, Wei Zhang, Weng-Fai Wong

In-memory deep learning has already demonstrated orders-of-magnitude gains in performance density and energy efficiency.

DTNN: Energy-efficient Inference with Dendrite Tree Inspired Neural Networks for Edge Vision Applications

no code implementations • 25 May 2021 • Tao Luo, Wai Teng Tang, Matthew Kay Fei Lee, Chuping Qu, Weng-Fai Wong, Rick Goh

DTNN achieved significant energy savings (19.4X and 64.9X improvement on ResNet-18 and VGG-11 with ImageNet, respectively) with negligible loss of accuracy.

Quantization

Shenjing: A low power reconfigurable neuromorphic accelerator with partial-sum and spike networks-on-chip

1 code implementation • 25 Nov 2019 • Bo Wang, Jun Zhou, Weng-Fai Wong, Li-Shiuan Peh

We show that conventional artificial neural networks (ANNs) such as multilayer perceptrons, convolutional neural networks, and the latest residual neural networks can be mapped successfully onto Shenjing, realizing ANNs with the energy efficiency of SNNs.
