Efficient Video and Audio Processing with Loihi 2

Loihi 2 is an asynchronous, brain-inspired research processor that generalizes several fundamental elements of neuromorphic architecture, such as stateful neuron models communicating with event-driven spikes, in order to address limitations of the first-generation Loihi. Here we explore and characterize some of these generalizations, including sigma-delta encapsulation, resonate-and-fire neurons, and integer-valued spikes, as applied to standard video, audio, and signal processing tasks. We find that these neuromorphic approaches can provide orders-of-magnitude gains in combined efficiency and latency (energy-delay product) for feed-forward and convolutional neural networks applied to video, audio denoising, and spectral transforms, compared to state-of-the-art solutions.
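To make the sigma-delta idea concrete, the sketch below shows one plausible way such encoding with integer-valued ("graded") spikes can work: each time step, a unit transmits only the quantized change in its activation since its last message, so slowly varying video or audio features produce mostly zero-valued (and therefore skippable) messages. This is a minimal illustration under our own assumptions; the function names, the `threshold` parameter, and the rounding scheme are hypothetical and do not reflect the paper's implementation or the Lava API.

```python
import numpy as np

def sigma_delta_encode(frames, threshold=1.0):
    """Sigma-delta encode a sequence of activation frames (illustrative sketch).

    Only the accumulated change since the last transmitted value is sent,
    quantized to an integer-valued "graded" spike; the sub-threshold
    remainder is carried over as a residual.
    """
    ref = np.zeros_like(frames[0], dtype=float)       # last transmitted (reconstructed) value
    residual = np.zeros_like(ref)                     # accumulated untransmitted error
    spikes = []
    for frame in frames:
        delta = frame - ref + residual                # change since last message
        graded = np.fix(delta / threshold).astype(int)  # integer-valued spike payload
        spikes.append(graded)
        sent = graded * threshold
        ref = ref + sent
        residual = delta - sent                       # keep remainder for future steps
    return spikes

def sigma_delta_decode(spikes, threshold=1.0):
    """Reconstruct the activation sequence by accumulating graded spikes."""
    out, acc = [], np.zeros_like(spikes[0], dtype=float)
    for s in spikes:
        acc = acc + s * threshold
        out.append(acc.copy())
    return out

if __name__ == "__main__":
    # Slowly varying input: most graded spikes are zero, so little traffic is generated.
    frames = [np.full((4, 4), v) for v in (0.0, 0.1, 0.15, 1.3, 1.32)]
    spikes = sigma_delta_encode(frames)
    recon = sigma_delta_decode(spikes)
    print([int(np.count_nonzero(s)) for s in spikes])  # sparsity per frame
```

The key design point this illustrates is that downstream layers only do work when a change exceeds the threshold, which is the mechanism behind the sparsity-driven energy and latency gains the abstract refers to.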
