1 code implementation • 12 Dec 2023 • Chengting Yu, Fengzhao Zhang, Hanzhi Ma, Aili Wang, Erping Li
Traditional end-to-end (E2E) training of deep networks necessitates storing intermediate activations for back-propagation, resulting in a large memory footprint on GPUs and restricted model parallelization.
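To make the memory argument concrete, here is a minimal sketch of why back-propagation stores activations and how decoupled, blockwise training with local losses avoids it. The two-block layout, the auxiliary head, and all hyperparameters are illustrative assumptions, not the paper's method:

    import torch
    import torch.nn as nn

    # Two blocks trained with local losses; detaching between blocks means
    # the backward pass for block2 never needs block1's intermediate activations.
    block1 = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
    block2 = nn.Sequential(nn.Linear(256, 10))
    head1 = nn.Linear(256, 10)  # illustrative auxiliary local classifier for block1

    params = list(block1.parameters()) + list(block2.parameters()) + list(head1.parameters())
    opt = torch.optim.SGD(params, lr=0.1)
    criterion = nn.CrossEntropyLoss()

    x = torch.randn(32, 784)
    y = torch.randint(0, 10, (32,))

    h1 = block1(x)
    loss1 = criterion(head1(h1), y)            # local loss for block1
    loss2 = criterion(block2(h1.detach()), y)  # detach: the graph is cut here

    opt.zero_grad()
    (loss1 + loss2).backward()  # each block's graph is built and freed independently
    opt.step()

Because `h1.detach()` severs the autograd graph, each block's activations can be released after its own local backward pass instead of being held until a single global one, which is also what permits the blocks to be trained in parallel.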
1 code implementation • 11 Oct 2022 • Chengting Yu, Zheming Gu, Da Li, Gaoang Wang, Aili Wang, Erping Li
We show that endowing synaptic models with temporal dependencies can improve the performance of SNNs on classification tasks (a minimal sketch follows the leaderboard line below).
Ranked #4 on Audio Classification on SHD
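As a rough illustration of a synapse with temporal dependencies, the sketch below drives a leaky integrate-and-fire layer through a first-order, exponentially decaying synaptic current, so each input spike influences the membrane over several timesteps rather than one. The specific dynamics, decay constants, and shapes are assumptions for illustration, not the paper's synaptic model:

    import torch

    def run_lif_with_synapse(spikes_in, w, alpha=0.9, beta=0.9, v_th=1.0):
        """LIF neurons driven by a first-order synaptic current.
        spikes_in: (T, batch, n_in) binary spike trains; w: (n_in, n_out).
        alpha/beta are illustrative decay constants."""
        T, batch, _ = spikes_in.shape
        n_out = w.shape[1]
        i_syn = torch.zeros(batch, n_out)  # synaptic current state
        v = torch.zeros(batch, n_out)      # membrane potential
        out = []
        for t in range(T):
            # the synapse carries its own temporal dependency: the current
            # decays over time instead of being a memoryless weight product
            i_syn = alpha * i_syn + spikes_in[t] @ w
            v = beta * v + i_syn
            spk = (v >= v_th).float()
            v = v - spk * v_th             # soft reset after a spike
            out.append(spk)
        return torch.stack(out)

    spikes = (torch.rand(50, 8, 100) < 0.1).float()
    w = 0.05 * torch.randn(100, 20)
    print(run_lif_with_synapse(spikes, w).shape)  # torch.Size([50, 8, 20])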
1 code implementation • 21 Apr 2022 • Chengting Yu, Yangkai Du, Mufeng Chen, Aili Wang, Gaoang Wang, Erping Li
For plasticity, we propose a trainable convolutional synapse that models the spike response current, enhancing the diversity of spiking neurons for temporal feature extraction (see the sketch after the leaderboard line below).
Ranked #7 on Audio Classification on SHD
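One plausible reading of a trainable convolutional synapse is a depthwise causal 1D convolution whose learned kernel acts as a spike response filter, turning each incoming spike into an extended current trace. The class name, kernel size, and shapes below are hypothetical, not the paper's implementation:

    import torch
    import torch.nn as nn

    class ConvSynapse(nn.Module):
        """Trainable temporal filter per input channel: each incoming spike
        is convolved with a learned kernel to produce a response current."""
        def __init__(self, n_in, kernel_size=16):
            super().__init__()
            self.kernel_size = kernel_size
            # depthwise conv: one learnable response kernel per channel
            self.conv = nn.Conv1d(n_in, n_in, kernel_size, groups=n_in, bias=False)

        def forward(self, spikes):                   # spikes: (batch, n_in, T)
            pad = self.kernel_size - 1
            x = nn.functional.pad(spikes, (pad, 0))  # left-pad for causality
            return self.conv(x)                      # currents: (batch, n_in, T)

    syn = ConvSynapse(n_in=100)
    spikes = (torch.rand(8, 100, 50) < 0.1).float()
    print(syn(spikes).shape)  # torch.Size([8, 100, 50])

Unlike a fixed exponential kernel, the filter shape here is learned end-to-end, which is what lets different synapses specialize to different temporal features.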
1 code implementation • 28 Mar 2022 • Tianning Zhang, Tianqi Chen, Erping Li, Bo Yang, L. K. Ang
A tensor network, as a factorization of tensors, aims to support the operations common to ordinary tensors, such as addition, contraction, and stacking.
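To make contraction and addition concrete, the sketch below represents a three-index tensor as a chain of factors (a matrix-product-state-like network), contracts it with einsum, and adds two networks in factored form by direct-summing their bond dimensions. The factor shapes are arbitrary, and this illustrates standard tensor-network operations rather than the library described in the paper:

    import torch

    # T[i, j, k] = sum over bonds a, b of A[i, a] * B[a, j, b] * C[b, k]
    A = torch.randn(4, 3)     # (phys_i, bond_a)
    B = torch.randn(3, 5, 2)  # (bond_a, phys_j, bond_b)
    C = torch.randn(2, 6)     # (bond_b, phys_k)

    # contraction: sum over the shared bond indices to recover the full tensor
    full = torch.einsum('ia,ajb,bk->ijk', A, B, C)
    print(full.shape)  # torch.Size([4, 5, 6])

    # addition in factored form: direct-sum the bond dimensions, with the
    # interior factor stacked block-diagonally, so the result stays factored
    A2, B2, C2 = torch.randn(4, 3), torch.randn(3, 5, 2), torch.randn(2, 6)
    A_sum = torch.cat([A, A2], dim=1)            # bond_a: 3 + 3
    C_sum = torch.cat([C, C2], dim=0)            # bond_b: 2 + 2
    B_sum = torch.zeros(6, 5, 4)
    B_sum[:3, :, :2] = B                         # block-diagonal placement
    B_sum[3:, :, 2:] = B2
    full_sum = torch.einsum('ia,ajb,bk->ijk', A_sum, B_sum, C_sum)
    full2 = torch.einsum('ia,ajb,bk->ijk', A2, B2, C2)
    assert torch.allclose(full_sum, full + full2, atol=1e-5)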
no code implementations • 24 Feb 2022 • Tianning Zhang, Yee Sin Ang, Erping Li, Chun Yun Kee, L. K. Ang
For metasurfaces, it is difficult to make quantitative comparisons between different ML models without a common yet sufficiently complex benchmark dataset of the kind available in disciplines such as image classification.