no code implementations • 14 Jun 2021 • Dongjin Lee, Seongsik Park, Jongwan Kim, Wuhyeong Doh, Sungroh Yoon
On the MNIST dataset, our proposed student SNN achieves up to 0.09% higher accuracy and produces 65% fewer spikes than the student SNN trained with the conventional knowledge distillation method.
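The conventional knowledge distillation baseline mentioned above trains the student against the teacher's softened output distribution. As a rough illustration only (the function names and temperature value here are hypothetical, not taken from the paper), the standard distillation loss of Hinton et al. can be sketched as:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of raw logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between softened teacher and student distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # soft student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

In practice this distillation term is combined with the usual cross-entropy loss on the true labels; the paper's SNN-specific training departs from this baseline, which is not reproduced here.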