no code implementations • 30 Nov 2020 • Jeong-Hoe Ku, Jihun Oh, YoungYoon Lee, Gaurav Pooniwala, SangJeong Lee
This paper aims to provide a selective survey of the knowledge distillation (KD) framework so that researchers and practitioners can leverage it to develop new, optimized models in the deep neural network field.
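For readers new to KD, a minimal sketch of the canonical teacher-student distillation loss (soft targets at temperature T, as in Hinton et al., 2015) follows; the function name and the default values of `T` and `alpha` are illustrative assumptions, not choices from the surveyed paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Mix the soft-target KL term (teacher -> student) with the hard-label CE term."""
    # Soften both distributions with temperature T; the T^2 factor keeps
    # soft-target gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```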
no code implementations • 13 Aug 2020 • Jihun Oh, SangJeong Lee, Meejeong Park, Pooni Walagaurav, Kiseok Kwon
As a result, our proposed method achieved a top-1 accuracy of 69.78%~70.96% on MobileNets and showed robust performance across varying network models and tasks, competitive with channel-wise quantization results.
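As context for the channel-wise comparison, here is a minimal sketch contrasting layer-wise (per-tensor) and channel-wise symmetric weight quantization; the function and its parameters are hypothetical illustrations, not the paper's proposed method.

```python
import numpy as np

def quantize_symmetric(w, num_bits=8, per_channel=False):
    """Symmetric uniform quantization of a conv weight tensor (O, I, kH, kW).

    per_channel=False: one scale for the whole layer (layer-wise).
    per_channel=True:  one scale per output channel (channel-wise).
    """
    qmax = 2 ** (num_bits - 1) - 1  # e.g. 127 for 8-bit
    if per_channel:
        # Max magnitude per output channel, broadcast back onto w's shape.
        max_abs = np.abs(w).reshape(w.shape[0], -1).max(axis=1)
        scale = (max_abs / qmax).reshape(-1, 1, 1, 1)
    else:
        scale = np.abs(w).max() / qmax  # single scale for the layer
    scale = np.maximum(scale, 1e-12)   # guard against all-zero channels
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale  # dequantized weights, for simulating accuracy impact
```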
1 code implementation • 29 Apr 2019 • Jihun Oh, Kyunghyun Cho, Joan Bruna
As an efficient and scalable graph neural network, GraphSAGE enables inductive inference over unseen nodes or graphs by aggregating subsampled local neighborhoods and learning via mini-batch gradient descent.
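To make the aggregation step concrete, here is a minimal NumPy sketch of one GraphSAGE layer with uniform neighbor subsampling and mean aggregation; the names `W_self`, `W_neigh`, and `num_samples` are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def sage_mean_layer(h, adj_list, W_self, W_neigh, num_samples=10, rng=None):
    """One GraphSAGE layer: uniform neighbor subsampling + mean aggregation.

    h: (N, d_in) node features; adj_list: per-node arrays of neighbor ids.
    """
    rng = rng or np.random.default_rng(0)
    out = np.empty((h.shape[0], W_self.shape[1]))
    for v, neigh in enumerate(adj_list):
        # Subsample a fixed-size neighborhood (with replacement), which
        # bounds per-node cost and enables mini-batch training.
        sampled = rng.choice(neigh, size=num_samples, replace=True) if len(neigh) else []
        agg = h[sampled].mean(axis=0) if len(sampled) else np.zeros(h.shape[1])
        out[v] = h[v] @ W_self + agg @ W_neigh
    return np.maximum(out, 0.0)  # ReLU nonlinearity
```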