no code implementations • 4 Aug 2022 • Jonghu Jeong, Minyong Cho, Philipp Benz, Jinwoo Hwang, Jeewook Kim, Seungkwan Lee, Tae-hoon Kim
We further conduct a user study to qualitatively assess our defense against the reconstruction attack.
1 code implementation • NeurIPS 2019 • Jaemin Yoo, Minyong Cho, Taebum Kim, U Kang
Knowledge distillation transfers the knowledge of a large neural network to a smaller one, and has been shown to be effective especially when the amount of training data is limited or the student model is very small.
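In the standard soft-target formulation, the student is trained on a weighted mix of the usual hard-label cross-entropy and a KL term that matches the teacher's temperature-softened output distribution. Below is a minimal sketch in PyTorch, assuming classification logits; the temperature T, weight alpha, and the name distillation_loss are illustrative choices, not the paper's exact formulation:

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        # Soft targets: the temperature-softened teacher distribution
        # guides the student (illustrative T and alpha, not from the paper).
        soft_loss = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)  # T^2 keeps the soft-loss gradient scale comparable
        # Hard targets: standard cross-entropy on the (limited) labeled data.
        hard_loss = F.cross_entropy(student_logits, labels)
        return alpha * soft_loss + (1 - alpha) * hard_loss

Scaling the soft term by T^2 keeps its gradient magnitude roughly constant as the temperature varies, which is why the weighted sum remains well balanced for different T.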