Search Results for author: Minyong Cho

Found 2 papers, 1 paper with code

Knowledge Extraction with No Observable Data

1 code implementation • NeurIPS 2019 • Jaemin Yoo, Minyong Cho, Taebum Kim, U Kang

Knowledge distillation transfers the knowledge of a large neural network into a smaller one, and has been shown to be effective especially when the amount of training data is limited or the student model is very small.

Data-free Knowledge Distillation
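The abstract above refers to standard knowledge distillation; the sketch below is a minimal example of that baseline loss (the softened-teacher KL plus hard-label cross-entropy of Hinton et al.), not the paper's data-free method. The temperature and mixing weight alpha are illustrative assumptions.

```python
# Minimal sketch of a standard knowledge distillation loss (PyTorch).
# Not the paper's data-free approach; hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Combine a soft KL term against the teacher with the usual hard-label loss."""
    # Soften both distributions with the temperature before comparing them.
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    # KL divergence between softened teacher and student outputs, scaled by T^2
    # so gradient magnitudes stay comparable across temperatures.
    kd_term = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    # Standard cross-entropy on the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```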
