Interpretable Few-Shot Learning via Linear Distillation

It is important to develop mathematically tractable models that can interpret the knowledge extracted from data and provide reasonable predictions. In this paper, we present Linear Distillation Learning, a simple approach to improving the performance of linear neural networks. Our method trains a separate linear function for each class in a dataset to mimic the output of a teacher linear network on that class's examples. We evaluated our model on the MNIST and Omniglot datasets in a few-shot learning setting, where it outperformed other interpretable models such as classical logistic regression.
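One plausible reading of the per-class distillation idea can be sketched as follows. This is an illustrative toy, not the paper's implementation: the teacher here is a fixed random linear map (the paper uses a trained linear network), the data are synthetic Gaussians standing in for image features, and the nearest-distillation-error decision rule is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy few-shot setup: 2 classes, 5 support examples each, 20-D features
# (a stand-in for flattened image features such as MNIST pixels).
d, n_shot, n_classes = 20, 5, 2
X = [rng.normal(loc=c, size=(n_shot, d)) for c in range(n_classes)]

# "Teacher": a fixed random linear map for illustration only;
# in the paper this role is played by a trained linear network.
W_teacher = rng.normal(size=(d, d))

def teacher(x):
    return x @ W_teacher

# Per-class "students": a separate linear map per class, fitted by least
# squares to imitate the teacher's outputs on that class's examples only.
students = [np.linalg.lstsq(Xc, teacher(Xc), rcond=None)[0] for Xc in X]

def predict(x):
    # Hypothetical decision rule: pick the class whose student imitates
    # the teacher best on x (smallest distillation error).
    errs = [np.linalg.norm(x @ W - teacher(x)) for W in students]
    return int(np.argmin(errs))
```

Because each class gives fewer examples than feature dimensions, the least-squares system is consistent and every student reproduces the teacher exactly on its own support set; the interesting question, which the paper evaluates, is how such per-class linear students generalize to unseen query points.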
