Interpretable Few-Shot Learning via Linear Distillation

13 Jun 2019 · Arip Asadulaev, Igor Kuznetsov, Andrey Filchenkov

It is important to develop mathematically tractable models that can interpret knowledge extracted from data and provide reasonable predictions. In this paper, we present Linear Distillation Learning, a simple remedy to improve the performance of linear neural networks. Our approach uses a separate linear function for each class in a dataset, trained to simulate the output of a teacher linear network on that class's examples. We evaluated our model on the MNIST and Omniglot datasets in a few-shot learning setting, where it showed better results than other interpretable models such as classical logistic regression.
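To make the idea concrete, here is a minimal NumPy sketch of per-class linear distillation on synthetic data. It is an illustration under stated assumptions, not the authors' implementation: the teacher here is a fixed random linear map rather than a trained linear network, the data are toy Gaussian clusters, and prediction assigns a query to the class whose least-squares student best reproduces the teacher's output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all sizes are illustrative assumptions): D input features,
# H teacher output dims, C classes, N "shots" per class.
D, H, C, N = 64, 32, 5, 5

# Frozen linear "teacher". Here it is just a random projection; the paper's
# teacher is a trained linear network, whose details are omitted in this sketch.
W_teacher = rng.normal(size=(D, H))

# Toy few-shot data: each class is a Gaussian cloud around its own random mean.
means = rng.normal(size=(C, D)) * 3.0
X_train = [means[c] + rng.normal(size=(N, D)) for c in range(C)]

def fit_student(X):
    """Per-class linear student: least-squares map from X to the teacher's
    outputs on X (minimum-norm solution, since N < D in the few-shot regime)."""
    targets = X @ W_teacher
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return W

students = [fit_student(X) for X in X_train]

def predict(x):
    """Assign x to the class whose student best reproduces the teacher."""
    target = x @ W_teacher
    errors = [np.linalg.norm(x @ W - target) for W in students]
    return int(np.argmin(errors))

# Fresh query point from class 3; the class-3 student should match best.
query = means[3] + rng.normal(size=D)
print(predict(query))
```

Because every component is a plain linear map, each prediction can be traced back to a weighted sum of input features, which is the source of the interpretability the abstract claims.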


Datasets

MNIST · Omniglot

