Contrastive Representation Distillation

ICLR 2020 · Yonglong Tian, Dilip Krishnan, Phillip Isola

Often we wish to transfer representational knowledge from one neural network to another. Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator. Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network. We demonstrate that this objective ignores important structural knowledge of the teacher network. This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data. We formulate this objective as contrastive learning. Experiments demonstrate that our resulting new objective outperforms knowledge distillation and other cutting-edge distillers on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer. Our method sets a new state-of-the-art in many transfer tasks, and sometimes even outperforms the teacher network when combined with knowledge distillation. Code: http://github.com/HobbitLong/RepDistiller.
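The abstract contrasts two objectives: standard knowledge distillation, which matches temperature-softened teacher and student output distributions via KL divergence, and the proposed contrastive formulation, which pulls a student's representation of an input toward the teacher's representation of the same input while pushing it away from the teacher's representations of other inputs. The sketch below illustrates both ideas in PyTorch. It is a minimal approximation, not the authors' implementation: the function names and temperature values are placeholders, negatives are drawn from the current batch, and student and teacher features are assumed to share a dimensionality, whereas the paper's CRD learns small embedding heads and draws negatives from a large memory buffer (see the linked repository for the actual code).

```python
# Minimal sketch with assumed names; not the released RepDistiller code.
import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, T=4.0):
    """Standard knowledge distillation: KL divergence between
    temperature-softened teacher and student output distributions."""
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)


def contrastive_distill_loss(f_s, f_t, temperature=0.1):
    """InfoNCE-style contrastive objective on (student, teacher) feature pairs.

    The positive for each student feature is the teacher feature of the same
    image; negatives are the teacher features of the other images in the batch.
    Assumes f_s and f_t already have the same dimensionality (CRD instead uses
    learned embedding heads and a memory buffer of negatives).
    """
    f_s = F.normalize(f_s, dim=1)               # (B, d) student features
    f_t = F.normalize(f_t, dim=1)               # (B, d) teacher features
    logits = f_s @ f_t.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(f_s.size(0), device=f_s.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)
```

In training, such a contrastive term would typically be added to the student's cross-entropy loss, optionally together with the KD term, each with its own weight; the abstract notes that combining the contrastive objective with knowledge distillation sometimes lets the student outperform the teacher.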


Results from the Paper


Task                    Dataset    Model                                    Metric              Value  Global Rank
Knowledge Distillation  CIFAR-100  resnet8x4 (T: resnet32x4, S: resnet8x4)  Top-1 Accuracy (%)  75.51  #13
Knowledge Distillation  CIFAR-100  resnet110 (T: resnet110, S: resnet20)    Top-1 Accuracy (%)  71.56  #23
Knowledge Distillation  CIFAR-100  vgg8 (T: vgg13, S: vgg8)                 Top-1 Accuracy (%)  74.29  #17
Knowledge Distillation  ImageNet   CRD (T: ResNet-34, S: ResNet-18)         Top-1 Accuracy (%)  71.38  #36

Methods