Semi-Supervised Learning, Transfer Learning, and Knowledge Distillation with SimCLR

2 Aug 2021 · Khoi Nguyen, Yen Nguyen, Bao Le

Recent breakthroughs in semi-supervised learning have achieved results that match state-of-the-art traditional supervised methods. Most successful semi-supervised approaches in computer vision focus on leveraging huge amounts of unlabeled data: learning general representations via data augmentation and transformation, creating pseudo-labels, implementing different loss functions, and eventually transferring this knowledge to smaller, more task-specific models. In this paper, we analyze three aspects of SimCLR, the current state-of-the-art semi-supervised learning framework for computer vision. First, we analyze the properties of contrastive learning during fine-tuning, since contrastive learning is what makes this method so successful. Second, we study knowledge distillation through the teacher-student paradigm; we observe that distillation achieves better results when the teacher and the student share the same base model. Finally, we study how transfer learning works and how it relates to the number of classes in different datasets. Our results indicate that transfer learning performs better when the number of classes is smaller.
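
For context, the contrastive objective SimCLR optimizes is the NT-Xent (normalized temperature-scaled cross-entropy) loss over pairs of augmented views of the same image. Below is a minimal PyTorch sketch of that loss; the function name and the temperature value are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss as used by SimCLR.

    z1, z2: [N, D] projection-head outputs for two augmented views
    of the same N images.
    """
    N = z1.size(0)
    # L2-normalize so the dot products below are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # [2N, D]
    sim = z @ z.t() / temperature                            # [2N, 2N]
    # Mask self-similarity so each row's softmax runs over the other 2N-1 samples.
    sim.fill_diagonal_(float('-inf'))
    # Row i holds one view of image k; its positive is the other view of image k.
    targets = torch.cat([torch.arange(N, 2 * N), torch.arange(N)]).to(z1.device)
    return F.cross_entropy(sim, targets)
```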
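
Knowledge distillation in this setting typically trains the student to match the teacher's softened output distribution, in the style of Hinton et al. (2015). A minimal sketch under that assumption follows; the function name and temperature are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label distillation loss.

    Both logits tensors: [N, num_classes]. The T^2 factor keeps gradient
    magnitudes comparable across different temperatures.
    """
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_probs = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(log_probs, soft_targets, reduction='batchmean') * temperature ** 2
```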
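
Transfer learning here amounts to reusing a pretrained encoder and fine-tuning a new classification head sized to the target dataset's class count. A sketch, using a torchvision ResNet-50 as a stand-in for the pretrained backbone (the helper name is hypothetical):

```python
import torch.nn as nn
from torchvision import models

def build_finetune_model(num_classes):
    # Pretrained backbone; in SimCLR the encoder would come from
    # contrastive pretraining rather than supervised ImageNet weights.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    # Replace the classification head to match the target dataset.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model
```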
