SelfMatch: Combining Contrastive Self-Supervision and Consistency for Semi-Supervised Learning

16 Jan 2021 · Byoungjip Kim, Jinho Choo, Yeong-Dae Kwon, Seongho Joe, Seungjai Min, Youngjune Gwon

This paper introduces SelfMatch, a semi-supervised learning method that combines the power of contrastive self-supervised learning and consistency regularization. SelfMatch consists of two stages: (1) self-supervised pre-training based on contrastive learning and (2) semi-supervised fine-tuning based on augmentation consistency regularization. We empirically demonstrate that SelfMatch achieves state-of-the-art results on standard benchmark datasets such as CIFAR-10 and SVHN. For example, on CIFAR-10 with 40 labeled examples, SelfMatch achieves 93.19% accuracy, outperforming strong previous methods such as MixMatch (52.46%), UDA (70.95%), ReMixMatch (80.9%), and FixMatch (86.19%). We note that SelfMatch can close the gap between supervised learning (95.87%) and semi-supervised learning (93.19%) using only a few labels per class.
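The page does not include code, so the following is only a minimal PyTorch sketch of the two losses behind the two stages described above, under stated assumptions: a SimCLR-style NT-Xent contrastive loss for the self-supervised pre-training stage and a FixMatch-style pseudo-label consistency loss for the semi-supervised fine-tuning stage. The encoder, augmentations, and hyperparameter values are hypothetical placeholders, not the authors' exact implementation.

```python
# Sketch of the two SelfMatch training signals (assumptions: SimCLR-style
# NT-Xent loss for stage 1, FixMatch-style consistency loss for stage 2).
import torch
import torch.nn.functional as F


def nt_xent_loss(z1, z2, temperature=0.5):
    """Stage 1: contrastive loss over two augmented views of the same batch.

    z1, z2: (N, d) projection-head outputs for view 1 and view 2.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, d)
    sim = z @ z.t() / temperature                             # (2N, 2N) similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                # drop self-similarity
    # Positive for sample i is its other view: i <-> i + n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


def consistency_loss(logits_weak, logits_strong, threshold=0.95):
    """Stage 2: pseudo-label confident weakly augmented views and train the
    strongly augmented views to match them (augmentation consistency)."""
    probs = torch.softmax(logits_weak.detach(), dim=1)
    confidence, pseudo_labels = probs.max(dim=1)
    mask = (confidence >= threshold).float()                  # keep confident samples only
    per_sample = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    return (mask * per_sample).mean()
```

In this sketch, stage 1 would train the encoder plus a projection head with `nt_xent_loss` on two augmented views of unlabeled images; stage 2 would then fine-tune the pre-trained encoder with a supervised cross-entropy term on the labeled examples plus `consistency_loss` (weighted by an unlabeled-loss coefficient) on the unlabeled ones.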


Datasets

CIFAR-10, SVHN
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Semi-Supervised Image Classification | CIFAR-10, 40 Labels | SelfMatch | Percentage error | 6.81±1.08 | #12 |
| Semi-Supervised Image Classification | CIFAR-10, 250 Labels | SelfMatch | Percentage error | 4.87±0.26 | #8 |
| Semi-Supervised Image Classification | CIFAR-10, 4000 Labels | SelfMatch | Percentage error | 4.06±0.08 | #5 |
