Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results

NeurIPS 2017 · Antti Tarvainen, Harri Valpola

The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%.

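The core of the method as described in the abstract is a two-part training signal: a supervised loss on the labeled examples, and a consistency cost that pushes the student model's predictions toward those of a teacher model whose weights are an exponential moving average (EMA) of the student's weights. The snippet below is a minimal sketch of one training step under a PyTorch-style setup; the function names, hyperparameter values (`ema_decay`, `consistency_weight`), and the choice of MSE as the consistency cost are illustrative assumptions rather than the authors' reference implementation, and the separate input noise applied to student and teacher in the paper is omitted for brevity.

```python
# Minimal Mean Teacher sketch (assumed PyTorch setup; names and defaults are illustrative).
import torch
import torch.nn.functional as F

def update_teacher(student, teacher, ema_decay=0.999):
    """Teacher weights are an exponential moving average of student weights."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(ema_decay).add_(s_param, alpha=1.0 - ema_decay)

def training_step(student, teacher, optimizer, x_labeled, y, x_unlabeled,
                  consistency_weight=1.0, ema_decay=0.999):
    # Supervised cross-entropy on the labeled batch.
    class_loss = F.cross_entropy(student(x_labeled), y)

    # Consistency cost: student predictions should match the teacher's
    # predictions on the unlabeled inputs (no gradient through the teacher).
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x_unlabeled), dim=1)
    student_probs = F.softmax(student(x_unlabeled), dim=1)
    consistency_loss = F.mse_loss(student_probs, teacher_probs)

    loss = class_loss + consistency_weight * consistency_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # After each optimizer step, update the teacher by EMA of the student weights.
    update_teacher(student, teacher, ema_decay)
    return loss.item()
```

In this setup the teacher would typically start as a copy of the student and is never updated by gradient descent, only by the EMA step; the paper also ramps up the consistency weight early in training, which the fixed value above leaves out.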
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Semi-Supervised Image Classification | CIFAR-10, 250 Labels | MeanTeacher | Percentage error | 47.32 | #21 |
| Semi-Supervised Image Classification | CIFAR-10, 4000 Labels | Mean Teacher | Percentage error | 6.28 | #27 |
| Semi-Supervised Image Classification | ImageNet - 10% labeled data | Mean Teacher (ResNeXt-152) | Top 5 Accuracy | 90.89% | #21 |
| Semi-Supervised Semantic Segmentation | nuScenes | MeanTeacher (Voxel) | mIoU (1% Labels) | 51.6 | #2 |
| Semi-Supervised Semantic Segmentation | nuScenes | MeanTeacher (Voxel) | mIoU (10% Labels) | 66.0 | #3 |
| Semi-Supervised Semantic Segmentation | nuScenes | MeanTeacher (Voxel) | mIoU (20% Labels) | 67.1 | #3 |
| Semi-Supervised Semantic Segmentation | nuScenes | MeanTeacher (Voxel) | mIoU (50% Labels) | 71.7 | #3 |
| Semi-Supervised Semantic Segmentation | nuScenes | MeanTeacher (Range View) | mIoU (1% Labels) | 42.1 | #6 |
| Semi-Supervised Semantic Segmentation | nuScenes | MeanTeacher (Range View) | mIoU (10% Labels) | 60.4 | #8 |
| Semi-Supervised Semantic Segmentation | nuScenes | MeanTeacher (Range View) | mIoU (20% Labels) | 65.4 | #5 |
| Semi-Supervised Semantic Segmentation | nuScenes | MeanTeacher (Range View) | mIoU (50% Labels) | 69.4 | #6 |
| Semi-Supervised Semantic Segmentation | ScribbleKITTI | MeanTeacher (Voxel) | mIoU (1% Labels) | 41.0 | #2 |
| Semi-Supervised Semantic Segmentation | ScribbleKITTI | MeanTeacher (Voxel) | mIoU (10% Labels) | 50.1 | #5 |
| Semi-Supervised Semantic Segmentation | ScribbleKITTI | MeanTeacher (Voxel) | mIoU (20% Labels) | 52.8 | #4 |
| Semi-Supervised Semantic Segmentation | ScribbleKITTI | MeanTeacher (Voxel) | mIoU (50% Labels) | 53.9 | #6 |
| Semi-Supervised Semantic Segmentation | ScribbleKITTI | MeanTeacher (Range View) | mIoU (1% Labels) | 34.2 | #7 |
| Semi-Supervised Semantic Segmentation | ScribbleKITTI | MeanTeacher (Range View) | mIoU (10% Labels) | 49.8 | #7 |
| Semi-Supervised Semantic Segmentation | ScribbleKITTI | MeanTeacher (Range View) | mIoU (20% Labels) | 51.6 | #8 |
| Semi-Supervised Semantic Segmentation | ScribbleKITTI | MeanTeacher (Range View) | mIoU (50% Labels) | 53.3 | #8 |
| Semi-Supervised Semantic Segmentation | SemanticKITTI | MeanTeacher (Range View) | mIoU (1% Labels) | 37.5 | #7 |
| Semi-Supervised Semantic Segmentation | SemanticKITTI | MeanTeacher (Range View) | mIoU (10% Labels) | 53.1 | #8 |
| Semi-Supervised Semantic Segmentation | SemanticKITTI | MeanTeacher (Range View) | mIoU (20% Labels) | 56.1 | #7 |
| Semi-Supervised Semantic Segmentation | SemanticKITTI | MeanTeacher (Range View) | mIoU (50% Labels) | 57.4 | #7 |
| Semi-Supervised Semantic Segmentation | SemanticKITTI | MeanTeacher (Voxel) | mIoU (1% Labels) | 45.4 | #4 |
| Semi-Supervised Semantic Segmentation | SemanticKITTI | MeanTeacher (Voxel) | mIoU (10% Labels) | 57.1 | #5 |
| Semi-Supervised Semantic Segmentation | SemanticKITTI | MeanTeacher (Voxel) | mIoU (20% Labels) | 59.2 | #4 |
| Semi-Supervised Semantic Segmentation | SemanticKITTI | MeanTeacher (Voxel) | mIoU (50% Labels) | 60.0 | #5 |
| Semi-Supervised Image Classification | SVHN, 1000 labels | Mean Teacher | Accuracy | 96.05 | #14 |

Results from Other Papers


| Task | Dataset | Model | Metric | Value | Rank |
|---|---|---|---|---|---|
| Semi-Supervised Image Classification | SVHN, 250 Labels | MeanTeacher | Accuracy | 93.55 | #11 |
