Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup

ICML 2020 · Jang-Hyun Kim, Wonho Choo, Hyun Oh Song

While deep neural networks achieve strong performance in fitting the training distribution, the learned networks are prone to overfitting and are susceptible to adversarial attacks. In this regard, a number of mixup-based augmentation methods have been proposed recently. However, these approaches mainly focus on creating previously unseen virtual examples and can sometimes provide misleading supervisory signals to the network. To this end, we propose Puzzle Mix, a mixup method that explicitly utilizes the saliency information and the underlying statistics of the natural examples. This leads to an interesting optimization problem that alternates between a multi-label objective for the optimal mixing mask and a saliency-discounted optimal transport objective. Our experiments show that Puzzle Mix achieves state-of-the-art generalization and adversarial robustness compared to other mixup methods on the CIFAR-100, Tiny-ImageNet, and ImageNet datasets. The source code is available at https://github.com/snu-mllab/PuzzleMix.
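For background, the vanilla mixup operation that Puzzle Mix builds on can be sketched as below. This is not the authors' method: Puzzle Mix replaces the single scalar mixing ratio with an optimized spatial mask guided by saliency and transport costs; the sketch only illustrates the baseline convex combination of inputs and the resulting soft multi-label target.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Vanilla mixup: convex combination of two examples and their labels.

    Puzzle Mix generalizes the scalar ratio `lam` to a per-location mask
    optimized against saliency; this sketch shows only the baseline step.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing ratio drawn from Beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2       # blended input
    y = lam * y1 + (1.0 - lam) * y2       # soft (multi-label) target
    return x, y, lam

# Mix two toy 4x4 "images" with one-hot labels.
a, b = np.ones((4, 4)), np.zeros((4, 4))
ya, yb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x, y, lam = mixup(a, ya, b, yb)
```

The soft target `y` is what the abstract's "multi-label objective" operates on: both source labels contribute, weighted by how much of each example survives the mix.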

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Semantic Segmentation | ACDC Scribbles | Puzzle Mix | Dice (Average) | 62.4% | #6 |
| Image Classification | CIFAR-100 | WRN28-10 | Percentage correct | 84.05 | #79 |
| Image Classification | ImageNet | ResNet-50 | Top 1 Accuracy | 78.76% | #744 |
| Image Classification | ImageNet | ResNet-50 | Hardware Burden | None | #1 |
| Image Classification | ImageNet | ResNet-50 | Operations per network pass | None | #1 |
| Image Classification | Tiny-ImageNet | PreActResNet18 | Top 1 Accuracy | 63.48 | #2 |
