In our implementation, we have designed a search space where a policy consists of many sub-policies, one of which is randomly chosen for each image in each mini-batch.
Ranked #2 on Image Classification on SVHN
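To make the sub-policy sampling described above concrete, here is a minimal sketch in the AutoAugment style: each sub-policy is a short sequence of (operation, probability, magnitude) triples, and one sub-policy is drawn at random per image. The operations and numbers below are illustrative placeholders, not policies actually found by the paper's search.

```python
import random
from PIL import Image, ImageOps

# Each sub-policy is a sequence of (operation, probability, magnitude)
# triples. These entries are placeholders, not a searched AutoAugment policy.
SUB_POLICIES = [
    [("rotate", 0.7, 15), ("autocontrast", 0.4, None)],
    [("shear_x", 0.5, 0.2), ("solarize", 0.6, 128)],
    [("equalize", 0.8, None), ("rotate", 0.3, -10)],
]

def apply_op(img, op, magnitude):
    if op == "rotate":
        return img.rotate(magnitude)
    if op == "shear_x":
        return img.transform(img.size, Image.AFFINE, (1, magnitude, 0, 0, 1, 0))
    if op == "solarize":
        return ImageOps.solarize(img, magnitude)
    if op == "autocontrast":
        return ImageOps.autocontrast(img)
    if op == "equalize":
        return ImageOps.equalize(img)
    raise ValueError(op)

def augment(img):
    # One sub-policy is randomly chosen for each image in each mini-batch.
    sub_policy = random.choice(SUB_POLICIES)
    for op, prob, magnitude in sub_policy:
        if random.random() < prob:
            img = apply_op(img, op, magnitude)
    return img
```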
On LibriSpeech, we achieve 6.8% WER on test-other without the use of a language model, and 5.8% WER with shallow fusion with a language model.
Ranked #2 on Speech Recognition on Hub5'00 SwitchBoard (SwitchBoard metric)
Contrastive self-supervised learning (CSL) is an approach to learning useful representations by solving a pretext task that selects and compares anchor, positive, and negative (APN) features from an unlabeled dataset.
Ranked #23 on Image Classification on STL-10
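A minimal sketch of the APN comparison described above, written as a generic InfoNCE-style loss in PyTorch; this is one common formulation of contrastive learning, not the exact loss of the paper behind this snippet.

```python
import torch
import torch.nn.functional as F

def apn_contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss over anchor/positive/negative (APN) features.

    anchor, positive: (batch, dim); negatives: (batch, n_neg, dim).
    A generic formulation, not taken from any specific paper.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Cosine similarity of each anchor with its positive: (batch, 1).
    pos_sim = (anchor * positive).sum(dim=-1, keepdim=True)
    # Similarity of each anchor with its own negatives: (batch, n_neg).
    neg_sim = torch.einsum("bd,bnd->bn", anchor, negatives)

    # Each row is a (1 + n_neg)-way classification with the positive at index 0.
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)
```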
We provide examples of image augmentations for different computer vision tasks and show that Albumentations is faster than other commonly used image augmentation tools on most of the commonly used image transformations.
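For context, a small pipeline built with the Albumentations API; the transform names follow the library's documented interface, but the specific parameters here are arbitrary choices for illustration.

```python
import albumentations as A
import numpy as np

# Compose a small augmentation pipeline; each transform fires with
# probability p.
transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.Rotate(limit=15, p=0.5),
    A.RandomBrightnessContrast(p=0.3),
])

# Stand-in image: Albumentations operates on numpy arrays (HxWxC, uint8).
image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
augmented = transform(image=image)["image"]
```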
Additionally, due to the separate search phase, these approaches are unable to adjust the regularization strength based on model or dataset size.
Ranked #1 on Image Classification on SVHN
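This limitation motivates collapsing the learned policy into a couple of directly tunable scalars, in the RandAugment style (which this snippet and ranking appear to belong to). A minimal sketch, assuming the usual two-parameter formulation of n operations applied at a shared magnitude m; the op set and magnitude mappings below are placeholders.

```python
import random
from PIL import ImageEnhance, ImageOps

# Illustrative op set; the real RandAugment uses a larger pool of ops.
# Magnitude m in [0, 10] is mapped to each op's own range (these mappings
# are placeholders, not the paper's).
def ops(m):
    frac = m / 10.0
    return [
        lambda img: img.rotate(30 * frac),
        lambda img: ImageOps.solarize(img, int(256 * (1 - frac))),
        lambda img: ImageEnhance.Contrast(img).enhance(1 + frac),
        lambda img: ImageEnhance.Sharpness(img).enhance(1 + frac),
    ]

def rand_augment(img, n=2, m=9):
    """Apply n randomly chosen ops, all at the shared magnitude m.

    With no separate search phase, n and m can be tuned directly to the
    model and dataset size, acting as a simple regularization knob.
    """
    for op in random.choices(ops(m), k=n):
        img = op(img)
    return img
```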
In this paper, we introduce Random Erasing, a new data augmentation method for training convolutional neural networks (CNNs).
Ranked #3 on Image Classification on Fashion-MNIST
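A simplified sketch of the Random Erasing idea: with some probability, replace a randomly sized and positioned rectangle of the image with random noise. The parameter defaults below follow common choices but are not taken verbatim from the paper.

```python
import random
import numpy as np

def random_erasing(img, p=0.5, area_range=(0.02, 0.4), aspect_range=(0.3, 3.3)):
    """Erase a random rectangle in an HxW(xC) uint8 image with random noise."""
    if random.random() > p:
        return img
    h, w = img.shape[:2]
    for _ in range(10):  # retry until a sampled rectangle fits in the image
        area = random.uniform(*area_range) * h * w
        aspect = random.uniform(*aspect_range)  # height/width ratio
        eh = int(round((area * aspect) ** 0.5))
        ew = int(round((area / aspect) ** 0.5))
        if 0 < eh < h and 0 < ew < w:
            y = random.randint(0, h - eh)
            x = random.randint(0, w - ew)
            img = img.copy()
            img[y:y + eh, x:x + ew] = np.random.randint(
                0, 256, (eh, ew) + img.shape[2:], dtype=img.dtype)
            return img
    return img  # no valid rectangle found; return the image unchanged
```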
For example, on the COCO object detection dataset, pre-training helps when we use one fifth of the labeled data, and hurts accuracy when we use all of the labeled data.
Ranked #1 on Semantic Segmentation on PASCAL VOC 2012 test (using extra training data)
During the learning of the student, we inject noise such as dropout, stochastic depth, and data augmentation via RandAugment into the student so that the student generalizes better than the teacher.
Ranked #3 on Image Classification on ImageNet (using extra training data)
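A schematic sketch of the noised-student step described above: the teacher pseudo-labels clean unlabeled inputs, while the student trains on noised inputs with dropout and stochastic depth left active. This assumes generic PyTorch modules and a caller-supplied `augment` function (e.g. RandAugment-style input noise), not the paper's exact EfficientNet training setup.

```python
import torch
import torch.nn.functional as F

def noisy_student_step(teacher, student, optimizer, unlabeled, augment):
    """One self-training step on a batch of unlabeled inputs."""
    teacher.eval()    # the teacher runs clean and noise-free
    student.train()   # keeps dropout / stochastic depth active in the student

    with torch.no_grad():
        pseudo = teacher(unlabeled).softmax(dim=-1)   # soft pseudo-labels

    noised = augment(unlabeled)                       # input noise for the student
    log_probs = F.log_softmax(student(noised), dim=-1)

    # Cross-entropy between the soft teacher targets and the student output.
    loss = -(pseudo * log_probs).sum(dim=-1).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```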