Dynamic Test-Time Augmentation via Differentiable Functions

9 Dec 2022  ·  Shohei Enomoto, Monikka Roslianna Busto, Takeharu Eda

Distribution shifts, which often occur in the real world, degrade the accuracy of deep learning systems, so improving robustness is essential for practical applications. To improve robustness, we study an image enhancement method that generates recognition-friendly images without retraining the recognition model. We propose a novel image enhancement method, DynTTA, which is based on differentiable data augmentation techniques and generates a blended image from many augmented images to improve recognition accuracy under distribution shifts. In addition to standard data augmentations, DynTTA incorporates a deep neural network-based image transformation, which further improves robustness. Because DynTTA is composed of differentiable functions, it is trained directly with the classification loss of the recognition model. We experiment on widely used image recognition datasets with various classification models, including Vision Transformer and MLP-Mixer. DynTTA improves robustness with almost no reduction in classification accuracy on clean images, outperforming existing methods. Furthermore, we show that using DynTTA to estimate the training-time augmentation for a distribution-shifted dataset, and then retraining the recognition model with the estimated augmentation, significantly improves robustness.
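To make the idea concrete, the sketch below illustrates a DynTTA-style module in PyTorch: a small trainable controller predicts blend weights and magnitudes for a few differentiable augmentations, blends the augmented images into one, and is trained with the frozen recognition model's classification loss. This is a minimal sketch under stated assumptions: the class and function names are hypothetical, only three simple augmentations are shown (the paper uses a much larger set plus a deep network-based transformation), and details of the controller differ from the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms.functional as TF


class DynTTASketch(nn.Module):
    """Hypothetical minimal sketch of a DynTTA-style blending module."""

    def __init__(self, num_ops: int = 3):
        super().__init__()
        self.num_ops = num_ops
        # Lightweight controller: global average pooling over the RGB image
        # followed by a linear head that outputs blend weights and magnitudes.
        self.controller = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(3, 2 * num_ops),
        )

    def augment(self, x, magnitudes):
        # Three differentiable augmentations; magnitudes are in [-1, 1].
        brightness = x * (1.0 + magnitudes[:, 0].view(-1, 1, 1, 1))
        mean = x.mean(dim=(2, 3), keepdim=True)
        contrast = (x - mean) * (1.0 + magnitudes[:, 1].view(-1, 1, 1, 1)) + mean
        blurred = TF.gaussian_blur(x, kernel_size=3)
        sharpness = x + magnitudes[:, 2].view(-1, 1, 1, 1) * (x - blurred)
        # Shape: (batch, num_ops, channels, height, width)
        return torch.stack([brightness, contrast, sharpness], dim=1)

    def forward(self, x):
        params = self.controller(x)
        weights = F.softmax(params[:, : self.num_ops], dim=1)  # blend weights
        magnitudes = torch.tanh(params[:, self.num_ops :])     # augmentation magnitudes
        augmented = self.augment(x, magnitudes)
        blended = (weights.view(-1, self.num_ops, 1, 1, 1) * augmented).sum(dim=1)
        # Assumes input images are scaled to [0, 1].
        return blended.clamp(0.0, 1.0)


def train_step(dyntta, classifier, images, labels, optimizer):
    """One training step: the recognition model stays frozen; only the
    DynTTA controller is updated with the classifier's cross-entropy loss."""
    classifier.eval()
    for p in classifier.parameters():
        p.requires_grad_(False)
    logits = classifier(dyntta(images))
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because every step from the augmentations to the blending is differentiable, gradients from the classification loss flow back into the controller, so the module learns which enhancements help the downstream recognizer without touching its weights.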
