
Variational Resampling Based Assessment of Deep Neural Networks under Distribution Shift

A novel variational-inference-based resampling framework is proposed to evaluate the robustness and generalization capability of deep learning models with respect to distribution shift. We use Auto-Encoding Variational Bayes to find a latent representation of the data, on which a Variational Gaussian Mixture Model is applied to deliberately create distribution shift by dividing the dataset into different clusters. The Wasserstein distance is used to characterize the extent of distribution shift between the generated data splits. We compare several popular Convolutional Neural Network (CNN) architectures and Bayesian CNN models for image classification on the Fashion-MNIST dataset, assessing their robustness and generalization behavior under the deliberately created distribution shift as well as under random cross-validation. Our method of creating artificial domain splits of a single dataset can also be used to establish novel model selection criteria and assessment tools in machine learning, as well as to benchmark domain adaptation and domain generalization approaches.
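The following is a minimal sketch of the pipeline described above, not the authors' implementation: it clusters latent codes with a variational (Bayesian) Gaussian mixture and quantifies the shift between the resulting splits with a simple per-dimension Wasserstein distance. The latent codes are mocked with bimodal random data; in the actual method they would come from the encoder of an Auto-Encoding Variational Bayes model trained on Fashion-MNIST, and the exact distance computation may differ.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Placeholder for VAE latent codes (assumption: 16-D latents, two modes so that
# the mixture finds at least two populated clusters).
latents = np.vstack([
    rng.normal(loc=-2.0, size=(2500, 16)),
    rng.normal(loc=+2.0, size=(2500, 16)),
])

# Variational Gaussian mixture: the Dirichlet-process prior prunes unused components.
vgmm = BayesianGaussianMixture(
    n_components=5,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
)
cluster_ids = vgmm.fit_predict(latents)

# Take the two largest clusters as artificial "domain" splits.
sizes = np.bincount(cluster_ids, minlength=vgmm.n_components)
a, b = np.argsort(sizes)[-2:]
split_a, split_b = latents[cluster_ids == a], latents[cluster_ids == b]

# Characterize the shift between the splits with the mean per-dimension
# 1-D Wasserstein distance (a simple proxy measure).
shift = np.mean([
    wasserstein_distance(split_a[:, d], split_b[:, d])
    for d in range(latents.shape[1])
])
print(f"clusters used as splits: {a}, {b} | estimated shift: {shift:.4f}")
```

In the evaluation described by the abstract, the models (CNN and Bayesian CNN classifiers) would then be trained on one split and tested on the other, and their performance compared against random cross-validation folds of the same data.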
