Perturbation theory approach to study the latent space degeneracy of Variational Autoencoders

10 Jul 2019 · Helena Andrés-Terré, Pietro Lió

The use of Variational Autoencoders in different Machine Learning tasks has increased drastically in recent years. They have been developed as denoising, clustering and generative tools, and show great potential across a wide range of fields. Their embeddings extract relevant information from high-dimensional inputs, but converged models can differ significantly and lead to degeneracy in the latent space. We leverage the relationship between theoretical physics and machine learning to explain this behaviour, and introduce a new approach to correct for degeneracy using perturbation theory. Reformulating the embedding as a multi-dimensional generative distribution allows us to map it to a new set of functions and their corresponding energy spectrum. We optimise for a perturbed Hamiltonian, with an additional energy potential that is related to the unobserved topology of the data. Our results show the potential of a new theoretical approach for interpreting the latent space and generative nature of unsupervised learning, while the energy landscapes defined by the perturbations can further be used for modelling and dynamical purposes.
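To make the idea of a perturbed objective concrete, the sketch below shows a standard VAE ELBO loss augmented with an extra latent-space potential term. This is a minimal illustration, not the authors' implementation: the model sizes, the weight `lam`, and the quadratic `perturbation_potential` are hypothetical placeholders, whereas the paper derives its potential from the unobserved topology of the data.

```python
# Minimal sketch (assumed, illustrative only): a VAE whose training loss adds a
# perturbation potential V(z) on the latent codes, mimicking the idea of
# optimising a "perturbed Hamiltonian" on the latent space.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar, z

def perturbation_potential(z):
    # Hypothetical quadratic potential V(z); the paper's potential is tied to
    # the data topology and is not reproduced here.
    return 0.5 * (z ** 2).sum(dim=1).mean()

def loss_fn(x, x_hat, mu, logvar, z, lam=0.1):
    # Standard ELBO terms (reconstruction + KL) plus the perturbation term.
    recon = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum") / x.size(0)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return recon + kl + lam * perturbation_potential(z)
```

In this reading, the usual ELBO plays the role of the unperturbed Hamiltonian and the additional potential acts as the perturbation that lifts the degeneracy between otherwise equivalent latent configurations.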
