no code implementations • 7 Mar 2020 • Vincenzo Crescimanna, Bruce Graham
The InfoMax representation of the two objectives is relevant not only in itself, since it helps to understand the role of network capacity, but also because it allows us to derive a variational objective, the Variational InfoMax (VIM), that maximises them directly without resorting to any lower bound.
no code implementations • 25 May 2019 • Vincenzo Crescimanna, Bruce Graham
The Variational AutoEncoder (VAE) learns an inference model and a generative model simultaneously, but only one of these models can be learned at the optimum. This behaviour is associated with the ELBO learning objective, which is optimised by a non-informative generator.
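To make the claim about the ELBO concrete, here is a minimal sketch (not the paper's implementation) of the ELBO decomposition for a Gaussian VAE, ELBO = E_q[log p(x|z)] − KL(q(z|x) ‖ p(z)). It illustrates the failure mode the abstract alludes to: if the encoder collapses onto the prior, the KL term vanishes, so a generator that ignores the latent code pays no penalty beyond its fixed reconstruction term.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def elbo(recon_loglik, mu, logvar):
    """ELBO = reconstruction log-likelihood minus KL to the prior."""
    return recon_loglik - gaussian_kl(mu, logvar)

# An informative posterior pays a positive KL cost:
mu, logvar = np.ones(4), np.zeros(4)
print(gaussian_kl(mu, logvar))  # 2.0

# "Posterior collapse": encoder outputs the prior (mu=0, logvar=0),
# the KL term is exactly zero, and a decoder that ignores z is not penalised.
print(gaussian_kl(np.zeros(4), np.zeros(4)))  # 0.0
```

The variable names and the NumPy formulation are illustrative assumptions; the paper's own analysis is stated at the level of the objective, not of any particular implementation.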
no code implementations • 23 Jan 2019 • Vincenzo Crescimanna, Bruce Graham
IMAE is compared both theoretically and computationally with state-of-the-art models: the Denoising and Contractive Autoencoders in the one-hidden-layer setting, and the Variational Autoencoder in the multi-layer case.