10 Jun 2020 • Chris Finlay, Augusto Gerolin, Adam M. Oberman, Aram-Alexandre Pooladian
We approach the problem of learning continuous normalizing flows from a dual perspective motivated by entropy-regularized optimal transport, in which the flows are cast as gradients of scalar potential functions.
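The idea of a flow given by the gradient of a scalar potential can be illustrated with a minimal sketch. Everything here is hypothetical: the quadratic potential `phi`, the target `mu`, and the Euler integrator are illustration choices, not the paper's method or training procedure.

```python
import numpy as np

# Hypothetical target point; the potential below is chosen by hand.
mu = np.array([2.0, -1.0])

def grad_phi(x):
    # phi(x) = 0.5 * ||x - mu||^2  =>  grad phi(x) = x - mu
    return x - mu

def flow(x0, dt=0.1, steps=50):
    # Euler integration of dx/dt = -grad phi(x): the velocity field is the
    # (negative) gradient of a scalar potential, so particles move toward mu.
    x = x0.copy()
    for _ in range(steps):
        x = x - dt * grad_phi(x)
    return x
```

In a learned flow, the scalar potential would be a neural network and its gradient obtained by automatic differentiation; this sketch only shows the structural point that a single scalar function determines the whole velocity field.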
5 Jun 2020 • Anton Mallasto, Augusto Gerolin, Hà Quang Minh
As the geometry changes with the regularization magnitude, we study the limiting cases of vanishing and infinite magnitudes, reconfirming well-known results on the limits of the Sinkhorn divergence.
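The two limits can be seen numerically on a toy discrete problem. The sketch below is an assumption-laden illustration (a two-point example with hand-chosen cost matrix, plain Sinkhorn iterations without log-domain stabilization): with small regularization the entropic transport cost approaches the exact optimal transport cost, while with large regularization the optimal coupling approaches the independent product coupling.

```python
import numpy as np

def entropic_ot_cost(C, a, b, eps, n_iter=500):
    # Sinkhorn fixed-point iterations for the entropy-regularized OT plan.
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # optimal coupling
    return np.sum(P * C)             # transport cost under P

# Two points at 0 and 1 on the line, uniform weights, squared-distance cost.
a = np.array([0.5, 0.5])
b = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])

cost_small = entropic_ot_cost(C, a, b, eps=0.05)   # near exact OT cost 0
cost_large = entropic_ot_cost(C, a, b, eps=100.0)  # near independent-coupling cost 0.5
```

Here the exact OT cost is 0 (each point maps to itself) and the cost under the independent coupling a ⊗ b is 0.5, so the two evaluations bracket the regularization path. The Sinkhorn *divergence* studied in the literature additionally subtracts the self-transport terms; this sketch only shows the raw entropic cost.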
9 Oct 2019 • Anton Mallasto, Guido Montúfar, Augusto Gerolin
Generative modelling is often cast as minimizing a similarity measure between a data distribution and a model distribution.