Discretized Gradient Flow for Manifold Learning in the Space of Embeddings

25 Jan 2019  ·  Dara Gold, Steven Rosenberg

Gradient descent, or negative gradient flow, is a standard technique in optimization for finding minima of functions. Many implementations of gradient descent rely on a discretized version, i.e., moving in the negative gradient direction for a fixed step size, recomputing the gradient, and continuing. In this paper, we present an approach to manifold learning where gradient descent takes place in the infinite dimensional space $\mathcal{E} = {\rm Emb}(M,\mathbb{R}^N)$ of smooth embeddings $\phi$ of a manifold $M$ into $\mathbb{R}^N$. Implementing a discretized version of gradient descent for $P:\mathcal{E}\to {\mathbb R}$, a penalty function that scores an embedding $\phi \in \mathcal{E}$, requires estimating how far we can move in a fixed direction -- the direction of one gradient step -- before leaving the space of smooth embeddings. Our main result is an explicit lower bound for this step length in terms of the Riemannian geometry of $\phi(M)$. In particular, we consider the case in which the gradient of $P$ is pointwise normal to the embedded manifold $\phi(M)$. We prove that this case arises when $P$ is invariant under diffeomorphisms of $M$, a natural condition in manifold learning.
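The sketch below illustrates the discretized gradient flow described above, not the authors' implementation: the manifold $M$ is approximated by $n$ sample points, so an embedding $\phi$ is stored as an $(n, N)$ array of their images in $\mathbb{R}^N$, and the loop moves against the gradient of a penalty for a fixed step size before recomputing. The penalty `penalty` used here (squared deviation of each point's norm from 1) is a hypothetical stand-in; the paper's penalty $P$ and its geometric lower bound on the admissible step size are not reproduced.

```python
# Minimal sketch of fixed-step discretized gradient flow on a sampled embedding.
# Assumptions: M is represented by n sample points, phi by an (n, N) array,
# and the penalty below is a hypothetical example, not the paper's P.
import numpy as np

def penalty(phi):
    # Hypothetical penalty: squared deviation of each point's norm from 1.
    norms = np.linalg.norm(phi, axis=1)
    return np.sum((norms - 1.0) ** 2)

def penalty_grad(phi):
    # Analytic gradient of the hypothetical penalty above.
    norms = np.linalg.norm(phi, axis=1, keepdims=True)
    return 2.0 * (norms - 1.0) * phi / np.maximum(norms, 1e-12)

def gradient_descent(phi0, step=0.1, n_steps=100):
    # Move in the negative gradient direction for a fixed step size,
    # recompute the gradient, and continue.  The paper's contribution is a
    # lower bound on how large `step` may be while phi stays an embedding.
    phi = phi0.copy()
    for _ in range(n_steps):
        phi -= step * penalty_grad(phi)
    return phi

# Usage: 50 points sampled from a circle of radius 2 embedded in R^3, then flowed.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
phi0 = np.stack([2 * np.cos(theta), 2 * np.sin(theta), np.zeros_like(theta)], axis=1)
phi_final = gradient_descent(phi0)
print(penalty(phi0), penalty(phi_final))
```

For this toy penalty the gradient at each sample point is radial, hence pointwise normal to the embedded circle, loosely mirroring the normal-gradient case the abstract highlights.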


Categories

Differential Geometry