Intrinsic Universal Measurements of Non-linear Embeddings

5 Nov 2018 · Ke Sun

A basic problem in machine learning is to find a mapping $f$ from a low-dimensional latent space $\mathcal{Y}$ to a high-dimensional observation space $\mathcal{X}$. Modern tools such as deep neural networks are capable of representing general non-linear mappings, so a learner can easily find a mapping that perfectly fits all the observations. However, such a mapping is often not considered good, because it is not simple enough and can overfit. How should simplicity be defined? We propose a formal definition of the amount of information imposed by a non-linear mapping $f$. Intuitively, we measure the local discrepancy between the pullback geometry and the intrinsic geometry of the latent space. Our definition is based on information geometry and is independent of both the empirical observations and specific parameterizations. We prove its basic properties and discuss relationships with related machine learning methods.
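The notion of comparing the pullback geometry with the intrinsic geometry can be made concrete with a short sketch. The following JAX snippet assumes a toy MLP for $f$, a Euclidean (identity) intrinsic metric on $\mathcal{Y}$, and a Stein-style log-det divergence between positive-definite matrices as a stand-in discrepancy; the paper's actual information-geometric measure differs, so this is only intuition for "pullback metric vs. intrinsic metric", not the paper's method.

    import jax
    import jax.numpy as jnp

    # Toy non-linear mapping f: R^2 (latent Y) -> R^5 (observation X).
    # The MLP weights are illustrative placeholders, not from the paper.
    k1, k2 = jax.random.split(jax.random.PRNGKey(0))
    W1 = jax.random.normal(k1, (8, 2))
    W2 = jax.random.normal(k2, (5, 8))

    def f(y):
        return W2 @ jnp.tanh(W1 @ y)

    def pullback_metric(y):
        # Pullback of the Euclidean metric on X: G(y) = J(y)^T J(y),
        # where J is the 5x2 Jacobian of f at y.
        J = jax.jacfwd(f)(y)
        return J.T @ J

    def local_discrepancy(y):
        # Stein-style divergence D(G, I) = tr(G) - log det(G) - d between
        # the pullback metric G(y) and the identity (intrinsic) metric.
        # It is >= 0 and vanishes iff G(y) = I, i.e. f is locally an
        # isometry at y. (An assumed stand-in, not the paper's measure.)
        G = pullback_metric(y)
        d = G.shape[0]
        return jnp.trace(G) - jnp.linalg.slogdet(G)[1] - d

    print(local_discrepancy(jnp.array([0.3, -0.7])))

Aggregating such a local quantity over $\mathcal{Y}$ would score how far $f$ is from a local isometry everywhere, which mirrors, in spirit, the abstract's notion of the information imposed by $f$.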
