no code implementations • 1 Jan 2021 • Xu Liao, Jin Liu, Tianwen Wen, Yuling Jiao, Jian Huang
At the population level, we formulate the ideal representation learning task as that of finding a nonlinear map that minimizes the sum of two losses: one characterizing conditional independence (measured via an RKHS-based criterion) and one characterizing disentanglement (imposed via a GAN).
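As an illustration of the RKHS machinery behind such an independence criterion, the sketch below computes the (biased) empirical HSIC, a standard kernel dependence measure; this is an assumed stand-in for exposition, not the paper's exact conditional-independence loss, and all function names here are hypothetical.

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    # RBF kernel Gram matrix from pairwise squared distances.
    sq = np.sum(x**2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2.0 * x @ x.T
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(x, y, sigma=1.0):
    # Biased empirical HSIC: trace(K H L H) / (n-1)^2, an RKHS-based
    # dependence measure (zero in expectation under independence).
    n = x.shape[0]
    K, L = rbf_gram(x, sigma), rbf_gram(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))
z = rng.normal(size=(50, 2))  # independent of x
# Self-dependence scores much higher than dependence on independent noise.
print(hsic(x, x), hsic(x, z))
```

In a representation-learning objective of the kind described above, a term like this would be combined with a distribution-matching (GAN) penalty on the learned representation.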
Ranked #4 on Image Classification on Kuzushiji-MNIST
no code implementations • 1 Jan 2021 • Jian Huang, Yuling Jiao, Xu Liao, Jin Liu, Zhou Yu
We provide strong statistical guarantees for the learned representation by establishing an upper bound on the excess error of the objective function and showing that this bound attains the nonparametric minimax rate under mild conditions.
1 code implementation • 10 Jun 2020 • Jian Huang, Yuling Jiao, Xu Liao, Jin Liu, Zhou Yu
We propose a deep dimension reduction approach to learning representations with these characteristics.
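To make the notion of a deep dimension-reducing map concrete, here is a minimal sketch of a randomly initialized two-layer MLP encoder mapping high-dimensional inputs to a low-dimensional representation; in the approach described above such a map would be trained against the conditional-independence and disentanglement losses, whereas this untrained sketch (with hypothetical names) only illustrates the shape of the map.

```python
import numpy as np

def make_encoder(rng, in_dim, hidden, out_dim):
    # Two-layer MLP R: R^in_dim -> R^out_dim as a nonlinear
    # dimension-reducing map (weights random, i.e. untrained).
    W1 = rng.normal(size=(in_dim, hidden)) / np.sqrt(in_dim)
    W2 = rng.normal(size=(hidden, out_dim)) / np.sqrt(hidden)
    def encode(x):
        return np.tanh(x @ W1) @ W2
    return encode

rng = np.random.default_rng(0)
encode = make_encoder(rng, in_dim=64, hidden=32, out_dim=4)
x = rng.normal(size=(100, 64))
r = encode(x)          # representation with reduced dimension
print(r.shape)         # (100, 4)
```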