Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

ICML 2019 · Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem

The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms. In this paper, we provide a sober look at recent progress in the field and challenge some common assumptions...
