Identifying Coarse-grained Independent Causal Mechanisms with Self-supervision

Current approaches for learning disentangled representations assume that independent latent variables generate the data through a single data generation process. In contrast, this manuscript considers independent causal mechanisms (ICM), which, unlike disentangled representations, directly model multiple data generation processes at a coarse granularity. In this work, we aim to learn a model that isolates each mechanism and approximates the ground-truth ICM from observational data. We outline sufficient conditions under which the ICM can be learned and isolated using a single self-supervised generative model with a mixture prior, simplifying previous methods. Moreover, we implement a generative model with an identifiable structural latent space by combining the ICM with a shared latent space. We compare this ICM approach to disentangled representations on various downstream tasks, showing that the ICM is more robust to interventions, covariate shift, and noise due to the isolation between the data generation processes.
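The abstract's central ingredient is a mixture prior over the latent space, where each mixture component can be read as one coarse-grained mechanism. As a minimal sketch of that idea (not the paper's implementation; the function names, diagonal-Gaussian components, and fixed mixing weights are illustrative assumptions), the snippet below computes the soft assignment of a latent code to mixture components, i.e. which mechanism most plausibly generated it:

```python
import numpy as np

def component_log_densities(z, means, log_stds):
    """Log-density of latent code z under each diagonal-Gaussian component.

    z: (d,) latent code; means, log_stds: (K, d) per-component parameters.
    Returns an array of shape (K,).
    """
    stds = np.exp(log_stds)
    diff = (z[None, :] - means) / stds
    return -0.5 * np.sum(diff**2 + 2 * log_stds + np.log(2 * np.pi), axis=1)

def mechanism_responsibilities(z, means, log_stds, weights):
    """Posterior probability that each component ("mechanism") generated z.

    weights: (K,) mixing proportions of the prior. Computed with a
    numerically stable softmax over component log-densities.
    """
    logits = component_log_densities(z, means, log_stds) + np.log(weights)
    logits -= logits.max()          # stabilize before exponentiating
    p = np.exp(logits)
    return p / p.sum()
```

With well-separated components, these responsibilities become nearly one-hot, which is the sense in which a mixture prior can isolate generation processes from one another.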

PDF Abstract, 1st Conference 2022
