no code implementations • 10 Jan 2024 • Sébastien Lachapelle, Pau Rodríguez López, Yash Sharma, Katie Everett, Rémi Le Priol, Alexandre Lacoste, Simon Lacoste-Julien
We develop a nonparametric identifiability theory that formalizes the mechanism-sparsity principle and shows that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
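Schematically, this kind of graph-sparsity regularization can be read as a penalized likelihood objective; the notation below is an illustrative summary, not the paper's exact formulation (here \(\hat{G}\) stands for the learned adjacency over latent dependencies and \(\lambda\) for the regularization strength):

```latex
\max_{\theta,\, \hat{G}} \;\; \mathbb{E}\!\left[\log p_{\theta}\!\left(x_{1:T} \mid \hat{G}\right)\right] \;-\; \lambda\, \bigl\|\hat{G}\bigr\|_{0}
```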
1 code implementation • 21 Jul 2021 • Sébastien Lachapelle, Pau Rodríguez López, Yash Sharma, Katie Everett, Rémi Le Priol, Alexandre Lacoste, Simon Lacoste-Julien
This work introduces a novel principle we call disentanglement via mechanism sparsity regularization, which can be applied when the latent factors of interest depend sparsely on past latent factors and/or observed auxiliary variables.
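As a rough illustration of that dependency structure, the sketch below (assumed names and architecture, not the authors' implementation) models each latent at time t as a function of past latents and observed auxiliary variables, gated by a learnable soft adjacency matrix whose L1 penalty pushes the mechanism toward sparsity:

```python
# Illustrative sketch only: a latent transition mechanism with a learnable
# soft adjacency over parents [z_{t-1}; a_t] and an L1-style sparsity penalty.
import torch
import torch.nn as nn

class LatentTransition(nn.Module):
    def __init__(self, z_dim: int, a_dim: int, hidden: int = 64):
        super().__init__()
        # One logit per (child latent, parent) edge; parents are [z_{t-1}; a_t].
        self.edge_logits = nn.Parameter(torch.zeros(z_dim, z_dim + a_dim))
        self.nets = nn.ModuleList([
            nn.Sequential(nn.Linear(z_dim + a_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(z_dim)
        ])

    def forward(self, z_prev: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        parents = torch.cat([z_prev, a], dim=-1)      # (batch, z_dim + a_dim)
        mask = torch.sigmoid(self.edge_logits)        # soft adjacency in [0, 1]
        # Each latent i only sees its (softly) selected parents.
        preds = [net(parents * mask[i]) for i, net in enumerate(self.nets)]
        return torch.cat(preds, dim=-1)               # predicted z_t, (batch, z_dim)

    def sparsity_penalty(self) -> torch.Tensor:
        # Entries are in [0, 1], so the sum acts as an L1 penalty on the graph.
        return torch.sigmoid(self.edge_logits).sum()
```

In training one would add something like `sparsity_weight * model.sparsity_penalty()` to the main likelihood or reconstruction loss; the sigmoid gating and the penalty weight are illustrative choices, not the paper's prescription.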
1 code implementation • 30 Jul 2019 • Pau Rodríguez López, Diego Velazquez Dorta, Guillem Cucurull Preixens, Josep M. Gonfaus, F. Xavier Roca Marva, Jordi Gonzàlez Sabaté
Attention has typically been implemented in neural networks by selecting the most informative regions of the image, i.e., those that contribute most to classification.
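For readers unfamiliar with that typical formulation, the minimal spatial-attention sketch below (illustrative only, not the module proposed in this work) scores each location of a convolutional feature map, normalizes the scores with a softmax, and pools the reweighted features so that the most informative regions dominate the descriptor:

```python
# Illustrative spatial attention pooling over a convolutional feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttentionPool(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # one score per location

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, channels, H, W)
        b, c, h, w = feats.shape
        attn = F.softmax(self.score(feats).view(b, 1, h * w), dim=-1)  # (b, 1, H*W)
        weighted = feats.view(b, c, h * w) * attn                      # reweight locations
        return weighted.sum(dim=-1)                                    # (b, c) pooled descriptor
```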