no code implementations • 29 Mar 2023 • Aapo Hyvarinen, Ilyes Khemakhem, Hiroshi Morioka
A central problem in unsupervised deep learning is how to find useful representations of high-dimensional data, sometimes called "disentanglement".
1 code implementation • 23 Jan 2023 • Omar Chehab, Alexandre Gramfort, Aapo Hyvarinen
Nevertheless, we soberly conclude that the optimal noise distribution may be hard to sample from, and the gain in efficiency can be modest compared to simply setting the noise distribution equal to the data distribution.
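For context, noise-contrastive estimation (NCE), the setting this conclusion concerns, fits an unnormalized model by logistic discrimination between data samples and noise samples. A minimal sketch on a toy 1-D Gaussian (all numbers and the gradient-descent setup are illustrative, not taken from the paper):

```python
import numpy as np

# NCE: fit an unnormalized model
#   log p_theta(x) = -0.5 * (x - mu)**2 + c   (c absorbs the normalizer)
# by logistic regression of data samples against noise samples.
rng = np.random.default_rng(0)
x_data = rng.normal(2.0, 1.0, 5000)      # data: true mean 2, variance 1
x_noise = rng.normal(0.0, 3.0, 5000)     # noise must cover the data support

def log_pn(x):                           # noise log-density, N(0, 3^2)
    return -0.5 * (x / 3.0) ** 2 - np.log(3.0 * np.sqrt(2 * np.pi))

mu, c = 0.0, 0.0
lr = 0.05
for _ in range(5000):
    # log-ratio G(x) = log p_theta(x) - log p_n(x)
    g_d = -0.5 * (x_data - mu) ** 2 + c - log_pn(x_data)
    g_n = -0.5 * (x_noise - mu) ** 2 + c - log_pn(x_noise)
    s_d = 1.0 / (1.0 + np.exp(-g_d))     # P(label = data | x) on data samples
    s_n = 1.0 / (1.0 + np.exp(-g_n))     # P(label = data | x) on noise samples
    # full-batch gradients of the logistic loss w.r.t. (mu, c)
    grad_mu = -np.mean((1 - s_d) * (x_data - mu)) + np.mean(s_n * (x_noise - mu))
    grad_c = -np.mean(1 - s_d) + np.mean(s_n)
    mu -= lr * grad_mu
    c -= lr * grad_c

print(mu, c)  # mu near 2.0; c near -log(sqrt(2*pi)) since the true model is normalized
```

The noise distribution here is an arbitrary wide Gaussian; the paper's point is precisely that this choice matters for statistical efficiency.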
1 code implementation • 2 Mar 2022 • Omar Chehab, Alexandre Gramfort, Aapo Hyvarinen
Learning a parametric model of a data distribution is a well-known statistical problem that has seen renewed interest as it is brought to scale in deep learning.
1 code implementation • NeurIPS 2021 • Hermanni Hälvä, Sylvain Le Corff, Luc Lehéricy, Jonathan So, Yongjie Zhu, Elisabeth Gassiat, Aapo Hyvarinen
We introduce a new general identifiable framework for principled disentanglement referred to as Structured Nonlinear Independent Component Analysis (SNICA).
2 code implementations • 18 Jul 2020 • Ricardo Pio Monti, Ilyes Khemakhem, Aapo Hyvarinen
We posit that autoregressive flow models are well-suited to a range of causal inference tasks, from causal discovery to making interventional and counterfactual predictions.
no code implementations • 15 May 2019 • Takeru Matsuda, Masatoshi Uehara, Aapo Hyvarinen
However, model selection methods for general non-normalized models have not yet been proposed.
no code implementations • 19 Apr 2019 • Ricardo Pio Monti, Kun Zhang, Aapo Hyvarinen
We consider the problem of inferring causal relationships between two or more passively observed variables.
no code implementations • 6 Mar 2019 • Saeed Saremi, Aapo Hyvarinen
Kernel density is viewed symbolically as $X\rightharpoonup Y$ where the random variable $X$ is smoothed to $Y= X+N(0,\sigma^2 I_d)$, and empirical Bayes is the machinery to denoise in a least-squares sense, which we express as $X \leftharpoondown Y$.
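The least-squares denoiser $X \leftharpoondown Y$ is the posterior mean $E[X \mid Y=y]$, which empirical Bayes expresses through the smoothed density alone via Tweedie's formula, $\hat{x}(y) = y + \sigma^2 \nabla_y \log p(y)$. A minimal numerical check under an illustrative Gaussian prior, where the posterior mean is also known in closed form:

```python
import numpy as np

# X ~ N(0, tau^2), Y = X + N(0, sigma^2): the smoothed density is
# p(y) = N(0, tau^2 + sigma^2), so grad log p(y) = -y / (tau^2 + sigma^2).
tau, sigma = 2.0, 1.0

def denoise_tweedie(y):
    score = -y / (tau**2 + sigma**2)         # score of the smoothed density
    return y + sigma**2 * score              # empirical Bayes (Tweedie) estimate

def posterior_mean(y):
    return y * tau**2 / (tau**2 + sigma**2)  # classical Gaussian shrinkage

y = np.linspace(-5, 5, 11)
print(np.max(np.abs(denoise_tweedie(y) - posterior_mean(y))))  # ~0: the two agree
```

In the paper's setting the score of the smoothed density is learned rather than available analytically; the Gaussian prior here only serves to verify the identity.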
1 code implementation • 22 May 2018 • Aapo Hyvarinen, Hiroaki Sasaki, Richard E. Turner
Here, we propose a general framework for nonlinear ICA, which, as a special case, can make use of temporal structure.
no code implementations • 19 May 2018 • Takeru Matsuda, Aapo Hyvarinen
Then, based on the observation that conventional classification learning with neural networks implicitly assumes an exponential family as the generative model, we introduce a method for clustering unlabeled data by estimating a finite mixture of exponential-family distributions.
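As a classical point of reference (not the paper's neural method), "clustering by estimating a finite mixture in an exponential family" can be illustrated with plain EM on a two-component 1-D Gaussian mixture; all numbers below are illustrative:

```python
import numpy as np

# Illustrative only: cluster by fitting a finite mixture in an
# exponential family (two 1-D Gaussians) with expectation-maximization.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 1000), rng.normal(3.0, 1.0, 1000)])

mu = np.array([x.min(), x.max()])    # crude but separating initialization
sig = np.ones(2)
pi = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibilities r[i, k] = P(cluster k | x_i)
    logp = (-0.5 * ((x[:, None] - mu) / sig) ** 2
            - np.log(sig) + np.log(pi))
    logp -= logp.max(axis=1, keepdims=True)       # numerical stability
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, standard deviations
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

labels = r.argmax(axis=1)            # hard cluster assignment
print(np.sort(mu))                   # close to the true means (-2, 3)
```

The paper replaces the fixed sufficient statistics of such a mixture with features learned by a neural network; the EM loop above only shows the mixture-estimation idea.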
2 code implementations • NeurIPS 2016 • Aapo Hyvarinen, Hiroshi Morioka
Nonlinear independent component analysis (ICA) provides an appealing framework for unsupervised feature learning, but the models proposed so far are not identifiable.
no code implementations • 9 Aug 2014 • Shohei Shimizu, Aapo Hyvarinen, Yoshinobu Kawahara
Structural equation models and Bayesian networks have been widely used to analyze causal relations between continuous variables.
no code implementations • 8 Jul 2013 • Kun Zhang, Heng Peng, Laiwan Chan, Aapo Hyvarinen
Model selection based on classical information criteria, such as BIC, is generally computationally demanding, but its properties are well studied.
no code implementations • 29 Mar 2013 • Tatsuya Tashiro, Shohei Shimizu, Aapo Hyvarinen, Takashi Washio
In this paper, we propose a new algorithm for learning causal orders that is robust against one typical violation of the model assumptions: latent confounders.