no code implementations • 6 Dec 2023 • SeungHwan An, Sungchul Hong, Jong-June Jeon
This measure enables us to capture both marginal and joint distributional information simultaneously, as it incorporates a mixture measure with point masses on standard basis vectors.
no code implementations • 25 Oct 2023 • SeungHwan An, Jong-June Jeon
The assumption of conditional independence among observed variables, commonly used in Variational Autoencoder (VAE) decoder modeling, has limitations when dealing with high-dimensional datasets or complex correlation structures among observed variables.
no code implementations • 2 Mar 2023 • Sungchul Hong, Jong-June Jeon
The optimality of asset allocation has been widely discussed through theoretical analyses of risk measures.
no code implementations • 28 Feb 2023 • Sungchul Hong, Yunjin Choi, Jong-June Jeon
In real data analysis, we use the Han River dataset from 2016 to 2021, compare the proposed model with deep learning models, and confirm that our model is interpretable and consistent with prior knowledge, such as the seasonality arising from the tidal force.
1 code implementation • 23 Feb 2023 • SeungHwan An, Kyungwoo Song, Jong-June Jeon
We present a new supervised learning technique for the Variational AutoEncoder (VAE) that allows it to learn a causally disentangled representation and generate causally disentangled outcomes simultaneously.
1 code implementation • NeurIPS 2023 • SeungHwan An, Jong-June Jeon
The Gaussianity assumption has been consistently criticized as a main limitation of the Variational Autoencoder (VAE) despite its efficiency in computational modeling.
1 code implementation • NeurIPS 2023 • Changdae Oh, Junhyuk So, Hoyoon Byun, Yongtaek Lim, Minchul Shin, Jong-June Jeon, Kyungwoo Song
Such a lack of alignment and uniformity might restrict the transferability and robustness of embeddings.
1 code implementation • 23 May 2021 • SeungHwan An, Hosik Choi, Jong-June Jeon
To improve the performance of our VAE in a classification task without loss of performance as a generative model, we employ a new semi-supervised classification method called SCI (Soft-label Consistency Interpolation).
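The abstract does not spell out how SCI works. Assuming it resembles mixup-style interpolation of inputs together with their soft labels (a hypothetical reading; the paper's actual consistency objective may differ), a minimal sketch looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_label_interpolation(x1, y1, x2, y2, alpha=1.0):
    """Mixup-style interpolation of inputs and soft (probability) labels.

    NOTE: hypothetical sketch, not the paper's definition of SCI; the
    actual consistency term between the classifier's prediction on the
    mixed input and the mixed soft label may be defined differently.
    """
    lam = rng.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    x_mix = lam * x1 + (1 - lam) * x2
    y_mix = lam * y1 + (1 - lam) * y2  # interpolated soft label
    return x_mix, y_mix, lam

# toy usage: two samples with soft labels over two classes
x1, y1 = np.array([1.0, 0.0]), np.array([0.9, 0.1])
x2, y2 = np.array([0.0, 1.0]), np.array([0.2, 0.8])
x_mix, y_mix, lam = soft_label_interpolation(x1, y1, x2, y2)
# the convex combination of two probability vectors is still a
# probability vector, so y_mix remains a valid soft label
assert np.isclose(y_mix.sum(), 1.0)
```

A consistency loss would then penalize divergence between the classifier's output on `x_mix` and `y_mix`, encouraging predictions that behave linearly between labeled points.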
no code implementations • 21 Dec 2018 • Jong-June Jeon, Yongdai Kim, Sungho Won, Hosik Choi
To reflect these characteristics, a specific regularized regression model with linear constraints is commonly used.