1 code implementation • 9 Jan 2024 • Youngjae Cho, HeeSun Bae, Seungjae Shin, Yeo Dong Youn, Weonyoung Joo, Il-Chul Moon
This paper presents a Bayesian framework for prompt learning, which can alleviate overfitting in few-shot learning applications and increase the adaptability of prompts to unseen instances.
1 code implementation • 8 Mar 2023 • Seungjae Shin, HeeSun Bae, DongHyeok Shin, Weonyoung Joo, Il-Chul Moon
Training neural networks on a large dataset requires substantial computational costs.
1 code implementation • 15 Feb 2021 • Dongjun Kim, Kyungwoo Song, Seungjae Shin, Wanmo Kang, Il-Chul Moon, Weonyoung Joo
A simulation is useful when the phenomenon of interest is either expensive to regenerate or irreproducible with the same context.
no code implementations • 24 Nov 2020 • Hyemi Kim, Seungjae Shin, JoonHo Jang, Kyungwoo Song, Weonyoung Joo, Wanmo Kang, Il-Chul Moon
Therefore, this paper proposes the Disentangled Causal Effect Variational Autoencoder (DCEVAE) to resolve this limitation by disentangling the exogenous uncertainty into two latent variables: one 1) independent of interventions, and one 2) correlated with interventions but without causality.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Seungjae Shin, Kyungwoo Song, JoonHo Jang, Hyemi Kim, Weonyoung Joo, Il-Chul Moon
Recent research demonstrates that word embeddings trained on human-generated corpora have strong gender biases in their embedding spaces, and these biases can produce discriminatory results in various downstream tasks.
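A common way to see such a bias (not necessarily the method used in this paper) is to project words onto a gender direction such as he − she; occupation words then receive signed scores whose sign indicates the gendered pole they lean toward. A minimal sketch with hypothetical toy vectors:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-d "embeddings" with made-up values, for illustration only.
emb = {
    "he":     np.array([ 1.0, 0.2, 0.0]),
    "she":    np.array([-1.0, 0.2, 0.0]),
    "nurse":  np.array([-0.6, 0.5, 0.3]),
    "doctor": np.array([ 0.7, 0.5, 0.3]),
}

# A gender direction from a definitional word pair.
gender_direction = emb["he"] - emb["she"]

# Occupation words that project with opposite signs onto this
# direction are gender-biased in the embedding space.
for word in ("nurse", "doctor"):
    print(word, round(cosine(emb[word], gender_direction), 2))
```

With real embeddings (e.g. word2vec or GloVe), stereotypically gendered occupations tend to show the same signed split, which is the bias the debiasing work targets.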
1 code implementation • 15 Oct 2020 • Dongjun Kim, Kyungwoo Song, YoonYeong Kim, Yongjin Shin, Wanmo Kang, Il-Chul Moon, Weonyoung Joo
This paper introduces a new sampling approach, called Neural Proposal (NP), for the simulation input that resolves the biased data collection as it guarantees the i.i.d.
no code implementations • 13 Apr 2020 • Dongjun Kim, Weonyoung Joo, Seungjae Shin, Kyungwoo Song, Il-Chul Moon
Generative Adversarial Network (GAN) can be viewed as an implicit estimator of a data distribution, and this perspective motivates using the adversarial concept in the true input parameter estimation of black-box generators.
no code implementations • 7 Apr 2020 • Seungjae Shin, Kyungwoo Song, JoonHo Jang, Hyemi Kim, Weonyoung Joo, Il-Chul Moon
Recent research demonstrates that word embeddings trained on human-generated corpora have strong gender biases in their embedding spaces, and these biases can produce discriminatory results in various downstream tasks.
no code implementations • 4 Mar 2020 • Weonyoung Joo, Dongjun Kim, Seungjae Shin, Il-Chul Moon
Stochastic gradient estimators for discrete random variables are widely explored; for example, the Gumbel-Softmax reparameterization trick for Bernoulli and categorical distributions.
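For context on the baseline this work builds on: the Gumbel-Softmax trick replaces a non-differentiable categorical draw with a temperature-scaled softmax over Gumbel-perturbed logits, so gradients can flow through the sample. A minimal NumPy sketch (illustrative, not the paper's estimator):

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=0.5, rng=None):
    """Draw a relaxed one-hot sample from a categorical distribution.

    Adds Gumbel(0, 1) noise to the logits and applies a softmax with
    temperature tau, giving a differentiable approximation of argmax
    sampling; tau -> 0 pushes the output toward a one-hot vector.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-9, 1.0, size=np.shape(logits))  # avoid log(0)
    gumbel = -np.log(-np.log(u))                       # Gumbel(0, 1) noise
    z = (np.asarray(logits, dtype=float) + gumbel) / tau
    z = z - z.max()                                    # numerical stability
    e = np.exp(z)
    return e / e.sum()

sample = gumbel_softmax_sample([2.0, 0.5, -1.0], tau=0.5)
# entries are non-negative and sum to 1
```

In a training loop the relaxed sample is used in place of the hard one-hot draw, so the categorical choice stays inside the backpropagation graph.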
no code implementations • 15 Nov 2019 • Mingi Ji, Weonyoung Joo, Kyungwoo Song, Yoon-Yeong Kim, Il-Chul Moon
This work merges the self-attention of the Transformer with sequential recommendation by adding a probabilistic model of the recommendation task's specifics.
1 code implementation • ICLR 2019 • Weonyoung Joo, Wonsung Lee, Sungrae Park, Il-Chul Moon
The experimental results show that 1) DirVAE achieves the best log-likelihood for latent representation modeling compared to the baselines; and 2) DirVAE produces more interpretable latent values without the collapsing issues that the baseline models suffer from.