Search Results for author: Weonyoung Joo

Found 11 papers, 5 papers with code

Make Prompts Adaptable: Bayesian Modeling for Vision-Language Prompt Learning with Data-Dependent Prior

1 code implementation • 9 Jan 2024 • Youngjae Cho, HeeSun Bae, Seungjae Shin, Yeo Dong Youn, Weonyoung Joo, Il-Chul Moon

This paper presents a Bayesian framework for prompt learning, which can alleviate overfitting in few-shot learning applications and increase the adaptability of prompts to unseen instances (see the sketch below).

Few-Shot Learning • Prompt Engineering
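
The snippet above only names the idea, so here is a minimal, hypothetical sketch of prompt learning with a data-dependent prior: prompt token embeddings get a Gaussian variational posterior, and the prior mean is predicted from the instance's image feature. All names, dimensions, and the single-linear prior network are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch: Bayesian prompts with a data-dependent Gaussian prior.
import torch
import torch.nn as nn

class BayesianPrompt(nn.Module):
    def __init__(self, n_tokens=4, dim=512, feat_dim=512):
        super().__init__()
        # Variational posterior over the prompt token embeddings.
        self.mu = nn.Parameter(torch.zeros(n_tokens, dim))
        self.log_sigma = nn.Parameter(torch.zeros(n_tokens, dim))
        # Data-dependent prior: its mean is predicted from the image feature.
        self.prior_net = nn.Linear(feat_dim, n_tokens * dim)
        self.n_tokens, self.dim = n_tokens, dim

    def forward(self, image_feat):
        # Reparameterized sample of the prompt tokens.
        sigma = self.log_sigma.exp()
        prompt = self.mu + sigma * torch.randn_like(sigma)
        # KL(q || p) against a unit-variance Gaussian prior centered at a
        # data-dependent mean, regularizing the prompt per instance.
        prior_mu = self.prior_net(image_feat).view(self.n_tokens, self.dim)
        kl = 0.5 * ((self.mu - prior_mu) ** 2 + sigma ** 2
                    - 2 * self.log_sigma - 1).sum()
        return prompt, kl

feat = torch.randn(512)
prompt, kl = BayesianPrompt()(feat)  # prompt feeds the text encoder; kl joins the loss
```

At training time the KL term would be added to the task loss; at test time one could use the posterior mean or average a few samples.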

Loss-Curvature Matching for Dataset Selection and Condensation

1 code implementation • 8 Mar 2023 • Seungjae Shin, HeeSun Bae, DongHyeok Shin, Weonyoung Joo, Il-Chul Moon

Training neural networks on a large dataset incurs substantial computational cost.

Neural Posterior Regularization for Likelihood-Free Inference

1 code implementation • 15 Feb 2021 • Dongjun Kim, Kyungwoo Song, Seungjae Shin, Wanmo Kang, Il-Chul Moon, Weonyoung Joo

A simulation is useful when the phenomenon of interest is either expensive to regenerate or irreproducible under the same context.

Bayesian Inference

Counterfactual Fairness with Disentangled Causal Effect Variational Autoencoder

no code implementations • 24 Nov 2020 • Hyemi Kim, Seungjae Shin, JoonHo Jang, Kyungwoo Song, Weonyoung Joo, Wanmo Kang, Il-Chul Moon

Therefore, this paper proposes the Disentangled Causal Effect Variational Autoencoder (DCEVAE) to resolve this limitation by disentangling the exogenous uncertainty into two latent variables: either 1) independent of interventions or 2) correlated with interventions without causality (see the sketch below).

Attribute • Causal Inference +3
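
To make the two-latent-variable split above concrete, here is an illustrative encoder in the spirit of that description: one latent is encoded from x alone (intended to be independent of the intervention a), the other from (x, a) (allowed to correlate with a). Module names and dimensions are assumptions, not DCEVAE's released code.

```python
# Illustrative two-latent VAE encoder; names and sizes are assumptions.
import torch
import torch.nn as nn

class TwoLatentEncoder(nn.Module):
    def __init__(self, x_dim=20, z_dim=4):
        super().__init__()
        # q(z_ind | x): latent intended to be independent of the intervention a.
        self.enc_ind = nn.Linear(x_dim, 2 * z_dim)
        # q(z_corr | x, a): latent allowed to correlate with a, without causality.
        self.enc_corr = nn.Linear(x_dim + 1, 2 * z_dim)

    def forward(self, x, a):
        mu_i, logvar_i = self.enc_ind(x).chunk(2, dim=-1)
        mu_c, logvar_c = self.enc_corr(torch.cat([x, a], dim=-1)).chunk(2, dim=-1)
        # Reparameterized samples of both latents.
        z_ind = mu_i + (0.5 * logvar_i).exp() * torch.randn_like(mu_i)
        z_corr = mu_c + (0.5 * logvar_c).exp() * torch.randn_like(mu_c)
        return z_ind, z_corr  # decode x from (z_ind, z_corr, a); flip a for counterfactuals

x, a = torch.randn(8, 20), torch.randint(0, 2, (8, 1)).float()
z_ind, z_corr = TwoLatentEncoder()(x, a)
```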

Neutralizing Gender Bias in Word Embeddings with Latent Disentanglement and Counterfactual Generation

no code implementations • Findings of the Association for Computational Linguistics 2020 • Seungjae Shin, Kyungwoo Song, JoonHo Jang, Hyemi Kim, Weonyoung Joo, Il-Chul Moon

Recent research demonstrates that word embeddings, trained on human-generated corpora, have strong gender biases in their embedding spaces, and these biases can result in discriminatory outcomes in various downstream tasks.

counterfactual • Disentanglement +1

Sequential Likelihood-Free Inference with Neural Proposal

1 code implementation • 15 Oct 2020 • Dongjun Kim, Kyungwoo Song, YoonYeong Kim, Yongjin Shin, Wanmo Kang, Il-Chul Moon, Weonyoung Joo

This paper introduces a new sampling approach, called Neural Proposal (NP), for the simulation input that resolves biased data collection by guaranteeing i.i.d. sampling (see the sketch below).

Bayesian Inference
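
The i.i.d. guarantee mentioned above is easy to see with a sketch: if simulation inputs come from independent forward passes of a neural sampler over fresh noise, the draws are i.i.d. by construction, unlike correlated MCMC draws from a posterior. The architecture and names below are assumptions, not the paper's implementation.

```python
# Assumed sketch of a neural proposal: noise -> simulation input, i.i.d. draws.
import torch
import torch.nn as nn

class NeuralProposal(nn.Module):
    def __init__(self, noise_dim=8, theta_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 64), nn.ReLU(), nn.Linear(64, theta_dim))
        self.noise_dim = noise_dim

    def sample(self, n):
        # Each row uses independent fresh noise, so the inputs are i.i.d.
        return self.net(torch.randn(n, self.noise_dim))

proposal = NeuralProposal()
thetas = proposal.sample(100)  # feed these simulation inputs to the simulator
```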

Adversarial Likelihood-Free Inference on Black-Box Generator

no code implementations • 13 Apr 2020 • Dongjun Kim, Weonyoung Joo, Seungjae Shin, Kyungwoo Song, Il-Chul Moon

A Generative Adversarial Network (GAN) can be viewed as an implicit estimator of a data distribution, and this perspective motivates using the adversarial concept for estimating the true input parameters of black-box generators (see the sketch below).

Generative Adversarial Network
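
A toy illustration of that adversarial idea: a discriminator separates observed data from generator outputs, and the input parameter is updated to fool it. To keep the sketch runnable the toy generator here is differentiable, which is an assumption; the paper's setting is a black-box (non-differentiable) generator.

```python
# Toy adversarial parameter estimation with a differentiable stand-in generator.
import torch
import torch.nn as nn

torch.manual_seed(0)
true_theta = torch.tensor([2.0])
simulator = lambda theta, n: theta + torch.randn(n, 1)  # stand-in generator
x_real = simulator(true_theta, 256)                     # observed data

theta = nn.Parameter(torch.zeros(1))                    # parameter to recover
disc = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
opt_t = torch.optim.Adam([theta], lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for _ in range(2000):
    x_fake = simulator(theta, 256)
    # Discriminator step: label observed data 1, simulated data 0.
    loss_d = bce(disc(x_real), torch.ones(256, 1)) + \
             bce(disc(x_fake.detach()), torch.zeros(256, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Parameter step: adjust theta so simulated data looks real.
    loss_t = bce(disc(x_fake), torch.ones(256, 1))
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()

print(round(theta.item(), 2))  # drifts toward true_theta = 2.0
```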

Neutralizing Gender Bias in Word Embedding with Latent Disentanglement and Counterfactual Generation

no code implementations • 7 Apr 2020 • Seungjae Shin, Kyungwoo Song, JoonHo Jang, Hyemi Kim, Weonyoung Joo, Il-Chul Moon

Recent research demonstrates that word embeddings, trained on human-generated corpora, have strong gender biases in their embedding spaces, and these biases can result in discriminatory outcomes in various downstream tasks.

counterfactual • Disentanglement +2

Generalized Gumbel-Softmax Gradient Estimator for Generic Discrete Random Variables

no code implementations • 4 Mar 2020 • Weonyoung Joo, Dongjun Kim, Seungjae Shin, Il-Chul Moon

Stochastic gradient estimators for discrete random variables are widely explored; for example, the Gumbel-Softmax reparameterization trick covers Bernoulli and categorical distributions (see the sketch below).

Topic Models
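
For reference, the Gumbel-Softmax trick the abstract mentions: categorical sampling is relaxed into a differentiable softmax over Gumbel-perturbed logits, with a temperature controlling how close the sample is to one-hot.

```python
# Standard Gumbel-Softmax relaxation of categorical sampling.
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=0.5):
    # Gumbel(0, 1) noise: -log(-log(U)), U ~ Uniform(0, 1); eps avoids log(0).
    u = torch.rand_like(logits)
    gumbel = -torch.log(-torch.log(u + 1e-20) + 1e-20)
    # Relax the argmax over perturbed logits into a softmax with temperature tau.
    return F.softmax((logits + gumbel) / tau, dim=-1)

logits = torch.tensor([1.0, 2.0, 0.5], requires_grad=True)
y = gumbel_softmax_sample(logits)                 # near-one-hot, differentiable
loss = (y * torch.tensor([0.0, 1.0, 2.0])).sum()  # e.g. a relaxed expected index
loss.backward()                                   # gradients reach the logits
```

As tau approaches 0 the samples approach one-hot categorical draws but gradients become noisier, so in practice tau is annealed. PyTorch also ships this as `torch.nn.functional.gumbel_softmax`.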

Sequential Recommendation with Relation-Aware Kernelized Self-Attention

no code implementations • 15 Nov 2019 • Mingi Ji, Weonyoung Joo, Kyungwoo Song, Yoon-Yeong Kim, Il-Chul Moon

This work merges the self-attention of the Transformer with sequential recommendation by adding a probabilistic model of the recommendation task's specifics.

Relation • Sequential Recommendation

Dirichlet Variational Autoencoder

1 code implementation • ICLR 2019 • Weonyoung Joo, Wonsung Lee, Sungrae Park, Il-Chul Moon

The experimental results show that 1) DirVAE models latent representations with the best log-likelihood among the baselines; and 2) DirVAE produces more interpretable latent values without the collapsing issues that the baseline models suffer from.

General Classification • Topic Models
