Search Results for author: Yunji Kim

Found 9 papers, 4 papers with code

STELLA: Continual Audio-Video Pre-training with Spatio-Temporal Localized Alignment

no code implementations • 12 Oct 2023 • Jaewoo Lee, Jaehong Yoon, Wonjae Kim, Yunji Kim, Sung Ju Hwang

Continuously learning a variety of audio-video semantics over time is crucial for audio-related reasoning tasks in our ever-evolving world.

Continual Learning • Representation Learning +1

Dense Text-to-Image Generation with Attention Modulation

1 code implementation • ICCV 2023 • Yunji Kim, Jiyoung Lee, Jin-Hwa Kim, Jung-Woo Ha, Jun-Yan Zhu

To address this, we propose DenseDiffusion, a training-free method that adapts a pre-trained text-to-image model to handle such dense captions while offering control over the scene layout.

Text-to-Image Generation
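The DenseDiffusion abstract describes adapting a pre-trained text-to-image model to dense, region-specific captions by modulating attention. As a toy illustration of the general idea of biasing cross-attention logits with region masks (the function name, signature, and `strength` parameter are illustrative assumptions, not the paper's actual API), one could sketch:

```python
import numpy as np

def modulated_attention(scores, region_mask, strength=1.0):
    """Toy sketch of layout-guided attention modulation.

    scores:      (num_queries, num_keys) raw cross-attention logits,
                 e.g. image pixels attending to caption tokens.
    region_mask: boolean array of the same shape; True where a text
                 token is supposed to describe that image region.
    strength:    how strongly to push attention toward matching regions.
    """
    # Boost logits inside the designated region, suppress them outside.
    biased = scores + strength * np.where(region_mask, 1.0, -1.0)
    # Standard numerically stable softmax over the key axis.
    weights = np.exp(biased - biased.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)
```

In the actual method the modulation is applied inside a pre-trained diffusion model's attention layers without any training, which is what makes the approach "training-free"; this sketch only shows the masking-and-renormalizing pattern in isolation.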

Context-Preserving Two-Stage Video Domain Translation for Portrait Stylization

no code implementations • 30 May 2023 • Doyeon Kim, Eunji Ko, Hyunsu Kim, Yunji Kim, Junho Kim, Dongchan Min, Junmo Kim, Sung Ju Hwang

Portrait stylization, which translates a real human face image into an artistically stylized image, has attracted considerable interest, and many prior works have achieved impressive quality in recent years.

Translation

Custom-Edit: Text-Guided Image Editing with Customized Diffusion Models

no code implementations • 25 May 2023 • Jooyoung Choi, Yunjey Choi, Yunji Kim, Junho Kim, Sungroh Yoon

Text-to-image diffusion models can generate diverse, high-fidelity images based on user-provided text prompts.

text-guided-image-editing

Mutual Information Divergence: A Unified Metric for Multimodal Generative Models

1 code implementation • 25 May 2022 • Jin-Hwa Kim, Yunji Kim, Jiyoung Lee, Kang Min Yoo, Sang-Woo Lee

Following the recent trend of multimodal generative evaluations exploiting vision-and-language pre-trained models, we propose the negative Gaussian cross-mutual information computed over CLIP features as a unified metric, coined Mutual Information Divergence (MID).

Hallucination Pair-wise Detection (1-ref) • Hallucination Pair-wise Detection (4-ref) +5
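The MID abstract hinges on mutual information between paired features under a Gaussian assumption. As a minimal sketch of that underlying quantity (not the authors' exact MID formulation, which uses CLIP image and text embeddings), the Gaussian mutual-information estimate I(X; Y) = ½ log(det Σx · det Σy / det Σxy) can be computed from paired feature matrices:

```python
import numpy as np

def gaussian_mutual_information(x, y):
    """Estimate I(X; Y) assuming the stacked features are jointly Gaussian.

    x: (n_samples, dx) feature matrix, e.g. image embeddings.
    y: (n_samples, dy) feature matrix, e.g. text embeddings.
    Returns 0.5 * log(det(Sx) * det(Sy) / det(Sxy)) in nats.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    d = x.shape[1]
    # Sample covariance of the concatenated features; the marginal
    # covariances Sx and Sy are its diagonal blocks.
    s_joint = np.cov(np.hstack([x, y]), rowvar=False)
    s_x = s_joint[:d, :d]
    s_y = s_joint[d:, d:]
    # slogdet avoids overflow/underflow in high dimensions.
    _, logdet_x = np.linalg.slogdet(s_x)
    _, logdet_y = np.linalg.slogdet(s_y)
    _, logdet_joint = np.linalg.slogdet(s_joint)
    return 0.5 * (logdet_x + logdet_y - logdet_joint)
```

For strongly correlated pairs this estimate is large and positive, and for independent pairs it is near zero, which is the behavior a mutual-information-based alignment metric relies on.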

Contrastive Fine-grained Class Clustering via Generative Adversarial Networks

1 code implementation • ICLR 2022 • Yunji Kim, Jung-Woo Ha

Specifically, we map the input of a generator, which was sampled from the categorical distribution, to the embedding space of the discriminator and let them act as a cluster centroid.

Clustering • Contrastive Learning

Unsupervised Keypoint Learning for Guiding Class-Conditional Video Prediction

1 code implementation • NeurIPS 2019 • Yunji Kim, Seonghyeon Nam, In Cho, Seon Joo Kim

To generate future frames, we first detect keypoints of a moving object and predict future motion as a sequence of keypoints.

Video Prediction

Text-Adaptive Generative Adversarial Networks: Manipulating Images with Natural Language

no code implementations • NeurIPS 2018 • Seonghyeon Nam, Yunji Kim, Seon Joo Kim

Our task aims to semantically modify visual attributes of an object in an image according to the text describing the new visual appearance.

Generative Adversarial Network
