no code implementations • 5 Dec 2023 • Yeji Song, Wonsik Shin, Junsoo Lee, Jeesoo Kim, Nojun Kwak
Finally, we decouple the motion from the appearance of the source video with an additional pseudo word.
no code implementations • 13 Sep 2023 • Namhyuk Ahn, Junsoo Lee, Chunggi Lee, Kunhee Kim, Daesik Kim, Seung-Hun Nam, Kibeom Hong
Recent progress in large-scale text-to-image models has yielded remarkable accomplishments, finding various applications in the art domain.
1 code implementation • ICCV 2023 • Kibeom Hong, Seogkyu Jeon, Junsoo Lee, Namhyuk Ahn, Kunhee Kim, Pilhyeon Lee, Daesik Kim, Youngjung Uh, Hyeran Byun
To deliver the artistic expression of the target style, recent studies exploit the attention mechanism owing to its ability to map the local patches of the style image to the corresponding patches of the content image.
1 code implementation • 24 May 2023 • Sungnyun Kim, Junsoo Lee, Kibeom Hong, Daesik Kim, Namhyuk Ahn
In this study, we aim to extend the capabilities of diffusion-based text-to-image (T2I) generation models by incorporating diverse modalities beyond textual description, such as sketch, box, color palette, and style embedding, within a single model.
no code implementations • 17 May 2023 • Kwangho Lee, Patrick Kwon, Myung Ki Lee, Namhyuk Ahn, Junsoo Lee
To enable this, we introduce a landmark-parameter morphable model (LPMM), which offers control over the facial landmark domain through a set of semantic parameters.
1 code implementation • 31 Mar 2023 • Kangyeol Kim, Sunghyun Park, Junsoo Lee, Jaegul Choo
Recent large-scale text-to-image generative models have shown remarkable results in generating high-fidelity images.
no code implementations • 25 Oct 2022 • Youngin Cho, Junsoo Lee, Soyoung Yang, Juntae Kim, Yeojeong Park, Haneol Lee, Mohammad Azam Khan, Daesik Kim, Jaegul Choo
Existing deep interactive colorization models have focused on utilizing various types of interaction, such as point-wise color hints, scribbles, or natural-language text, to reflect a user's intent at runtime.
no code implementations • 21 Dec 2021 • Kangyeol Kim, Sunghyun Park, Junsoo Lee, Joonseok Lee, Sookyung Kim, Jaegul Choo, Edward Choi
In order to perform unconditional video generation, we must learn the distribution of real-world videos.
1 code implementation • 15 Nov 2021 • Kangyeol Kim, Sunghyun Park, Jaeseong Lee, Sunghyo Chung, Junsoo Lee, Jaegul Choo
We present a novel Animation CelebHeads dataset (AnimeCeleb) to address animation head reenactment.
no code implementations • 1 Jan 2021 • Junsoo Lee, Hojoon Lee, Inkyu Shin, Jaekyoung Bae, In So Kweon, Jaegul Choo
Learning visual representations using large-scale unlabelled images is a holy grail for most computer vision tasks.
1 code implementation • 16 Oct 2020 • Sunghyun Park, Kangyeol Kim, Junsoo Lee, Jaegul Choo, Joonseok Lee, Sookyung Kim, Edward Choi
Video generation models often operate under the assumption of fixed frame rates, which leads to suboptimal performance when handling flexible frame rates (e.g., increasing the frame rate of the more dynamic portion of a video, or handling missing video frames).
no code implementations • CVPR 2020 • Junsoo Lee, Eungyeup Kim, Yunsung Lee, Dongjun Kim, Jaehyuk Chang, Jaegul Choo
However, it is difficult to prepare a training dataset with a sufficient number of semantically meaningful image pairs, along with ground-truth colored images that reflect a given reference (e.g., coloring a sketch of an originally blue car given a green reference car).
1 code implementation • 9 Jun 2019 • Seungjoo Yoo, Hyojin Bahng, Sunghyo Chung, Junsoo Lee, Jaehyuk Chang, Jaegul Choo
Despite recent advances in deep learning-based automatic colorization, existing methods are still limited when it comes to few-shot learning.