Search Results for author: Yeongtak Oh

Found 3 papers, 1 paper with code

Efficient Diffusion-Driven Corruption Editor for Test-Time Adaptation

no code implementations • 16 Mar 2024 • Yeongtak Oh, Jonghyun Lee, Jooyoung Choi, Dahuin Jung, Uiwon Hwang, Sungroh Yoon

To address this, we propose a novel TTA method by leveraging a latent diffusion model (LDM) based image editing model and fine-tuning it with our newly introduced corruption modeling scheme.

Data Augmentation • Test-time Adaptation

On mitigating stability-plasticity dilemma in CLIP-guided image morphing via geodesic distillation loss

1 code implementation • 19 Jan 2024 • Yeongtak Oh, Saehyung Lee, Uiwon Hwang, Sungroh Yoon

Large-scale language-vision pre-training models, such as CLIP, have achieved remarkable text-guided image morphing results by leveraging several unconditional generative models.

Image Morphing

ControlDreamer: Stylized 3D Generation with Multi-View ControlNet

no code implementations • 2 Dec 2023 • Yeongtak Oh, Jooyoung Choi, Yongsung Kim, MinJun Park, Chaehun Shin, Sungroh Yoon

Recent advancements in text-to-3D generation have significantly contributed to the automation and democratization of 3D content creation.

3D Generation • text-guided-generation +1
