no code implementations • 22 Mar 2024 • Geon Yeong Park, Hyeonho Jeong, Sang Wan Lee, Jong Chul Ye
The evolution of diffusion models has greatly impacted video generation and understanding.
no code implementations • 18 Mar 2024 • Hyeonho Jeong, Jinho Chang, Geon Yeong Park, Jong Chul Ye
Text-driven diffusion-based video editing presents a unique challenge not encountered in the image editing literature: establishing real-world motion.
no code implementations • 1 Dec 2023 • Hyeonho Jeong, Geon Yeong Park, Jong Chul Ye
Text-to-video diffusion models have advanced video generation significantly.
1 code implementation • 2 Oct 2023 • Hyeonho Jeong, Jong Chul Ye
However, when confronted with the complexities of multi-attribute editing scenarios, they exhibit shortcomings such as omitting or overlooking intended attribute changes, modifying the wrong elements of the input video, and failing to preserve regions of the input video that should remain intact.
no code implementations • 28 Aug 2023 • YeongHyeon Park, Sungho Kang, Myung Jin Kim, Hyeonho Jeong, Hyunkyu Park, Hyeong Seok Kim, Juneho Yi
In contrast, we note that constraining the generalization ability of reconstruction can also be achieved simply through a steep-shaped loss landscape.
no code implementations • 8 Feb 2023 • Hyeonho Jeong, Gihyun Kwon, Jong Chul Ye
Recent advancements in large-scale text-to-image models have opened new possibilities for guiding image creation through human-devised natural language.