1 code implementation • ICCV 2023 • Junho Kim, Eun Sun Lee, Young Min Kim
While panoramic images can easily capture the surrounding context from commodity devices, the estimated depth shares the limitations of conventional image-based depth estimation; the performance deteriorates under large domain shifts and the absolute values are still ambiguous to infer from 2D observations.
no code implementations • 29 Nov 2022 • Eun Sun Lee, Junho Kim, SangWon Park, Young Min Kim
We propose a domain adaptation method, MoDA, which adapts a pretrained embodied agent to a new, noisy environment without ground-truth supervision.
1 code implementation • 10 Aug 2022 • Sangjoon Park, Eun Sun Lee, Kyung Sook Shin, Jeong Eun Lee, Jong Chul Ye
Recent advances in vision-language models shed light on the long-standing problem of oversight AI by understanding both visual and textual concepts and their semantic correspondences.
no code implementations • 23 Mar 2022 • Hyungjin Chung, Eun Sun Lee, Jong Chul Ye
Our network, trained only with coronal knee scans, excels even on out-of-distribution in vivo liver MRI data contaminated with a complex mixture of noise.
no code implementations • 6 Dec 2021 • Jaeyoung Huh, Shujaat Khan, Sungjin Choi, Dongkuk Shin, Eun Sun Lee, Jong Chul Ye
In contrast to 2-D ultrasound (US) for uniaxial plane imaging, a 3-D US imaging system can visualize a volume along three axial planes.
no code implementations • 14 Oct 2021 • Eun Sun Lee, Junho Kim, Young Min Kim
We propose a lightweight, self-supervised adaptation method that enables a visual navigation agent to generalize to unseen environments.
1 code implementation • 14 Oct 2021 • Junho Kim, Eun Sun Lee, MinGi Lee, Donsu Zhang, Young Min Kim
We present SGoLAM, short for simultaneous goal localization and mapping, which is a simple and efficient algorithm for Multi-Object Goal navigation.