no code implementations • ECCV 2020 • Haoang Li, Pyojin Kim, Ji Zhao, Kyungdon Joo, Zhipeng Cai, Zhe Liu, Yun-hui Liu
In the Atlanta world, given a set of image lines, we aim to cluster them by their associated vanishing points (VPs), whose number is unknown a priori.
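As a rough illustration of the clustering problem (not the paper's algorithm), the sketch below uses a naive sequential-consensus strategy in plain Python: a candidate VP is hypothesized as the intersection of two homogeneous image lines, the lines incident to it are peeled off as one cluster, and the process repeats. All function names and tolerances here are illustrative assumptions.

```python
import itertools
import math

def cross(a, b):
    # cross product of 3-vectors (homogeneous points/lines)
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def line_through(p, q):
    # homogeneous line through two image points (x, y) ~ (x, y, 1)
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def cluster_by_vps(lines, tol=1e-3):
    """Greedy sequential consensus: repeatedly hypothesize a VP from a
    line pair, collect the lines incident to it as one cluster, and
    remove them; stops when no cluster of two or more lines remains."""
    remaining = list(range(len(lines)))
    clusters = []
    while len(remaining) >= 2:
        best_vp, best_in = None, []
        for i, j in itertools.combinations(remaining, 2):
            vp = cross(lines[i], lines[j])   # candidate VP = intersection
            norm = math.sqrt(sum(c*c for c in vp))
            if norm < 1e-12:                 # near-parallel pair, skip
                continue
            vp = tuple(c / norm for c in vp)
            # a line l is incident to vp when |l . vp| / |l| is small
            inl = [k for k in remaining
                   if abs(sum(a*b for a, b in zip(lines[k], vp)))
                      / math.sqrt(sum(c*c for c in lines[k])) < tol]
            if len(inl) > len(best_in):
                best_vp, best_in = vp, inl
        if len(best_in) < 2:
            break
        clusters.append((best_vp, best_in))
        remaining = [k for k in remaining if k not in best_in]
    return clusters
```

The O(n^2) pair enumeration is only workable for small line sets; the appeal of the paper's setting is precisely that neither the VPs nor their number is given in advance.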
no code implementations • 14 Jul 2022 • Jungha Kim, Minkyeong Song, Yeoeun Lee, Moonkyeong Jung, Pyojin Kim
Commercial visual-inertial odometry (VIO) systems have been gaining attention as cost-effective, off-the-shelf methods for six-degrees-of-freedom (6-DoF) ego-motion tracking, estimating accurate and consistent camera poses without requiring external localization from motion-capture or global positioning systems.
no code implementations • CVPR 2021 • Haoang Li, Kai Chen, Ji Zhao, Jiangliu Wang, Pyojin Kim, Zhe Liu, Yun-hui Liu
In contrast, we propose the first approach suitable for both structured and unstructured scenes.
1 code implementation • 18 May 2021 • Sachini Herath, Saghar Irandoust, Bowen Chen, Yiming Qian, Pyojin Kim, Yasutaka Furukawa
The paper proposes a multi-modal sensor fusion algorithm that fuses WiFi, IMU, and floorplan information to infer an accurate and dense location history in indoor environments.
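A toy one-dimensional illustration of the underlying fusion idea (far simpler than the paper's actual multi-modal algorithm): a scalar Kalman filter integrates noisy relative IMU displacements and corrects the accumulated drift whenever a sparse absolute WiFi position fix arrives. Function names, noise values, and the 1-D setting are all illustrative assumptions.

```python
def fuse_1d(imu_steps, wifi_fixes, q=0.01, r=4.0):
    """Scalar Kalman filter: dead-reckon relative IMU displacements
    (process noise q per step) and correct with sparse absolute WiFi
    fixes (measurement noise r). wifi_fixes maps time index -> position."""
    x, p = 0.0, 1.0                 # position estimate and its variance
    track = []
    for t, dx in enumerate(imu_steps):
        x, p = x + dx, p + q        # predict: integrate one IMU step
        if t in wifi_fixes:         # update: absolute WiFi position fix
            k = p / (p + r)         # Kalman gain
            x += k * (wifi_fixes[t] - x)
            p *= (1.0 - k)
        track.append(x)
    return track
```

Even with only two fixes, the corrected trajectory ends closer to the true position than raw IMU integration, which is the basic reason relative and absolute modalities complement each other indoors.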
no code implementations • ICCV 2021 • Haoang Li, Kai Chen, Pyojin Kim, Kuk-Jin Yoon, Zhe Liu, Kyungdon Joo, Yun-hui Liu
Based on this map, we can detect all the VPs.
no code implementations • 18 Sep 2020 • Haram Kim, Pyojin Kim, H. Jin Kim
The proposed algorithm decouples moving-object detection from visual odometry (VO), so that an arbitrary robust VO method can be combined with moving-object detection in dynamic scenes, whereas other VO algorithms designed for dynamic environments are inseparable from their detection modules.
no code implementations • ECCV 2018 • Pyojin Kim, Brian Coltin, H. Jin Kim
We propose a new formulation for including orthogonal planar features as a global model into a linear SLAM approach based on sequential Bayesian filtering.
no code implementations • CVPR 2018 • Pyojin Kim, Brian Coltin, H. Jin Kim
We propose a novel approach to estimate the three-degrees-of-freedom (3-DoF), drift-free rotational motion of an RGB-D camera from only a single line and a single plane in the Manhattan world (MW).
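The geometric core of this idea can be sketched as follows, assuming a plane normal and a line direction already known to align with two distinct Manhattan axes: the normal gives one axis, Gram-Schmidt orthogonalization of the line direction gives a second, and their cross product completes the right-handed frame. This is only a minimal sketch of the alignment step, not the paper's full method; all names are illustrative.

```python
import math

def cross(a, b):
    # cross product of 3-vectors
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def rotation_from_plane_and_line(plane_normal, line_dir):
    """Build a rotation whose rows are the Manhattan-frame axes
    expressed in the camera frame: row 1 = unit plane normal,
    row 2 = line direction with its component along the normal
    projected out, row 3 = their cross product."""
    x = normalize(plane_normal)
    d = normalize(line_dir)
    proj = sum(a * b for a, b in zip(d, x))            # d . x
    y = normalize(tuple(di - proj * xi for di, xi in zip(d, x)))
    z = cross(x, y)
    return [x, y, z]
```

Because both the plane normal and the line direction are absolute cues tied to the fixed Manhattan axes, a rotation recovered this way does not accumulate drift the way frame-to-frame rotation integration does.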