no code implementations • 19 Mar 2024 • Juan D. Galvis, Xingxing Zuo, Simon Schaefer, Stefan Leutenegger
This paper introduces a 3D shape completion approach using a 3D latent diffusion model optimized for completing shapes, represented as Truncated Signed Distance Functions (TSDFs), from partial 3D scans.
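The TSDF representation mentioned above can be illustrated with a minimal sketch (not the paper's code; the grid resolution and truncation distance are made-up parameters, and sign recovery from sensor rays is omitted): for each voxel, store the distance to the nearest point of a partial scan, truncated to a narrow band around the surface.

```python
import numpy as np

def tsdf_from_points(points, grid_res=32, trunc=0.1):
    """Brute-force unsigned truncated distance field over the unit cube.
    A real TSDF would also carry sign (inside/outside) from sensor ray
    directions; this sketch keeps only the truncated magnitude."""
    # Voxel centers on a regular grid over [0, 1]^3.
    lin = (np.arange(grid_res) + 0.5) / grid_res
    gx, gy, gz = np.meshgrid(lin, lin, lin, indexing="ij")
    centers = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    # Distance from every voxel center to its nearest scan point.
    d = np.linalg.norm(centers[:, None, :] - points[None, :, :], axis=-1)
    dist = d.min(axis=1)
    # Truncate: values beyond `trunc` carry no surface information.
    return np.clip(dist, 0.0, trunc).reshape(grid_res, grid_res, grid_res)

# Toy partial scan: points on the upper hemisphere of a sphere only.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
pts = 0.3 * pts / np.linalg.norm(pts, axis=1, keepdims=True) + 0.5
pts = pts[pts[:, 2] >= 0.5]  # keep the upper half ("partial" scan)
vol = tsdf_from_points(pts)
```

A completion model such as the one described then predicts the full TSDF volume from such a partial one.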
1 code implementation • 13 Mar 2024 • Connor Lee, Matthew Anderson, Nikhil Ranganathan, Xingxing Zuo, Kevin Do, Georgia Gkioxari, Soon-Jo Chung
We present the first publicly available RGB-thermal dataset designed for aerial robotics operating in natural environments.
1 code implementation • 3 Feb 2024 • Han Li, Yukai Ma, Yuehao Huang, Yaqing Gu, Weihua Xu, Yong Liu, Xingxing Zuo
Dense depth recovery is crucial in autonomous driving, serving as a foundational element for obstacle avoidance, 3D object detection, and local path planning.
no code implementations • 22 Jan 2024 • Mengmeng Wang, Jiazheng Xing, Boyuan Jiang, Jun Chen, Jianbiao Mei, Xingxing Zuo, Guang Dai, Jingdong Wang, Yong Liu
In this paper, we introduce a novel multimodal, multi-task CLIP adapting framework to address these challenges, preserving both high supervised performance and robust transferability.
1 code implementation • 9 Jan 2024 • Han Li, Yukai Ma, Yaqing Gu, Kewei Hu, Yong Liu, Xingxing Zuo
To circumvent this issue, we learn to augment versatile and robust monocular depth prediction with the dense metric scale induced from sparse and noisy Radar data.
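A classical baseline for inducing metric scale from sparse range returns (a hedged sketch with hypothetical variable names, not the learned fusion described in the paper) is a least-squares fit of a single scale factor between the up-to-scale depth prediction and the sparse radar depths:

```python
import numpy as np

def metric_scale(pred_rel_depth, radar_depth, radar_mask):
    """Least-squares scale aligning a relative (up-to-scale) depth map
    with sparse metric radar returns: argmin_s ||s*p - r||^2, which
    has the closed form s = <p, r> / <p, p>."""
    p = pred_rel_depth[radar_mask]
    r = radar_depth[radar_mask]
    return float(np.dot(p, r) / np.dot(p, p))

# Toy example: true scale 4.2, radar hits at ~30 sparse pixels.
rng = np.random.default_rng(1)
rel = rng.uniform(0.5, 2.0, size=(48, 64))
mask = np.zeros_like(rel, dtype=bool)
mask[rng.integers(0, 48, 30), rng.integers(0, 64, 30)] = True
noisy_radar = 4.2 * rel + rng.normal(scale=0.01, size=rel.shape)
s = metric_scale(rel, noisy_radar, mask)
```

A learned approach can go beyond this single global scalar by predicting spatially varying scale while staying robust to radar noise.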
no code implementations • 3 Jan 2024 • Xingxing Zuo, Pouya Samangouei, Yunwen Zhou, Yan Di, Mingyang Li
This is achieved by distilling feature maps generated from image-based foundation models into those rendered from our 3D model.
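Feature distillation of this kind is typically driven by a per-pixel similarity loss between the rendered feature map and the frozen teacher's feature map; a minimal sketch (the cosine formulation here is an assumption, not necessarily the paper's exact loss):

```python
import numpy as np

def cosine_distill_loss(rendered, teacher, eps=1e-8):
    """Mean (1 - cosine similarity) between per-pixel feature vectors.
    `rendered`: features rendered from the 3D model, shape (H, W, C).
    `teacher`:  features from an image foundation model, same shape."""
    rn = rendered / (np.linalg.norm(rendered, axis=-1, keepdims=True) + eps)
    tn = teacher / (np.linalg.norm(teacher, axis=-1, keepdims=True) + eps)
    cos = np.sum(rn * tn, axis=-1)  # per-pixel cosine similarity
    return float(np.mean(1.0 - cos))

# Identical feature maps give (near-)zero loss; unrelated ones do not.
rng = np.random.default_rng(2)
f = rng.normal(size=(8, 8, 16))
zero_loss = cosine_distill_loss(f, f)
rand_loss = cosine_distill_loss(f, rng.normal(size=(8, 8, 16)))
```

Minimizing such a loss pushes the 3D representation to render features consistent with the 2D foundation model from any viewpoint.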
no code implementations • 20 Dec 2023 • Jens Naumann, Binbin Xu, Stefan Leutenegger, Xingxing Zuo
We introduce a novel monocular visual odometry (VO) system, NeRF-VO, that integrates learning-based sparse visual odometry for low-latency camera tracking and a neural radiance scene representation for sophisticated dense reconstruction and novel view synthesis.
no code implementations • 8 Dec 2023 • Hanfeng Wu, Xingxing Zuo, Stefan Leutenegger, Or Litany, Konrad Schindler, Shengyu Huang
We introduce DyNFL, a novel neural field-based approach for high-fidelity re-simulation of LiDAR scans in dynamic driving scenes.
no code implementations • 14 Jun 2023 • Yingye Xin, Xingxing Zuo, Dongyue Lu, Stefan Leutenegger
The sparse depth from VIO is first completed by a single-view depth completion network.
no code implementations • 24 May 2023 • Xingxing Zuo, Nan Yang, Nathaniel Merrill, Binbin Xu, Stefan Leutenegger
Incrementally recovering 3D dense structures from monocular videos is of paramount importance since it enables various robotics and AR applications.
no code implementations • 16 May 2023 • Mengmeng Wang, Teli Ma, Xingxing Zuo, Jiajun Lv, Yong Liu
Additionally, considering the sparsity of point clouds, we design a lateral correlation pyramid structure for the encoder that retains as many points as possible by integrating hierarchical correlated features.
1 code implementation • 21 Oct 2022 • Lu Sang, Bjoern Haefner, Xingxing Zuo, Daniel Cremers
Finely detailed reconstructions are in high demand in many applications.
no code implementations • ICCV 2021 • Peidong Liu, Xingxing Zuo, Viktor Larsson, Marc Pollefeys
Motion blur is one of the major challenges remaining for visual odometry methods.
no code implementations • 18 Dec 2020 • Xingxing Zuo, Nathaniel Merrill, Wei Li, Yong Liu, Marc Pollefeys, Guoquan Huang
In this work, we present a lightweight, tightly-coupled deep depth network and visual-inertial odometry (VIO) system, which can provide accurate state estimates and dense depth maps of the immediate surroundings.
no code implementations • 17 Aug 2020 • Xingxing Zuo, Yulin Yang, Patrick Geneva, Jiajun Lv, Yong Liu, Guoquan Huang, Marc Pollefeys
Only the tracked planar points belonging to the same plane will be used for plane initialization, which makes the plane extraction efficient and robust.
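Plane initialization from a set of co-planar tracked points is commonly done with an SVD-based least-squares fit; a minimal sketch under that standard formulation (not the authors' implementation):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns (unit normal, d)
    with n.x + d = 0 for points x on the plane. The normal is the
    right singular vector of the centered points with the smallest
    singular value, i.e. the direction of least variance."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = -float(normal @ centroid)
    return normal, d

# Toy check: points on the plane z = 2 recover normal (0, 0, +/-1).
rng = np.random.default_rng(3)
xy = rng.uniform(-1, 1, size=(50, 2))
pts = np.column_stack([xy, np.full(50, 2.0)])
n, d = fit_plane(pts)
```

Restricting the fit to points already tracked as co-planar, as the sentence above describes, avoids wasting such a fit on outlier-heavy point sets.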
Robotics
2 code implementations • 29 Jul 2020 • Jiajun Lv, Jinhong Xu, Kewei Hu, Yong Liu, Xingxing Zuo
Sensor calibration is a fundamental building block of any multi-sensor fusion system.
Robotics
no code implementations • 13 Nov 2019 • Xingxing Zuo, Mingming Zhang, Yiming Chen, Yong Liu, Guoquan Huang, Mingyang Li
While visual localization and SLAM have witnessed great progress over the past decades, few works have explicitly considered the kinematic (or dynamic) constraints of the real robotic system when designing state estimators for deployment on a mobile robot in practice.
no code implementations • 9 Sep 2019 • Xingxing Zuo, Patrick Geneva, Woosik Lee, Yong Liu, Guoquan Huang
This paper presents a tightly-coupled multi-sensor fusion algorithm termed LiDAR-inertial-camera fusion (LIC-Fusion), which efficiently fuses IMU measurements, sparse visual features, and extracted LiDAR points.
Robotics
no code implementations • 8 Sep 2019 • Mingming Zhang, Xingxing Zuo, Yiming Chen, Yong Liu, Mingyang Li
In this paper, we focus on motion estimation dedicated to non-holonomic ground robots, probabilistically fusing measurements from the wheel odometer and exteroceptive sensors.
no code implementations • 23 Nov 2017 • Xingxing Zuo, Xiaojia Xie, Yong Liu, Guoquan Huang
In this paper, we develop a robust and efficient visual SLAM system that utilizes heterogeneous point and line features.
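In point-and-line SLAM systems, a line observation typically contributes a residual measuring the distance from the projected 3D line's endpoints to the detected image line; a minimal sketch of that standard residual (hypothetical names, not the paper's code):

```python
import numpy as np

def line_residual(endpoint_a, endpoint_b, detected_line):
    """Signed distances (in pixels) of two projected line endpoints,
    given in homogeneous image coordinates (u, v, 1), to a detected
    2D line l = (a, b, c) satisfying a*u + b*v + c = 0."""
    l = np.asarray(detected_line, dtype=float)
    norm = np.hypot(l[0], l[1])  # normalize so distances are in pixels
    ra = float(l @ endpoint_a) / norm
    rb = float(l @ endpoint_b) / norm
    return np.array([ra, rb])

# Endpoint on the line u = 5 gives zero residual; one 3 px away gives 3.
line = np.array([1.0, 0.0, -5.0])  # u - 5 = 0
r = line_residual(np.array([5.0, 2.0, 1.0]),
                  np.array([8.0, 0.0, 1.0]), line)
```

Stacking such residuals alongside the usual point reprojection errors lets one optimizer refine poses and both feature types jointly.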