1 code implementation • 15 Aug 2020 • Igor Vasiljevic, Vitor Guizilini, Rares Ambrus, Sudeep Pillai, Wolfram Burgard, Greg Shakhnarovich, Adrien Gaidon
Self-supervised learning has emerged as a powerful tool for depth and ego-motion estimation, leading to state-of-the-art results on benchmark datasets.
no code implementations • 3 Aug 2020 • Kuan-Hui Lee, Matthew Kliemann, Adrien Gaidon, Jie Li, Chao Fang, Sudeep Pillai, Wolfram Burgard
In autonomous driving, accurately estimating the state of surrounding obstacles is critical for safe and robust path planning.
2 code implementations • ICLR 2020 • Jiexiong Tang, Hanme Kim, Vitor Guizilini, Sudeep Pillai, Rares Ambrus
By making the sampling of inlier-outlier sets from point-pair correspondences fully differentiable within the keypoint learning framework, we show that we are able to simultaneously self-supervise keypoint description and improve keypoint matching.
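The core idea above can be illustrated with a toy sketch: instead of a hard RANSAC-style inlier threshold, each correspondence receives a smooth weight based on its residual under a candidate transform, so gradients can flow back through the selection step. This is a hypothetical, simplified illustration (the function name, sigmoid gating, and 2D affine setup are assumptions, not the paper's exact formulation):

```python
import numpy as np

def soft_inlier_weights(src, dst, transform, threshold=1.0, temperature=0.1):
    """Differentiable relaxation of inlier/outlier selection.

    Rather than a hard threshold on the residual, each point-pair
    correspondence gets a smooth weight in (0, 1), making the
    "sampling" of inlier sets differentiable end-to-end.
    Hypothetical sketch -- not the authors' exact method.
    """
    # Apply the 2D affine part of a 3x3 transform to the source points.
    pred = src @ transform[:2, :2].T + transform[:2, 2]
    residuals = np.linalg.norm(pred - dst, axis=1)
    # Sigmoid gate: weight ~1 for residuals well below the threshold,
    # ~0 for residuals well above it; temperature controls sharpness.
    return 1.0 / (1.0 + np.exp((residuals - threshold) / temperature))

# Toy example: identity transform, one good match and one gross outlier.
src = np.array([[0.0, 0.0], [1.0, 1.0]])
dst = np.array([[0.05, 0.0], [5.0, 5.0]])
w = soft_inlier_weights(src, dst, np.eye(3))
# The first correspondence gets a weight near 1, the second near 0.
```

Because the weights are smooth functions of the residuals, a matching loss weighted by them can propagate gradients back to the keypoint descriptors themselves.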
1 code implementation • 7 Dec 2019 • Jiexiong Tang, Rares Ambrus, Vitor Guizilini, Sudeep Pillai, Hanme Kim, Patric Jensfelt, Adrien Gaidon
Detecting and matching robust viewpoint-invariant keypoints is critical for visual SLAM and Structure-from-Motion.
no code implementations • 4 Oct 2019 • Vitor Guizilini, Jie Li, Rares Ambrus, Sudeep Pillai, Adrien Gaidon
Dense depth estimation from a single image is a key problem in computer vision, with exciting applications in a multitude of robotic tasks.
no code implementations • 4 Oct 2019 • Rares Ambrus, Vitor Guizilini, Jie Li, Sudeep Pillai, Adrien Gaidon
Learning depth and camera ego-motion from raw unlabeled RGB video streams is seeing exciting progress through self-supervision from strong geometric cues.
no code implementations • 11 May 2019 • Sudeep Pillai, John Leonard
Place recognition is a critical component of robot navigation: it enables a robot to recognize previously visited locations and to use this information to correct the drift incurred in its dead-reckoned estimate.
4 code implementations • CVPR 2020 • Vitor Guizilini, Rares Ambrus, Sudeep Pillai, Allan Raventos, Adrien Gaidon
Although cameras are ubiquitous, robotic platforms typically rely on active sensors like LiDAR for direct 3D perception.
no code implementations • 3 Oct 2018 • Sudeep Pillai, Rares Ambrus, Adrien Gaidon
Both contributions provide significant performance gains over the state-of-the-art in self-supervised depth and pose estimation on the public KITTI benchmark.
no code implementations • 29 May 2017 • Sudeep Pillai, John J. Leonard
Many model-based Visual Odometry (VO) algorithms have been proposed in the past decade, often restricted by the type of camera optics or the underlying motion manifold observed.
no code implementations • 3 Nov 2015 • Sudeep Pillai, Srikumar Ramalingam, John J. Leonard
Traditional stereo algorithms have focused their efforts on reconstruction quality and have largely avoided prioritizing run-time performance.
no code implementations • 4 Jun 2015 • Sudeep Pillai, John Leonard
In this work, we develop a monocular SLAM-aware object recognition system that achieves considerably stronger recognition performance than classical object recognition systems that operate on a frame-by-frame basis.
no code implementations • CVPR 2015 • Srikumar Ramalingam, Michel Antunes, Dan Snow, Gim Hee Lee, Sudeep Pillai
We propose a simple and useful idea based on cross-ratio constraint for wide-baseline matching and 3D reconstruction.
1 code implementation • 5 Feb 2015 • Sudeep Pillai, Matthew R. Walter, Seth Teller
This paper describes a method by which a robot can acquire an object model by capturing depth imagery of the object as a human moves it through its range of motion.