no code implementations • ICCV 2023 • Lei Lai, Zhongkai Shangguan, Jimuyang Zhang, Eshed Ohn-Bar
We propose XVO, a semi-supervised learning method for training generalized monocular Visual Odometry (VO) models with robust off-the-shelf operation across diverse datasets and settings.
no code implementations • 21 Sep 2023 • Ruizhao Zhu, Peng Huang, Eshed Ohn-Bar, Venkatesh Saligrama
Human drivers can seamlessly adapt their driving decisions across geographical locations with diverse conditions and rules of the road, e.g., left- vs. right-hand traffic.
1 code implementation • CVPR 2023 • Jimuyang Zhang, Zanming Huang, Eshed Ohn-Bar
We propose a novel knowledge distillation framework for effectively teaching a sensorimotor student agent to drive from the supervision of a privileged teacher agent.
Ranked #6 on CARLA longest6
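As a minimal illustration of the general idea behind privileged distillation (not the paper's actual framework or loss), a student policy can be trained to match the softened action distribution of a teacher that sees privileged state. The function names and temperature value below are illustrative assumptions:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over action logits."""
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence KL(teacher || student) over softened action distributions.

    The teacher is assumed to have privileged inputs (e.g., ground-truth
    state in simulation); the student only sees raw sensor observations.
    """
    p = softmax(teacher_logits, temperature)  # privileged teacher targets
    q = softmax(student_logits, temperature)  # sensorimotor student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# When the student matches the teacher exactly, the loss vanishes.
print(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
```

Minimizing this KL term pushes the student's action distribution toward the teacher's, which is the core supervision signal in teacher-student driving pipelines.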
no code implementations • CVPR 2022 • Jimuyang Zhang, Ruizhao Zhu, Eshed Ohn-Bar
However, it is difficult to directly leverage such large amounts of unlabeled and highly diverse data for complex 3D reasoning and planning tasks.
no code implementations • CVPR 2021 • Jimuyang Zhang, Eshed Ohn-Bar
When in a new situation or geographical location, human drivers have an extraordinary ability to watch others and learn maneuvers that they themselves may have never performed.
no code implementations • ICCV 2021 • Jimuyang Zhang, Minglan Zheng, Matthew Boyd, Eshed Ohn-Bar
We tackle inherent data scarcity by leveraging a simulation environment to spawn dynamic agents with various mobility aids.
no code implementations • CVPR 2020 • Eshed Ohn-Bar, Aditya Prakash, Aseem Behl, Kashyap Chitta, Andreas Geiger
Motivated by this observation, we develop a framework for learning a situational driving policy that effectively captures reasoning under varying types of scenarios.
3 code implementations • 20 May 2020 • Aseem Behl, Kashyap Chitta, Aditya Prakash, Eshed Ohn-Bar, Andreas Geiger
Beyond label efficiency, we find several additional training benefits when leveraging visual abstractions, such as a significant reduction in the variance of the learned policy when compared to state-of-the-art end-to-end driving models.
1 code implementation • 21 Mar 2019 • Aashi Manglik, Xinshuo Weng, Eshed Ohn-Bar, Kris M. Kitani
Our results show that our proposed multi-stream CNN is the best model for predicting time to near-collision.
Robotics
no code implementations • 22 Jun 2018 • Xinlei Pan, Eshed Ohn-Bar, Nicholas Rhinehart, Yan Xu, Yilin Shen, Kris M. Kitani
The learning process is interactive, with a human expert first providing input in the form of full demonstrations along with some subgoal states.
no code implementations • 11 Apr 2018 • Eshed Ohn-Bar, Kris Kitani, Chieko Asakawa
Consider an assistive system that guides visually impaired users through speech and haptic feedback to their destination.
no code implementations • 22 Feb 2018 • Siddharth, Akshay Rangesh, Eshed Ohn-Bar, Mohan M. Trivedi
This work addresses the task of accurately localizing driver hands and classifying the grasp state of each hand.
no code implementations • 6 Jan 2017 • Eshed Ohn-Bar, Mohan M. Trivedi
We aim to study the modeling limitations of the commonly employed boosted decision tree classifier.
Ranked #33 on Face Detection on WIDER Face (Medium)
no code implementations • 14 May 2015 • Eshed Ohn-Bar, Mohan M. Trivedi
This study aims to analyze the benefits of improved multi-scale reasoning for object detection and localization with deep convolutional neural networks.
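As a toy sketch of what multi-scale reasoning builds on (not the study's actual detector), a fixed-size detector can be run over an image pyramid so that objects of different sizes fall within its receptive field at some scale. The helper below uses nearest-neighbor striding for brevity; the scale factors are illustrative assumptions:

```python
import numpy as np

def image_pyramid(img, scales=(1.0, 0.5, 0.25)):
    """Build a coarse image pyramid by nearest-neighbor subsampling.

    A real detection pipeline would use proper anti-aliased resizing;
    integer striding keeps this sketch dependency-free.
    """
    levels = []
    for s in scales:
        step = int(round(1.0 / s))  # e.g., scale 0.5 -> keep every 2nd pixel
        levels.append(img[::step, ::step])
    return levels

# An 8x8 image yields 8x8, 4x4, and 2x2 levels; a detector with a fixed
# window then effectively scans for objects at three different sizes.
shapes = [lvl.shape for lvl in image_pyramid(np.zeros((8, 8)))]
print(shapes)
```

Scanning each level with the same fixed window is the classical alternative to building scale awareness into the network itself, which is the trade-off such studies examine.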
no code implementations • 12 Mar 2015 • Eshed Ohn-Bar, Mohan M. Trivedi
This paper studies efficient means for dealing with intra-category diversity in object detection.