no code implementations • 26 Mar 2024 • Beibei Zhang, Tian Xiang, Chentao Mao, Yuhua Zheng, Shuai Li, Haoyi Niu, Xiangming Xi, Wenyuan Bai, Feng Gao
In this paper, we propose a two-stage approach to accelerate time-jerk optimal trajectory planning.
no code implementations • 28 Feb 2024 • Jianxiong Li, Jinliang Zheng, Yinan Zheng, Liyuan Mao, Xiao Hu, Sijie Cheng, Haoyi Niu, Jihao Liu, Yu Liu, Jingjing Liu, Ya-Qin Zhang, Xianyuan Zhan
Multimodal pretraining has emerged as an effective strategy for the trinity of goals of representation learning in autonomous robots: 1) extracting both local and global task progression information; 2) enforcing temporal consistency of visual representation; 3) capturing trajectory-level language grounding.
1 code implementation • 7 Feb 2024 • Haoyi Niu, Jianming Hu, Guyue Zhou, Xianyuan Zhan
Consequently, researchers often resort to data from easily accessible source domains, such as simulation and laboratory environments, for cost-effective data acquisition and rapid model iteration.
1 code implementation • 25 Sep 2023 • Haoyi Niu, Qimao Chen, Yingyue Li, Yi Zhang, Jianming Hu
The deployment of autonomous vehicles (AVs) has faced hurdles due to the dominance of rare but critical corner cases within the long-tail distribution of driving scenarios, which negatively affects their overall performance.
1 code implementation • 25 Sep 2023 • Haoyi Niu, Yizhou Xu, Xingjian Jiang, Jianming Hu
To tackle this challenge, a surge of research in scenario-based autonomous driving has emerged, with a focus on generating high-risk driving scenarios and applying them to conduct safety-critical testing of AV models.
no code implementations • 22 Sep 2023 • Haoyi Niu, Tianying Ji, Bingqi Liu, Haocheng Zhao, Xiangyu Zhu, Jianying Zheng, Pengfei Huang, Guyue Zhou, Jianming Hu, Xianyuan Zhan
Solving real-world complex tasks using reinforcement learning (RL) without high-fidelity simulation environments or large amounts of offline data can be quite challenging.
2 code implementations • 27 Feb 2023 • Haoyi Niu, Kun Ren, Yizhou Xu, Ziyuan Yang, Yichen Lin, Yi Zhang, Jianming Hu
Autonomous driving and its widespread adoption have long held tremendous promise.
no code implementations • 1 Jul 2022 • Wenjia Zhang, Haoran Xu, Haoyi Niu, Peng Cheng, Ming Li, Heming Zhang, Guyue Zhou, Xianyuan Zhan
In this paper, we propose the Discriminator-guided Model-based offline Imitation Learning (DMIL) framework, which introduces a discriminator to simultaneously distinguish the dynamics correctness and suboptimality of model rollout data against real expert demonstrations.
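The dual-purpose discriminator described above can be illustrated with a toy binary classifier that separates expert demonstrations from model rollout data. This is a minimal sketch, not the DMIL implementation: the synthetic data, feature dimensions, and training loop are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical stand-ins: expert transitions cluster around one dynamics
# mode, while model rollouts drift toward a shifted (less accurate) one.
expert = rng.normal(loc=0.0, scale=1.0, size=(200, 4))   # (s, a, s') features
rollout = rng.normal(loc=1.5, scale=1.0, size=(200, 4))

X = np.vstack([expert, rollout])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = expert, 0 = rollout

# Train a logistic discriminator D(s, a, s') with plain gradient descent.
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

# D's score can then be used to down-weight implausible or suboptimal
# rollout transitions when training the imitation policy.
acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

In the actual framework the discriminator is trained jointly with the dynamics model and policy; the sketch only shows the core distinguishing step.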
1 code implementation • 27 Jun 2022 • Haoyi Niu, Shubham Sharma, Yiwen Qiu, Ming Li, Guyue Zhou, Jianming Hu, Xianyuan Zhan
This brings up a new question: is it possible to combine learning from limited real data in offline RL and unrestricted exploration through imperfect simulators in online RL to address the drawbacks of both approaches?
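One simple way to frame that question is as a data-mixing problem: keep the limited real offline transitions and the unrestricted simulator rollouts in separate buffers and draw mixed mini-batches from both. The sketch below is an illustrative assumption, not the paper's method; the buffer contents and the fixed mixing ratio are placeholders.

```python
import random

random.seed(0)

# Hypothetical transition buffers: scarce real offline data versus
# abundant (but imperfect) simulator rollouts.
real_offline = [("real", i) for i in range(100)]
sim_online = [("sim", i) for i in range(10_000)]

def sample_mixed_batch(batch_size=32, real_fraction=0.5):
    """Draw a mini-batch mixing real offline and simulated transitions."""
    n_real = int(batch_size * real_fraction)
    batch = random.sample(real_offline, n_real)
    batch += random.sample(sim_online, batch_size - n_real)
    random.shuffle(batch)
    return batch

batch = sample_mixed_batch()
n_real = sum(1 for src, _ in batch if src == "real")
```

A fuller treatment would also reweight or filter simulator transitions by an estimate of the sim-to-real dynamics gap rather than mixing them uniformly.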
2 code implementations • 22 Oct 2021 • Guan Wang, Haoyi Niu, Desheng Zhu, Jianming Hu, Xianyuan Zhan, Guyue Zhou
Heated debates continue over the best autonomous driving framework.
no code implementations • 25 Jul 2021 • Haoyi Niu, Jianming Hu, Zheyu Cui, Yi Zhang
How to explore corner cases as efficiently and thoroughly as possible has long been one of the top concerns in deep reinforcement learning (DRL) based autonomous driving.
no code implementations • 9 Sep 2020 • Haoyi Niu, Jianming Hu, Zheyu Cui, Yi Zhang
The proposed approach shows that DRL can complement the rule-based avoidance strategy in generalization and, conversely, that the rule-based avoidance strategy can complement DRL in stability; their combination yields shorter response times, a lower collision rate, and smoother trajectories.
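The complementarity above can be illustrated with a simple arbitration scheme: a rule-based safety check overrides the learned policy whenever a collision is imminent, and the learned policy drives otherwise. All thresholds, the toy policy, and the time-to-collision rule below are illustrative assumptions, not the paper's controller.

```python
def rule_based_avoidance(gap_m, speed_mps):
    """Hard-coded fallback: brake when time-to-collision is short."""
    ttc = gap_m / max(speed_mps, 1e-6)  # time-to-collision in seconds
    return -3.0 if ttc < 2.0 else None  # brake hard, else defer to DRL

def drl_policy(gap_m, speed_mps):
    """Stand-in for a learned policy: accelerate toward a target speed."""
    return 0.5 * (15.0 - speed_mps)

def hybrid_controller(gap_m, speed_mps):
    """Rule-based check first (stability), DRL otherwise (generalization)."""
    override = rule_based_avoidance(gap_m, speed_mps)
    return override if override is not None else drl_policy(gap_m, speed_mps)

# Far from the lead vehicle: the learned policy drives (mild acceleration).
a_free = hybrid_controller(gap_m=100.0, speed_mps=10.0)
# Closing fast on a short gap: the rule takes over and brakes.
a_brake = hybrid_controller(gap_m=10.0, speed_mps=10.0)
```

The design choice is that the rule acts as a hard safety envelope, so the learned policy can generalize freely inside it without compromising stability.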