Search Results for author: Haoyi Niu

Found 12 papers, 6 papers with code

Multi-Objective Trajectory Planning with Dual-Encoder

no code implementations • 26 Mar 2024 • Beibei Zhang, Tian Xiang, Chentao Mao, Yuhua Zheng, Shuai Li, Haoyi Niu, Xiangming Xi, Wenyuan Bai, Feng Gao

In this paper, we propose a two-stage approach to accelerate time-jerk optimal trajectory planning.

Trajectory Planning

DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning

no code implementations • 28 Feb 2024 • Jianxiong Li, Jinliang Zheng, Yinan Zheng, Liyuan Mao, Xiao Hu, Sijie Cheng, Haoyi Niu, Jihao Liu, Yu Liu, Jingjing Liu, Ya-Qin Zhang, Xianyuan Zhan

Multimodal pretraining has emerged as an effective strategy for the trinity of goals of representation learning in autonomous robots: 1) extracting both local and global task progression information; 2) enforcing temporal consistency of visual representation; 3) capturing trajectory-level language grounding.

Contrastive Learning • Decision Making • +1

A Comprehensive Survey of Cross-Domain Policy Transfer for Embodied Agents

1 code implementation • 7 Feb 2024 • Haoyi Niu, Jianming Hu, Guyue Zhou, Xianyuan Zhan

Consequently, researchers often resort to data from easily accessible source domains, such as simulation and laboratory environments, for cost-effective data acquisition and rapid model iteration.

Stackelberg Driver Model for Continual Policy Improvement in Scenario-Based Closed-Loop Autonomous Driving

1 code implementation • 25 Sep 2023 • Haoyi Niu, Qimao Chen, Yingyue Li, Yi Zhang, Jianming Hu

The deployment of autonomous vehicles (AVs) has faced hurdles due to the dominance of rare but critical corner cases within the long-tail distribution of driving scenarios, which negatively affects their overall performance.

Autonomous Driving

Continual Driving Policy Optimization with Closed-Loop Individualized Curricula

1 code implementation • 25 Sep 2023 • Haoyi Niu, Yizhou Xu, Xingjian Jiang, Jianming Hu

To tackle this challenge, a surge of research in scenario-based autonomous driving has emerged, with a focus on generating high-risk driving scenarios and applying them to conduct safety-critical testing of AV models.

Autonomous Driving

H2O+: An Improved Framework for Hybrid Offline-and-Online RL with Dynamics Gaps

no code implementations • 22 Sep 2023 • Haoyi Niu, Tianying Ji, Bingqi Liu, Haocheng Zhao, Xiangyu Zhu, Jianying Zheng, Pengfei Huang, Guyue Zhou, Jianming Hu, Xianyuan Zhan

Solving real-world complex tasks using reinforcement learning (RL) without high-fidelity simulation environments or large amounts of offline data can be quite challenging.

Offline RL • Reinforcement Learning (RL)

Discriminator-Guided Model-Based Offline Imitation Learning

no code implementations • 1 Jul 2022 • Wenjia Zhang, Haoran Xu, Haoyi Niu, Peng Cheng, Ming Li, Heming Zhang, Guyue Zhou, Xianyuan Zhan

In this paper, we propose the Discriminator-guided Model-based offline Imitation Learning (DMIL) framework, which introduces a discriminator to simultaneously distinguish the dynamics correctness and suboptimality of model rollout data against real expert demonstrations.

Imitation Learning

When to Trust Your Simulator: Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning

1 code implementation • 27 Jun 2022 • Haoyi Niu, Shubham Sharma, Yiwen Qiu, Ming Li, Guyue Zhou, Jianming Hu, Xianyuan Zhan

This brings up a new question: is it possible to combine learning from limited real data in offline RL and unrestricted exploration through imperfect simulators in online RL to address the drawbacks of both approaches?

Offline RL • reinforcement-learning • +1

DR2L: Surfacing Corner Cases to Robustify Autonomous Driving via Domain Randomization Reinforcement Learning

no code implementations • 25 Jul 2021 • Haoyi Niu, Jianming Hu, Zheyu Cui, Yi Zhang

How to explore corner cases as efficiently and thoroughly as possible has long been one of the top concerns in deep reinforcement learning (DeepRL)-based autonomous driving.

Autonomous Driving • reinforcement-learning • +1

Tactical Decision Making for Emergency Vehicles Based on A Combinational Learning Method

no code implementations • 9 Sep 2020 • Haoyi Niu, Jianming Hu, Zheyu Cui, Yi Zhang

The proposed approach reveals that DRL can complement a rule-based avoidance strategy in generalization, while conversely the rule-based strategy can complement DRL in stability; their combination leads to shorter response times, lower collision rates, and smoother trajectories.

Decision Making
