Search Results for author: Yanbing Mao

Found 4 papers, 0 papers with code

Physical Deep Reinforcement Learning: Safety and Unknown Unknowns

no code implementations • 26 May 2023 • Hongpeng Cao, Yanbing Mao, Lui Sha, Marco Caccamo

In this paper, we propose Phy-DRL, a physics-model-regulated deep reinforcement learning framework for safety-critical autonomous systems.

reinforcement-learning
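The excerpt above describes Phy-DRL only at a high level. As a rough illustration, not the paper's implementation, the sketch below assumes a residual-action design in which a physics-model-based controller (here a fixed linear feedback gain) is combined with a learned DRL correction; `lqr_gain`, `drl_policy`, and the state layout are hypothetical placeholders.

```python
import numpy as np

def phy_drl_action(state: np.ndarray, lqr_gain: np.ndarray, drl_policy) -> np.ndarray:
    """Combine a model-based action with a learned residual (illustrative only)."""
    a_model = -lqr_gain @ state        # physics-model-based feedback action
    a_residual = drl_policy(state)     # data-driven residual from the DRL policy
    return a_model + a_residual        # regulated action applied to the system

# Example usage with a dummy policy:
state = np.array([0.1, -0.05])
lqr_gain = np.array([[1.2, 0.4]])
action = phy_drl_action(state, lqr_gain, lambda s: np.array([0.01]))
```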

Physical Deep Reinforcement Learning Towards Safety Guarantee

no code implementations • 29 Mar 2023 • Hongpeng Cao, Yanbing Mao, Lui Sha, Marco Caccamo

Deep reinforcement learning (DRL) has achieved tremendous success in many complex decision-making tasks of autonomous systems with high-dimensional state and/or action spaces.

Decision Making • reinforcement-learning

Phy-Taylor: Physics-Model-Based Deep Neural Networks

no code implementations • 27 Sep 2022 • Yanbing Mao, Lui Sha, Huajie Shao, Yuliang Gu, Qixin Wang, Tarek Abdelzaher

To do so, the PhN augments neural network layers with two key components: (i) monomials of Taylor series expansion of nonlinear functions capturing physical knowledge, and (ii) a suppressor for mitigating the influence of noise.
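As a rough sketch of the augmentation described above, and not the authors' code, the snippet below builds the expanded features for one layer: it first applies a simple suppressor that zeroes small-magnitude inputs, then appends all Taylor-series monomials of the inputs up to a chosen order. The function name and the thresholding form of the suppressor are assumptions.

```python
import itertools
import torch

def taylor_augment(x: torch.Tensor, order: int = 2, noise_threshold: float = 0.0) -> torch.Tensor:
    """Augment (batch, n) inputs with monomials up to `order` (illustrative sketch)."""
    # simple suppressor: zero out entries whose magnitude is below the threshold
    x = torch.where(x.abs() < noise_threshold, torch.zeros_like(x), x)
    feats = [x]
    n = x.shape[1]
    for degree in range(2, order + 1):
        # every monomial of this degree, e.g. x1*x2 and x1**2 for degree 2
        for idx in itertools.combinations_with_replacement(range(n), degree):
            feats.append(x[:, list(idx)].prod(dim=1, keepdim=True))
    return torch.cat(feats, dim=1)  # augmented input for the subsequent linear layer

# Example: a (batch=4, n=3) input expanded with degree-2 monomials -> 3 + 6 = 9 features
augmented = taylor_augment(torch.randn(4, 3), order=2, noise_threshold=0.05)
```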

SL1-Simplex: Safe Velocity Regulation of Self-Driving Vehicles in Dynamic and Unforeseen Environments

no code implementations • 4 Aug 2020 • Yanbing Mao, Yuliang Gu, Naira Hovakimyan, Lui Sha, Petros Voulgaris

Because vehicle dynamics depend strongly on the driving environment, the proposed Simplex leverages finite-time model learning to promptly learn and update the vehicle model for the $\mathcal{L}_{1}$ adaptive controller whenever a deviation from the safety envelope or the uncertainty measurement threshold occurs in unforeseen driving environments.

Autonomous Vehicles
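The switching behavior implied by the abstract can be pictured roughly as below; this is a hedged sketch of a generic Simplex-style monitor, not the paper's controller, and `in_safety_envelope`, `relearn_model`, and the two controller callables are hypothetical placeholders.

```python
def simplex_step(state, uncertainty, uncertainty_threshold,
                 in_safety_envelope, adaptive_controller, safe_controller,
                 relearn_model):
    """One decision step of a Simplex-style safety monitor (illustrative only)."""
    if not in_safety_envelope(state) or uncertainty > uncertainty_threshold:
        # Deviation from the safety envelope or excessive uncertainty:
        # trigger model re-learning and fall back to the verified safe
        # controller until an updated vehicle model is available.
        relearn_model()
        return safe_controller(state)
    # Nominal operation: use the high-performance adaptive controller.
    return adaptive_controller(state)
```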
