Autonomous driving is the task of driving a vehicle without human intervention.
(Image credit: Exploring the Limitations of Behavior Cloning for Autonomous Driving)
A car driver knows how to react to the gestures of traffic officers.
The problem of tracking self-motion as well as the motion of objects in the scene using information from a camera is known as multi-body visual odometry and is a challenging task.
The Polylidar3D front-end transforms input data into a half-edge triangular mesh.
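A half-edge representation stores each triangle edge as two directed "half-edges" so that adjacency queries (neighboring triangle, next edge around a face) are constant-time pointer lookups. The following is a minimal sketch of that data structure, not Polylidar3D's actual implementation; all names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class HalfEdge:
    origin: int   # index of the vertex this half-edge starts from
    twin: int     # index of the opposite half-edge (-1 on the mesh boundary)
    next: int     # index of the next half-edge around the same triangle

class HalfEdgeTriMesh:
    """Minimal half-edge triangular mesh: each triangle contributes three
    half-edges linked in a cycle; edges shared by two triangles are twinned."""

    def __init__(self, triangles):
        self.half_edges = []
        edge_map = {}  # (origin, dest) -> half-edge index, for twin lookup
        for tri in triangles:
            base = len(self.half_edges)
            for i in range(3):
                o, d = tri[i], tri[(i + 1) % 3]
                he = HalfEdge(origin=o, twin=-1, next=base + (i + 1) % 3)
                self.half_edges.append(he)
                edge_map[(o, d)] = base + i
                # link with the opposite half-edge if it already exists
                if (d, o) in edge_map:
                    j = edge_map[(d, o)]
                    he.twin = j
                    self.half_edges[j].twin = base + i

# Two triangles sharing the edge between vertices 1 and 2
mesh = HalfEdgeTriMesh([(0, 1, 2), (2, 1, 3)])
twinned = [h for h in mesh.half_edges if h.twin != -1]
print(len(mesh.half_edges), len(twinned))  # 6 half-edges, 2 of them twinned
```

The twin pointers are what make operations like plane segmentation cheap: walking from a triangle to its neighbor is a single index dereference rather than a search.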
In this paper, we propose a novel lane-sensitive architecture search framework named CurveLane-NAS to automatically capture both long-range coherent lane information and accurate short-range curve information, while unifying architecture search and post-processing of curve lane predictions via point blending.
We present a simple and flexible object detection framework optimized for autonomous driving.
To obtain clear street views and photo-realistic simulation for autonomous driving, we present an automatic video inpainting algorithm that removes traffic agents from videos and synthesizes the missing regions with the guidance of depth/point cloud data.
For instance, on LeNet-5 (with $100\%$ and $99.49\%$ accuracies on the training and test sets), WJSMA and TJSMA respectively exceed $97\%$ and $98.60\%$ success rates for a maximum authorised distortion of $14.5\%$, outperforming JSMA by more than $9.5$ and $11$ percentage points.
To address driving in near-accident scenarios, we propose a hierarchical reinforcement and imitation learning (H-ReIL) approach that consists of low-level policies learned by IL for discrete driving modes, and a high-level policy learned by RL that switches between different driving modes.
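The core of such a hierarchical scheme is that an RL-trained high-level policy selects a discrete mode, and the corresponding IL-trained low-level policy produces the continuous action. The sketch below illustrates only this control flow; the policy names, observation fields, and action values are hypothetical stand-ins, not the H-ReIL implementation.

```python
# Hypothetical low-level "IL policies", one per discrete driving mode.
def aggressive_policy(obs):
    # Illustrative behavior: drive at speed when the road is clear.
    return {"throttle": 0.9, "brake": 0.0}

def cautious_policy(obs):
    # Illustrative behavior: slow down, brake if an obstacle is near.
    return {"throttle": 0.2, "brake": 0.5 if obs["obstacle_near"] else 0.0}

LOW_LEVEL_POLICIES = [aggressive_policy, cautious_policy]

def high_level_policy(obs):
    """Stand-in for the RL-trained switcher: map the observation to a
    mode index into LOW_LEVEL_POLICIES (here, a hand-written rule)."""
    return 1 if obs["obstacle_near"] else 0

def act(obs):
    mode = high_level_policy(obs)          # RL picks the driving mode
    return LOW_LEVEL_POLICIES[mode](obs)   # IL policy picks the action

print(act({"obstacle_near": True}))   # cautious mode: brakes applied
print(act({"obstacle_near": False}))  # aggressive mode: full throttle
```

The appeal of this decomposition is that each low-level policy only has to imitate one coherent driving style, while the switcher learns the comparatively small decision of when each style applies.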
Further, with the use of high-fidelity driving simulators and real-world datasets, we demonstrate how parameters of 2D and 3D occupancy maps can be automatically adapted to accord with local spatial changes.