Autonomous driving is the task of enabling a vehicle to guide itself without human intervention.
(Image credit: AirSim)
Developing and testing algorithms for autonomous vehicles in the real world is an expensive and time-consuming process.
Despite the progress on monocular depth estimation in recent years, we show that the gap between monocular and stereo depth accuracy remains large: a particularly relevant result given the prevalent reliance on monocular cameras by vehicles that are expected to be self-driving.
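As a rough illustration of why this gap matters, the standard pinhole stereo relation Z = f·B/d makes stereo depth error grow roughly quadratically with range for a fixed disparity error. The sketch below uses illustrative KITTI-like values for the focal length and baseline; these numbers are assumptions for the example, not figures from the paper.

```python
import numpy as np

# Standard pinhole stereo relation: depth Z = f * B / d, where f is the
# focal length in pixels, B the stereo baseline in metres, and d the
# disparity in pixels. f and B below are illustrative KITTI-like values.
f = 721.5                          # focal length (px), assumed
B = 0.54                           # stereo baseline (m), assumed
d = np.array([50.0, 10.0, 5.0])    # disparities (px) at increasing range

Z = f * B / d                      # true depths
Z_noisy = f * B / (d - 0.5)        # same points with a 0.5 px disparity error

# The same 0.5 px error costs centimetres up close and metres at range:
for depth, err in zip(Z, Z_noisy - Z):
    print(f"depth {depth:6.1f} m -> error from 0.5 px disparity noise: {err:5.2f} m")
```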
We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios.
With the advent of autonomous vehicles, LiDAR and cameras have become an indispensable combination of sensors.
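A minimal sketch of the geometric step underlying most LiDAR-camera fusion: projecting 3D LiDAR points into the image so the two sensors can be associated. It assumes a known 4x4 extrinsic transform `T_cam_lidar` and a 3x3 intrinsic matrix `K` (both names hypothetical); this is generic projection code, not any specific paper's pipeline.

```python
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K):
    """Project Nx3 LiDAR points (in the LiDAR frame) onto the image plane.

    T_cam_lidar: 4x4 extrinsic transform from the LiDAR to the camera frame.
    K: 3x3 camera intrinsic matrix.
    Returns Mx2 pixel coordinates for the points in front of the camera.
    """
    # Homogeneous coordinates, then move the points into the camera frame.
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]

    # Keep only points with positive depth (in front of the camera).
    pts_cam = pts_cam[:, pts_cam[2] > 0.1]

    # Perspective projection through the intrinsics, then dehomogenize.
    uv = K @ pts_cam
    return (uv[:2] / uv[2]).T
```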
A detection algorithm that can cope with mislocalizations is therefore required in autonomous driving applications.
LiDAR odometry and mapping (LOAM) plays an important role in autonomous vehicles, owing to its ability to simultaneously estimate the robot's pose and build high-precision, high-resolution maps of the surrounding environment.
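The odometry half of this pipeline can be caricatured as repeated scan-to-scan registration: align each new LiDAR scan against the previous one and accumulate the resulting pose increments. The sketch below substitutes plain point-to-point ICP from Open3D for LOAM's actual edge/planar feature matching, so it illustrates the idea rather than the algorithm itself.

```python
import numpy as np
import open3d as o3d

def scan_to_scan_odometry(prev_scan, curr_scan, voxel=0.2):
    """Estimate the rigid motion between two consecutive LiDAR scans.

    prev_scan, curr_scan: Nx3 numpy arrays of points. Point-to-point ICP
    is a simplified stand-in for LOAM's feature-based matching.
    """
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(curr_scan))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(prev_scan))
    # Downsample for speed and robustness before registration.
    src = src.voxel_down_sample(voxel)
    tgt = tgt.voxel_down_sample(voxel)

    result = o3d.pipelines.registration.registration_icp(
        src, tgt,
        max_correspondence_distance=1.0,
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 pose increment

# Chaining the increments gives the vehicle trajectory:
#   pose = pose @ scan_to_scan_odometry(prev_scan, curr_scan)
```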
We then directly transfer this policy without any tuning to the University of Delaware Scaled Smart City (UDSSC), a 1:25 scale testbed for connected and automated vehicles.
To enable the study of the full diversity of traffic settings, we first propose to decompose traffic control tasks into modules, which may be configured and composed to create new control tasks of interest.
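A hedged sketch of what such a modular decomposition might look like in code: a task is assembled from independently configurable network, agent, and reward modules, and new tasks of interest come from recombining them. All class names and fields here are invented for illustration and are not the authors' API.

```python
from dataclasses import dataclass

# Hypothetical modules for composing traffic control tasks.
@dataclass
class NetworkModule:
    name: str                    # e.g. "ring", "intersection", "merge"
    num_lanes: int = 1

@dataclass
class AgentModule:
    controlled_fraction: float   # share of vehicles under learned control
    observation: str = "local"   # what each agent can observe

@dataclass
class RewardModule:
    objective: str = "average_speed"

@dataclass
class TrafficTask:
    network: NetworkModule
    agents: AgentModule
    reward: RewardModule

# New control tasks arise by recombining the same modules:
ring_task = TrafficTask(NetworkModule("ring"),
                        AgentModule(controlled_fraction=0.05),
                        RewardModule("average_speed"))
merge_task = TrafficTask(NetworkModule("merge", num_lanes=2),
                         AgentModule(controlled_fraction=0.10),
                         RewardModule("throughput"))
```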