Learning to Predict 3D Lane Shape and Camera Pose from a Single Image via Geometry Constraints

31 Dec 2021  ·  Ruijin Liu, Dapeng Chen, Tie Liu, Zhiliang Xiong, Zejian Yuan

Detecting 3D lanes from a camera image is an emerging problem for autonomous vehicles. In this task, an accurate camera pose is the key to generating correct lanes, because it allows the image to be transformed from the perspective view to the top view. This transformation removes perspective effects, so that 3D lanes look similar at all distances and can be accurately fitted by low-order polynomials. However, mainstream 3D lane detectors rely on perfect camera poses provided by other sensors, which is expensive and introduces multi-sensor calibration issues. To overcome this problem, we propose to predict 3D lanes by estimating the camera pose from a single image with a two-stage framework. The first stage estimates the camera pose from perspective-view images. To improve pose estimation, we introduce an auxiliary 3D lane task and geometry constraints to benefit from multi-task learning, which enforces consistency between 3D and 2D as well as compatibility between the two tasks. The second stage targets the 3D lane task: it uses the previously estimated pose to generate top-view images with distance-invariant lane appearances, from which accurate 3D lanes are predicted. Experiments demonstrate that, without ground-truth camera pose, our method outperforms state-of-the-art methods based on perfect camera poses while using the fewest parameters and computations. Code is available at https://github.com/liuruijin17/CLGo.
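
As a rough illustration of the geometry the abstract relies on (not the authors' implementation), the sketch below assumes a pinhole camera with known intrinsics K, a pose reduced to a pitch angle and a height above a flat ground plane, and hypothetical helper names; it warps lane pixels from the perspective view to the top view through the ground-plane homography and then fits each lane with a low-order polynomial.

```python
# Minimal sketch (hypothetical names, not the CLGo code): given a camera pose
# estimated in stage one, warp lane points to the top view and fit them with a
# low-order polynomial, as described in the abstract.
import numpy as np

def ground_to_image_homography(K, pitch, height):
    """Homography mapping ground-plane points (x, y, 1) to image pixels.

    Assumes (for this sketch) a pinhole camera with zero roll/yaw, rotated by
    `pitch` about its x-axis and mounted `height` above the flat ground plane.
    """
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,  -s],
                  [0.0,   s,   c]])        # rotation about the camera x-axis
    t = np.array([0.0, height, 0.0])       # camera offset from the ground plane
    # For points on the ground plane, the full projection collapses to a 3x3
    # homography built from the first two rotation columns and the translation.
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def lanes_to_top_view(lane_pixels, K, pitch, height):
    """Back-project 2D lane pixels (N x 2) onto the ground plane (top view)."""
    H_inv = np.linalg.inv(ground_to_image_homography(K, pitch, height))
    pts = np.column_stack((lane_pixels, np.ones(len(lane_pixels))))
    ground = (H_inv @ pts.T).T
    return ground[:, :2] / ground[:, 2:3]  # normalise homogeneous coordinates

def fit_low_order_polynomial(ground_pts, order=3):
    """Fit lateral offset x as a low-order polynomial of longitudinal y.

    In the top view the perspective effect is gone, so lanes look similar at
    all distances and a low-order fit suffices (assumption of this sketch).
    """
    x, y = ground_pts[:, 0], ground_pts[:, 1]
    return np.polyfit(y, x, order)
```

In the paper's two-stage framework, the pose driving this kind of warp is predicted by the first-stage network from the image itself rather than supplied by external sensors; the sketch only shows why an accurate pose matters for the top-view transform and the polynomial lane fit.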

Task: 3D Lane Detection
Dataset: Apollo Synthetic 3D Lane
Model: CLGo

Metric          Value    Global Rank
F1              91.9     #6
X error near    0.061    #6
X error far     0.361    #6
Z error near    0.029    #9
Z error far     0.25     #9

Methods


No methods listed for this paper.