Auto4D: Learning to Label 4D Objects from Sequential Point Clouds

17 Jan 2021 · Bin Yang, Min Bai, Ming Liang, Wenyuan Zeng, Raquel Urtasun

In the past few years we have seen great advances in object perception (particularly in 4D space-time) thanks to deep learning methods. However, these methods typically rely on large amounts of high-quality labels to achieve good performance, and producing such labels requires time-consuming and expensive work by human annotators. To address this, we propose an automatic annotation pipeline that generates accurate object trajectories in 3D space (i.e., 4D labels) from LiDAR point clouds. The key idea is to decompose the 4D object label into two parts: the 3D object size, which is fixed over time for rigid objects, and the motion path, which describes the evolution of the object's pose over time. Instead of generating a series of labels in one shot, we adopt an iterative refinement process in which object detections generated online and tracked through time serve as the initialization. Given this cheap but noisy input, our model produces higher-quality 4D labels by re-estimating the object size and smoothing the motion path, exploiting aggregated observations and motion cues over the entire trajectory. We validate the proposed method on a large-scale driving dataset and show a 25% reduction in human annotation effort. We also showcase the benefits of our approach in the annotator-in-the-loop setting.
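
To make the size/path decomposition concrete, below is a minimal, self-contained sketch in Python. All names (`Pose`, `Object4DLabel`, `refine`, etc.) and the simple geometric estimators are hypothetical stand-ins: the paper uses learned network branches for size re-estimation and motion-path refinement, not the placeholder heuristics shown here.

```python
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class Pose:
    """Object pose at one timestamp."""
    center: np.ndarray  # (3,) object center in the world frame
    heading: float      # yaw angle in radians


@dataclass
class Object4DLabel:
    """4D label decomposed as in the abstract: one 3D size shared across
    the whole trajectory (valid for rigid objects) plus a motion path."""
    size: np.ndarray    # (3,) length, width, height -- fixed over time
    path: List[Pose]    # one pose per observed frame


def to_object_frame(points: np.ndarray, pose: Pose) -> np.ndarray:
    """Transform world-frame points (N, 3) into the object's local frame."""
    c, s = np.cos(-pose.heading), np.sin(-pose.heading)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return (points - pose.center) @ rot.T


def estimate_size(label: Object4DLabel, frames: List[np.ndarray]) -> np.ndarray:
    """Re-estimate one shared box size from observations aggregated over
    the entire trajectory (placeholder: extents of the pooled points)."""
    pooled = np.vstack([to_object_frame(pts, pose)
                        for pts, pose in zip(frames, label.path)])
    return pooled.max(axis=0) - pooled.min(axis=0)


def smooth_path(path: List[Pose], window: int = 3) -> List[Pose]:
    """Smooth the motion path (placeholder: moving average over centers)."""
    window = min(window, len(path))
    centers = np.stack([p.center for p in path])
    kernel = np.ones(window) / window
    smoothed = np.stack([np.convolve(centers[:, i], kernel, mode="same")
                         for i in range(3)], axis=1)
    return [Pose(c, p.heading) for c, p in zip(smoothed, path)]


def refine(init: Object4DLabel, frames: List[np.ndarray],
           num_iters: int = 2) -> Object4DLabel:
    """Iterative refinement: starting from noisy tracked detections,
    alternately re-estimate the fixed size and smooth the motion path."""
    label = init
    for _ in range(num_iters):
        label = Object4DLabel(size=estimate_size(label, frames),
                              path=smooth_path(label.path))
    return label
```

The property the sketch preserves is the one the abstract emphasizes: `size` is estimated once from observations pooled over all frames, while each per-frame pose is only smoothed, so evidence accumulated along the trajectory constrains the box even in frames with sparse LiDAR returns.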
