ZeroFlow: Scalable Scene Flow via Distillation

Scene flow estimation is the task of describing the 3D motion field between temporally successive point clouds. State-of-the-art methods use strong priors and test-time optimization techniques, but require on the order of tens of seconds to process full-size point clouds, making them unusable as computer vision primitives for real-time applications such as open-world object detection. Feedforward methods are considerably faster, running on the order of tens to hundreds of milliseconds for full-size point clouds, but require expensive human supervision. To address both limitations, we propose Scene Flow via Distillation, a simple, scalable distillation framework that uses a label-free optimization method to produce pseudo-labels to supervise a feedforward model. Our instantiation of this framework, ZeroFlow, achieves state-of-the-art performance on the Argoverse 2 Self-Supervised Scene Flow Challenge while using zero human labels, simply by training on large-scale, diverse unlabeled data. At test time, ZeroFlow is over 1000x faster than label-free state-of-the-art optimization-based methods on full-size point clouds (34 FPS vs. 0.028 FPS) and over 1000x cheaper to train on unlabeled data than the cost of human annotation (\$394 vs. ~\$750,000). To facilitate further research, we release our code, trained model weights, and high-quality pseudo-labels for the Argoverse 2 and Waymo Open datasets at https://vedder.io/zeroflow.html
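
The distillation recipe described above reduces to two stages: (1) run a slow, label-free test-time-optimization method offline over unlabeled point cloud pairs to produce pseudo-label flow fields, then (2) train a fast feedforward network on those pseudo-labels as if they were human annotations. A minimal PyTorch-style sketch of this pipeline follows; the function names, teacher/student interfaces, loss, and hyperparameters are all illustrative placeholders, not the authors' actual implementation:

```python
import torch

def generate_pseudo_labels(teacher_optimize, unlabeled_pairs):
    """Stage 1: run a label-free, test-time-optimization scene flow
    method offline over unlabeled point cloud pairs. Slow (seconds
    per frame), but fully automatic -- no human labels required."""
    pseudo_labels = []
    for pc_t, pc_t1 in unlabeled_pairs:
        flow = teacher_optimize(pc_t, pc_t1)            # per-pair optimization
        pseudo_labels.append((pc_t, pc_t1, flow.detach()))
    return pseudo_labels

def train_student(student, pseudo_labels, epochs=50, lr=1e-4):
    """Stage 2: supervise a fast feedforward scene flow network on the
    teacher's pseudo-labels, exactly as if they were ground truth."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for pc_t, pc_t1, flow_target in pseudo_labels:
            pred = student(pc_t, pc_t1)                 # (N, 3) per-point flow
            loss = (pred - flow_target).norm(dim=-1).mean()  # mean endpoint error
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

The key design point is that all the expense is front-loaded into stage 1, which runs once and offline; at deployment time only the feedforward student runs, which is where the reported 1000x test-time speedup comes from.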

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Self-supervised Scene Flow Estimation | Argoverse 2 | ZeroFlow | EPE 3-Way | 0.0814 | #3 |
| Self-supervised Scene Flow Estimation | Argoverse 2 | ZeroFlow | EPE Foreground Dynamic | 0.2109 | #3 |
| Self-supervised Scene Flow Estimation | Argoverse 2 | ZeroFlow | EPE Foreground Static | 0.0254 | #1 |
| Self-supervised Scene Flow Estimation | Argoverse 2 | ZeroFlow | Dynamic IoU | 0.4791 | #3 |
| Scene Flow Estimation | Argoverse 2 | ZeroFlow 1x | EPE 3-Way | 0.0814 | #4 |
| Scene Flow Estimation | Argoverse 2 | ZeroFlow 1x | EPE Foreground Dynamic | 0.2109 | #4 |
| Scene Flow Estimation | Argoverse 2 | ZeroFlow 1x | EPE Foreground Static | 0.0254 | #3 |
| Scene Flow Estimation | Argoverse 2 | ZeroFlow 1x | Dynamic IoU | 0.4791 | #3 |
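
For reference, EPE (endpoint error) is the mean L2 distance between predicted and ground-truth per-point flow vectors, and the 3-Way variant averages EPE over foreground-dynamic, foreground-static, and background buckets so the dominant static points cannot wash out errors on moving objects. A minimal numpy sketch, assuming the bucket masks are supplied by the benchmark's annotations:

```python
import numpy as np

def epe(pred, gt):
    """Mean endpoint error between predicted and ground-truth
    per-point flow vectors (both (N, 3) arrays)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def epe_3way(pred, gt, fg_dynamic_mask, fg_static_mask, bg_mask):
    """3-Way EPE: average the per-bucket EPEs. The three boolean
    masks (assumed inputs) partition the points into foreground
    dynamic, foreground static, and background."""
    buckets = [fg_dynamic_mask, fg_static_mask, bg_mask]
    return np.mean([epe(pred[m], gt[m]) for m in buckets])
```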
