SimAug: Learning Robust Representations from 3D Simulation for Pedestrian Trajectory Prediction in Unseen Cameras

4 Apr 2020  ·  Junwei Liang, Lu Jiang, Alexander Hauptmann

This paper focuses on the problem of predicting future trajectories of people in unseen scenarios and camera views. We propose a method to efficiently utilize multi-view 3D simulation data for training. Our approach finds the hardest camera view and mixes it up with adversarial data from the original camera view during training, enabling the model to learn robust representations that generalize to unseen camera views. We refer to our method as SimAug. We show that SimAug achieves the best results on three out-of-domain real-world benchmarks, and achieves state-of-the-art performance on the Stanford Drone and VIRAT/ActEV datasets when in-domain training data is used. We will release our models and code.
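The sketch below illustrates the augmentation idea described in the abstract: select the hardest simulated camera view (the one with the largest loss), perturb the original-view input adversarially, and mix the two before training. It is a minimal, hypothetical illustration, not the authors' released implementation; the function name `simaug_step`, the FGSM-style perturbation, the fixed mixing weight, and the tensor shapes are all assumptions made for clarity.

```python
# Hypothetical sketch of a SimAug-style training step (PyTorch).
# Assumptions: `model` maps a per-view feature (B, D) to a trajectory
# prediction, `criterion` is a trajectory loss such as nn.MSELoss(),
# and view index 0 is the original camera view.
import torch
import torch.nn as nn


def simaug_step(model: nn.Module,
                view_feats: torch.Tensor,   # (num_views, B, D); index 0 = original view
                targets: torch.Tensor,      # ground-truth future trajectories
                criterion,                  # e.g. nn.MSELoss()
                epsilon: float = 0.1,       # FGSM step size (assumed value)
                mix_alpha: float = 0.5):    # mixing weight (assumed value)
    num_views = view_feats.size(0)

    # 1) Find the hardest camera view: the one with the largest loss.
    with torch.no_grad():
        losses = torch.stack([criterion(model(view_feats[v]), targets)
                              for v in range(num_views)])
    hardest = int(losses.argmax())

    # 2) Adversarially perturb the original-view feature (FGSM-style).
    orig = view_feats[0].clone().requires_grad_(True)
    grad = torch.autograd.grad(criterion(model(orig), targets), orig)[0]
    adv_orig = (orig + epsilon * grad.sign()).detach()

    # 3) Mix the hardest-view feature with the perturbed original feature.
    mixed = mix_alpha * view_feats[hardest] + (1.0 - mix_alpha) * adv_orig

    # 4) Train on the mixed representation.
    return criterion(model(mixed), targets)
```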

Results from the Paper


Task                   Dataset          Model   Metric           Value   Global Rank
Trajectory Prediction  ActEV            SimAug  ADE-8/12         17.96   #2
Trajectory Prediction  ActEV            SimAug  FDE-8/12         34.68   #2
Trajectory Prediction  Stanford Drone   SimAug  ADE-8/12 @K=20   10.27   #8
Trajectory Prediction  Stanford Drone   SimAug  FDE-8/12 @K=20   19.71   #8