NERDS 360 (NeRF for Reconstruction, Decomposition and Scene Synthesis of 360° outdoor scenes)

Introduced by Irshad et al. in NeO 360: Neural Fields for Sparse View Synthesis of Outdoor Scenes

We present a large-scale dataset for 3D urban scene understanding. Compared to existing datasets, ours consists of 75 outdoor urban scenes with diverse backgrounds, encompassing over 15,000 images. These scenes offer 360° hemispherical views, capturing diverse foreground objects illuminated under various lighting conditions. Additionally, the dataset includes scenes that are not limited to forward-driving views, addressing limitations of previous datasets such as limited overlap and coverage between camera views. The closest pre-existing dataset for generalizable evaluation is DTU [2] (80 scenes), which comprises mostly indoor objects and provides neither multiple foreground objects nor background scenes.

We use the Parallel Domain synthetic data generation platform to render high-fidelity 360° scenes. We select three different maps, i.e. SF 6thAndMission, SF GrantAndCalifornia, and SF VanNessAveAndTurkSt, and sample 75 different scenes across these maps as our backgrounds (all 75 scenes are significantly different road scenes from each other, captured at different viewpoints in the city). We select 20 different cars in 50 different textures for training and randomly sample 1 to 4 cars to render in each scene. We refer to this dataset as NeRDS 360: NeRF for Reconstruction, Decomposition and Scene Synthesis of 360° outdoor scenes. In total, we generate 15k renderings by sampling 200 cameras in a hemispherical dome at a fixed distance from the center of the cars. We hold out 5 scenes with 4 different cars and different backgrounds for testing, each comprising 100 cameras uniformly sampled in the upper hemisphere, with a distribution different from the camera distribution used for training. We use this diverse validation camera distribution to test our approach's ability to generalize to viewpoints as well as scenes unseen during training. Our dataset and the corresponding task are extremely challenging due to occlusions, the diversity of backgrounds, and objects rendered under various lighting and shadows.
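To make the camera setup concrete, below is a minimal sketch of sampling 200 camera poses uniformly on an upper hemisphere at a fixed radius, each looking at the scene center. This is illustrative only; the radius, pose convention, and function names are assumptions, not the dataset's actual generation code.

```python
import numpy as np

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """Build a 4x4 camera-to-world matrix looking from `eye` toward `target`."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    c2w = np.eye(4)
    c2w[:3, 0] = right
    c2w[:3, 1] = true_up
    c2w[:3, 2] = -forward  # OpenGL-style convention: camera looks down -z
    c2w[:3, 3] = eye
    return c2w

def sample_hemisphere_cameras(n=200, radius=10.0, seed=0):
    """Uniformly sample `n` camera positions on the upper hemisphere.

    Drawing z ~ U(0, 1) and azimuth phi ~ U(0, 2*pi) yields a uniform
    distribution over the upper half of the unit sphere's surface.
    """
    rng = np.random.default_rng(seed)
    z = rng.uniform(0.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    r_xy = np.sqrt(1.0 - z**2)
    dirs = np.stack([r_xy * np.cos(phi), r_xy * np.sin(phi), z], axis=-1)
    return [look_at(radius * d) for d in dirs]

poses = sample_hemisphere_cameras()  # 200 camera-to-world matrices
```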

Our task entails reconstructing 360° hemispherical views of complete scenes from a handful of observations, i.e. 1 to 5 source views (shown as red cameras in the paper's figures), while evaluating on all 100 hemispherical views; hence the task requires strong priors for novel view synthesis of outdoor scenes.
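The few-shot protocol can be summarized in code. The sketch below conditions a generalizable model on a few source views and scores renderings against all 100 held-out hemispherical views with PSNR. The `model.render(...)` interface and the `scene` container are hypothetical stand-ins, not NeO 360's actual API.

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val**2 / mse)

def evaluate_few_shot(model, scene, num_source_views=3, seed=0):
    """Condition `model` on a few source views, then score all target views.

    `model.render(source_images, source_poses, target_pose)` is a
    hypothetical interface assumed for illustration.
    """
    rng = np.random.default_rng(seed)
    src_ids = rng.choice(len(scene.train_views), num_source_views, replace=False)
    src_imgs = [scene.train_views[i].image for i in src_ids]
    src_poses = [scene.train_views[i].pose for i in src_ids]
    scores = [
        psnr(model.render(src_imgs, src_poses, view.pose), view.image)
        for view in scene.eval_views  # all 100 hemispherical test cameras
    ]
    return float(np.mean(scores))
```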

Tasks our dataset supports (a minimal annotation-loading sketch follows the list):

  • Generalizable novel view synthesis (few-shot evaluation)
  • Novel view synthesis (overfitting evaluation)
  • 6D pose estimation
  • Object editing
  • Depth estimation
  • Semantic segmentation
  • Instance segmentation
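Supporting these tasks requires per-frame RGB, depth, semantic and instance masks, and camera poses. The sketch below shows one way such annotations might be loaded per frame; the directory layout and file formats here are assumptions for illustration, not the dataset's documented structure.

```python
from dataclasses import dataclass
from pathlib import Path

import numpy as np
from PIL import Image

@dataclass
class Frame:
    rgb: np.ndarray        # (H, W, 3) uint8 color image
    depth: np.ndarray      # (H, W) float32 metric depth
    semantic: np.ndarray   # (H, W) per-pixel class ids
    instance: np.ndarray   # (H, W) per-pixel instance ids
    pose: np.ndarray       # (4, 4) camera-to-world matrix

def load_frame(scene_dir: Path, idx: int) -> Frame:
    """Load one annotated frame; the layout below is a hypothetical example."""
    stem = f"{idx:05d}"
    return Frame(
        rgb=np.asarray(Image.open(scene_dir / "rgb" / f"{stem}.png")),
        depth=np.load(scene_dir / "depth" / f"{stem}.npy"),
        semantic=np.asarray(Image.open(scene_dir / "semantic" / f"{stem}.png")),
        instance=np.asarray(Image.open(scene_dir / "instance" / f"{stem}.png")),
        pose=np.load(scene_dir / "pose" / f"{stem}.npy"),
    )
```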
