Revealing Disocclusions in Temporal View Synthesis through Infilling Vector Prediction

17 Oct 2021  ·  Vijayalakshmi Kanchana, Nagabhushan Somraj, Suraj Yadwad, Rajiv Soundararajan

We consider the problem of temporal view synthesis, where the goal is to predict a future video frame from past frames using knowledge of the depth and relative camera motion. In contrast to revealing disoccluded regions through intensity-based infilling, we study the idea of an infilling vector, which infills a disoccluded pixel by pointing to a non-disoccluded region in the synthesized view. To exploit the structure of disocclusions created by camera motion, we rely on two important cues: the temporal correlation of infilling directions, and depth. We design a learning framework that predicts the infilling vector from a temporal prior reflecting past infilling directions and a normalized depth map, both given as input to the network. We conduct extensive experiments on a large-scale dataset we built for evaluating temporal view synthesis, in addition to the SceneNet RGB-D dataset. Our experiments demonstrate that our infilling vector prediction approach achieves superior quantitative and qualitative infilling performance compared to other approaches in the literature.
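The core idea above can be sketched in a few lines: for each disoccluded pixel in the warped future frame, a predicted 2D infilling vector points to a non-disoccluded location, and the intensity there is copied in. This is a minimal NumPy sketch with a hypothetical interface (`infill_with_vectors` and its arguments are illustrative names); in the paper the vectors come from a learned network conditioned on the temporal prior and normalized depth, not from this function.

```python
import numpy as np

def infill_with_vectors(warped, disocclusion_mask, infill_vectors):
    """Fill disoccluded pixels by copying intensity from the location each
    infilling vector points to.

    warped            : (H, W) warped (synthesized) frame with holes
    disocclusion_mask : (H, W) bool, True where the pixel is disoccluded
    infill_vectors    : (H, W, 2) per-pixel (dx, dy) infilling vectors,
                        assumed here to be predicted by a network
    """
    h, w = disocclusion_mask.shape
    filled = warped.copy()
    ys, xs = np.nonzero(disocclusion_mask)
    # Source coordinates: disoccluded pixel plus its infilling vector,
    # clamped to image bounds (nearest-neighbour sampling for brevity).
    src_y = np.clip(np.round(ys + infill_vectors[ys, xs, 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + infill_vectors[ys, xs, 0]).astype(int), 0, w - 1)
    filled[ys, xs] = warped[src_y, src_x]
    return filled
```

Copying intensities from pointed-to locations (rather than hallucinating them) is what distinguishes this scheme from intensity-based infilling: every filled pixel takes a value that actually exists in the synthesized view.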


Datasets


Introduced in the Paper:

IISc VEED

Used in the Paper:

SceneNet RGB-D
Task                     Dataset         Model  Metric                Value    Rank
Temporal View Synthesis  IISc VEED       IVP    MSE                   0.47     #1
Temporal View Synthesis  IISc VEED       IVP    D-MSE                 442      #1
Temporal View Synthesis  IISc VEED       IVP    SSIM                  0.9262   #1
Temporal View Synthesis  IISc VEED       IVP    D-SSIM                0.7729   #1
Temporal View Synthesis  IISc VEED       IVP    Temporal Consistency  126      #1
Temporal View Synthesis  SceneNet RGB-D  IVP    D-MSE                 874      #1
Temporal View Synthesis  SceneNet RGB-D  IVP    D-SSIM                0.623    #1
Temporal View Synthesis  SceneNet RGB-D  IVP    Temporal Consistency  240      #1

Methods


No methods listed for this paper.