We present a novel method for simultaneous learning of depth, egomotion,
object motion, and camera intrinsics from monocular videos, using only
consistency across neighboring video frames as the supervision signal. Similarly to
prior work, our method learns by applying differentiable warping to frames and
comparing the result to adjacent ones, but it provides several improvements: We
address occlusions geometrically and differentiably, directly using the depth
maps as predicted during training.
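To make the warping-based supervision concrete, here is a minimal sketch of reprojecting a source frame into the target view using the predicted target depth, camera intrinsics, and relative pose, then scoring photometric consistency. All names and the nearest-neighbor sampling are illustrative simplifications, not the paper's implementation (which uses differentiable bilinear sampling):

```python
import numpy as np

def warp_photometric_loss(frame_src, frame_tgt, depth_tgt, K, R, t):
    """Warp frame_src into the target view using the target depth map and the
    relative pose (R, t), then compare with frame_tgt via an L1 photometric
    loss over valid pixels. Illustrative sketch only."""
    h, w = depth_tgt.shape
    K_inv = np.linalg.inv(K)
    # Target pixel grid in homogeneous coordinates, shape (3, h*w).
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    # Back-project target pixels to 3-D points, move them into the source frame.
    pts = (R @ (K_inv @ pix * depth_tgt.ravel())) + t[:, None]
    # Project into the source image plane.
    proj = K @ pts
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    # Valid-pixel mask; a geometric occlusion test would additionally compare
    # the reprojected depth proj[2] against the source-view depth.
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (proj[2] > 0)
    warped = np.zeros_like(frame_tgt)
    warped[ys.ravel()[valid], xs.ravel()[valid]] = frame_src[v[valid], u[valid]]
    return np.abs(warped - frame_tgt)[ys.ravel()[valid], xs.ravel()[valid]].mean()
```

With an identity pose and matching frames, the warp is the identity and the loss is zero; gradients of this loss with respect to depth, pose, and intrinsics are what drive the unsupervised training.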
We introduce randomized layer normalization,
a novel, powerful regularizer, and we account for object motion relative to the
scene. To the best of our knowledge, our work is the first to learn the camera
intrinsic parameters, including lens distortion, from video in an unsupervised
manner, thereby allowing us to extract accurate depth and motion from arbitrary
videos of unknown origin at scale. We evaluate our results on the Cityscapes,
KITTI and EuRoC datasets, establishing a new state of the art on depth prediction
and odometry, and demonstrate qualitatively that depth prediction can be
learned from a collection of YouTube videos.
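The randomized layer normalization mentioned above can be sketched as ordinary layer normalization whose per-sample statistics are perturbed by multiplicative Gaussian noise at training time. The specific noise scale (`noise_std`) and the exact placement of the noise here are assumptions for illustration:

```python
import numpy as np

def randomized_layer_norm(x, gamma, beta, training=True,
                          noise_std=0.5, eps=1e-5, rng=None):
    """Layer normalization over the last axis, with the mean and variance
    multiplied by random noise drawn around 1.0 during training, so each
    step normalizes with slightly different statistics. Sketch only;
    noise_std is an assumed value, not taken from the paper.
    x: (batch, features); gamma, beta: (features,)."""
    rng = np.random.default_rng() if rng is None else rng
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    if training:
        # Multiplicative noise on the statistics acts as a regularizer;
        # at evaluation time this reduces to standard layer normalization.
        mean = mean * rng.normal(1.0, noise_std, mean.shape)
        var = var * np.abs(rng.normal(1.0, noise_std, var.shape))
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

At evaluation time (`training=False`) the function behaves as standard layer normalization, producing per-sample outputs with approximately zero mean and unit variance before the affine transform.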