Estimating Image Depth in the Comics Domain

7 Oct 2021  ·  Deblina Bhattacharjee, Martin Everaert, Mathieu Salzmann, Sabine Süsstrunk

Estimating the depth of comics images is challenging because such images a) are monocular; b) lack ground-truth depth annotations; c) vary widely across artistic styles; and d) are sparse and noisy. We thus use an off-the-shelf unsupervised image-to-image translation method to translate the comics images into natural ones, and then use an attention-guided monocular depth estimator to predict their depth. This lets us leverage the depth annotations of existing natural images to train the depth estimator. Furthermore, our model learns to distinguish between text and images in the comics panels to reduce text-based artefacts in the depth estimates. Our method consistently outperforms the existing state-of-the-art approaches across all metrics on both the DCM and eBDtheque images. Finally, we introduce a dataset to evaluate depth prediction on comics. Our project website can be accessed at https://github.com/IVRL/ComicsDepth.
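
As a rough illustration of the two-stage pipeline described in the abstract, the sketch below chains a comics-to-natural image translator with an attention-guided depth estimator, assuming PyTorch. The ComicsToNaturalGenerator and AttentionDepthEstimator classes are hypothetical placeholders, not the authors' released architectures; the attention branch only hints at how text regions could be down-weighted.

# Minimal sketch of the two-stage inference pipeline, assuming PyTorch.
# Both networks are hypothetical stand-ins, NOT the paper's architectures.
import torch
import torch.nn as nn

class ComicsToNaturalGenerator(nn.Module):
    """Placeholder for the unsupervised comics -> natural-image translator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, comics_img):
        return self.net(comics_img)  # pseudo "natural" image

class AttentionDepthEstimator(nn.Module):
    """Placeholder for the attention-guided monocular depth network."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        )
        # Simple spatial attention: one weight map in [0, 1] per pixel,
        # which could learn to suppress text regions in the panels.
        self.attention = nn.Sequential(nn.Conv2d(32, 1, 1), nn.Sigmoid())
        self.depth_head = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, natural_img):
        feats = self.features(natural_img)
        attn = self.attention(feats)
        return self.depth_head(feats * attn)  # per-pixel depth map

if __name__ == "__main__":
    translator = ComicsToNaturalGenerator().eval()
    depth_net = AttentionDepthEstimator().eval()
    comics_panel = torch.rand(1, 3, 256, 256)    # dummy comics panel
    with torch.no_grad():
        natural = translator(comics_panel)       # stage 1: domain translation
        depth = depth_net(natural)               # stage 2: depth estimation
    print(depth.shape)                           # torch.Size([1, 1, 256, 256])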

Datasets

DCM, eBDtheque

Results from the Paper


Task              Dataset    Model                 Metric    Value  Global Rank
Depth Estimation  DCM        Bhattacharjee et al.  Abs Rel   0.251  #1
                                                   Sq Rel    0.318  #3
                                                   RMSE      0.971  #1
                                                   RMSE log  0.305  #3
Depth Estimation  eBDtheque  Bhattacharjee et al.  Abs Rel   0.376  #1
                                                   Sq Rel    0.448  #3
                                                   RMSE      1.364  #1
                                                   RMSE log  0.553  #3
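
For reference, the four error metrics reported above are conventionally computed as follows for monocular depth estimation. This is a generic NumPy sketch of the standard definitions, not evaluation code released with the paper, and the variable names are illustrative.

# Standard monocular-depth error metrics (lower is better), assuming NumPy
# arrays of predicted and ground-truth depth over valid pixels.
import numpy as np

def depth_errors(pred, gt, eps=1e-6):
    """Return Abs Rel, Sq Rel, RMSE and RMSE log."""
    pred = np.clip(pred, eps, None)
    gt = np.clip(gt, eps, None)
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    sq_rel = np.mean((pred - gt) ** 2 / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))
    return abs_rel, sq_rel, rmse, rmse_log

# Example with random depth maps:
rng = np.random.default_rng(0)
gt = rng.uniform(1.0, 10.0, size=(256, 256))
pred = gt * rng.uniform(0.8, 1.2, size=gt.shape)
print(depth_errors(pred, gt))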

Methods


No methods listed for this paper.