Semantic-driven Colorization

13 Jun 2020 · Man M. Ho, Lu Zhang, Alexander Raake, Jinjia Zhou

Recent colorization works implicitly predict semantic information while learning to colorize black-and-white images. As a consequence, the generated color overflows object boundaries more easily, and the model's semantic mistakes remain invisible. When a human colorizes a photo, the brain first detects and recognizes the objects in it, then imagines plausible colors for them based on many similar objects seen in real life, and finally colorizes them, as described in the teaser. In this study, we simulate that human-like process by letting our network first learn to understand the photo and then colorize it. Our work can therefore provide plausible colors at a semantic level, and the semantic information learned by the model becomes interpretable and open to interaction. Additionally, we show that Instance Normalization is a missing ingredient for colorization, and we re-design the inference flow of U-Net into two data streams, providing an appropriate way to normalize the feature maps from the black-and-white image and from its semantic map separately. As a result, our network provides plausible colors for specific objects, competitive with typical colorization works.
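To make the two-stream idea concrete, here is a minimal PyTorch sketch of an encoder that keeps the grayscale image and the semantic map in separate streams, each with its own Instance Normalization, before fusing them for decoding. The class names (`TwoStreamBlock`, `TwoStreamColorizer`), channel counts, fusion by concatenation, and the single-channel semantic-map input are illustrative assumptions for this sketch, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class TwoStreamBlock(nn.Module):
    """One encoder stage that keeps the grayscale and semantic streams
    separate, each with its own Instance Normalization, so the feature
    statistics of the two inputs are never mixed."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()

        def stream():
            return nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
                nn.InstanceNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )

        self.gray_stream = stream()  # features from the black-and-white image
        self.sem_stream = stream()   # features from the semantic map

    def forward(self, gray, sem):
        return self.gray_stream(gray), self.sem_stream(sem)


class TwoStreamColorizer(nn.Module):
    """Toy two-stream encoder followed by a fused decoder that predicts
    the two chrominance channels (e.g. ab in Lab color space).
    Channel sizes and depth are arbitrary choices for the sketch."""

    def __init__(self):
        super().__init__()
        self.enc1 = TwoStreamBlock(1, 32)   # both inputs assumed 1-channel
        self.enc2 = TwoStreamBlock(32, 64)
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 64, kernel_size=3, padding=1),  # 64 + 64 fused channels
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 2, kernel_size=3, padding=1),    # predict ab channels
            nn.Tanh(),
        )

    def forward(self, gray, sem):
        g, s = self.enc1(gray, sem)
        g, s = self.enc2(g, s)
        fused = torch.cat([g, s], dim=1)  # fuse streams only after normalization
        return self.decoder(fused)


# Usage: a 256x256 grayscale image plus a same-size semantic map.
gray = torch.randn(1, 1, 256, 256)
sem = torch.randn(1, 1, 256, 256)
ab = TwoStreamColorizer()(gray, sem)
print(ab.shape)  # torch.Size([1, 2, 256, 256])
```

The key design point the sketch illustrates is that each stream is normalized independently before fusion, so the instance statistics of the semantic map cannot distort those of the grayscale features.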
