Spiral Generative Network for Image Extrapolation

In this paper, motivated by the natural human ability to imaginatively perceive unseen surroundings, we propose a novel Spiral Generative Network, SpiralNet, which performs image extrapolation in a spiral manner, regarding extrapolation as an evolution process that grows from an input sub-image along a spiral curve into an expanded full image. SpiralNet, consisting of ImagineGAN and SliceGAN, disentangles the image extrapolation problem into two independent sub-tasks: semantic structure prediction (via ImagineGAN) and contextual detail generation (via SliceGAN), making the whole task more tractable. The design of SliceGAN implicitly harnesses the correlation between the generated content and the extrapolation direction, following a divide-and-conquer, generation-by-parts strategy. Extensive experiments on datasets covering both objects and scenes under different settings show that our method achieves state-of-the-art performance on image extrapolation. We also conduct an ablation study to validate the efficacy of our design. Our code is available at https://github.com/zhenglab/spiralnet.
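
To make the two-stage design concrete, the following is a minimal, hypothetical PyTorch sketch of how such a pipeline could be wired together. The module internals, names, signatures, and the spiral slicing schedule are assumptions for illustration only and are not the authors' implementation; see the linked repository for the actual code.

```python
# Hypothetical sketch of a SpiralNet-style two-stage pipeline (assumed
# interfaces, not the authors' code): stage 1 predicts a coarse structure
# map for the full canvas, stage 2 refines it step by step, conditioned on
# the current extrapolation direction of a spiral schedule.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImagineGAN(nn.Module):
    """Stage 1 (assumed): predict a coarse semantic-structure map for the
    full target canvas from the input sub-image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(  # placeholder encoder-decoder
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, sub_image, canvas_size):
        coarse = F.interpolate(sub_image, size=canvas_size,
                               mode="bilinear", align_corners=False)
        return self.net(coarse)


class SliceGAN(nn.Module):
    """Stage 2 (assumed): refine the canvas at one spiral step, conditioned
    on the structure map and a one-hot extrapolation-direction plane."""
    def __init__(self, num_directions=4):
        super().__init__()
        self.num_directions = num_directions
        self.net = nn.Conv2d(3 + 3 + num_directions, 3, 3, padding=1)

    def forward(self, canvas, structure, direction):
        b, _, h, w = canvas.shape
        dir_map = torch.zeros(b, self.num_directions, h, w,
                              device=canvas.device)
        dir_map[:, direction] = 1.0  # encode the current direction
        return self.net(torch.cat([canvas, structure, dir_map], dim=1))


def spiral_extrapolate(sub_image, imagine, slicer, steps=8):
    """Grow the sub-image outward: cycle through spiral directions
    (right, down, left, up, ...) and refine the canvas at each step.
    Simplified: the real method generates slice regions, not the full canvas."""
    canvas_size = (sub_image.shape[2] * 2, sub_image.shape[3] * 2)
    structure = imagine(sub_image, canvas_size)
    canvas = structure.clone()
    for step in range(steps):
        canvas = slicer(canvas, structure, direction=step % 4)
    return canvas
```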
