This paper studies a combination of generative Markov random field (MRF) models and discriminatively trained deep convolutional neural networks (dCNNs) for synthesizing 2D images.
The recent work of Gatys et al., who characterized the style of an image by the statistics of convolutional neural network filters, ignited a renewed interest in the texture generation and image stylization problems.
Here we introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition.
We demonstrate that this conceptually simple approach is highly effective for capturing large-scale structures, as well as other non-stationary attributes of the input exemplar.
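The feature statistics referred to above are, in Gatys et al.'s formulation, Gram matrices of convolutional filter responses. A minimal sketch of that statistic (function name, array shapes, and the random features are illustrative, not taken from the papers):

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlations of CNN filter responses.

    features: array of shape (C, H, W), the responses of one conv layer.
    Returns a (C, C) Gram matrix: the spatially averaged second-order
    statistic used to characterize the style/texture of an image.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten spatial positions
    return f @ f.T / (h * w)         # correlate channels, average over space

# Illustrative input standing in for real VGG feature maps.
rng = np.random.default_rng(0)
feats = rng.standard_normal((64, 32, 32))
g = gram_matrix(feats)
print(g.shape)  # (64, 64)
```

Because the spatial dimensions are averaged out, the Gram matrix discards the spatial arrangement of features, which is why the MRF-based approaches above add local patch constraints to recover large-scale structure.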
This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis.
Given an input dynamic texture, statistics of filter responses from the object recognition ConvNet encapsulate the per-frame appearance of the input texture, while statistics of filter responses from the optical flow ConvNet model its dynamics.
Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input.
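The task can be stated concretely as recovering a high-resolution image from a fixed downscaling of it. A toy setup, assuming simple block-average downsampling and a nearest-neighbour baseline (real benchmarks typically use bicubic resampling and a learned model):

```python
import numpy as np

def downscale(hr, factor=4):
    # Block-average downsampling: each LR pixel is the mean of a
    # factor x factor block of HR pixels.
    h, w = hr.shape
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upscale_nearest(lr, factor=4):
    # Naive baseline reconstruction; a super-resolution model would
    # replace this with an inferred high-resolution image.
    return np.repeat(np.repeat(lr, factor, axis=0), factor, axis=1)

hr = np.arange(64, dtype=float).reshape(8, 8)   # ground-truth HR image
lr = downscale(hr)                              # (2, 2) LR observation
sr = upscale_nearest(lr)                        # (8, 8) reconstruction
print(lr.shape, sr.shape)
```

The model is scored on how closely `sr` matches `hr`; the problem is ill-posed because many HR images map to the same LR input.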
Image inpainting techniques have recently shown significant improvements through the use of deep neural networks.