Texture Synthesis
71 papers with code • 0 benchmarks • 3 datasets
The fundamental goal of example-based Texture Synthesis is to generate a texture, usually larger than the input, that faithfully captures all the visual characteristics of the exemplar, yet is neither identical to it nor marred by obvious, unnatural-looking artifacts.
Source: Non-Stationary Texture Synthesis by Adversarial Expansion
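The classical, pre-deep-learning route to this goal is non-parametric patch-based synthesis: grow an output larger than the exemplar by copying exemplar patches whose overlap region agrees with what has already been placed. The following is a minimal sketch of that idea (simplified image quilting without the min-cut seam; the function name and parameter defaults are illustrative, not from any specific paper):

```python
import numpy as np

def synthesize(exemplar, out_size, patch=16, overlap=4, candidates=200, rng=None):
    """Grow an out_size x out_size texture by tiling patches sampled from the
    exemplar, picking each patch so its overlap with already-placed content
    has the smallest squared error (no seam optimization or blending)."""
    rng = np.random.default_rng(rng)
    H, W = exemplar.shape[:2]
    step = patch - overlap
    out = np.zeros((out_size, out_size) + exemplar.shape[2:], dtype=exemplar.dtype)
    for y in range(0, out_size - patch + 1, step):
        for x in range(0, out_size - patch + 1, step):
            best, best_err = None, np.inf
            for _ in range(candidates):
                # Sample a random candidate patch from the exemplar.
                sy = rng.integers(0, H - patch + 1)
                sx = rng.integers(0, W - patch + 1)
                cand = exemplar[sy:sy + patch, sx:sx + patch]
                err = 0.0
                if y > 0:  # overlap with the row above
                    err += np.sum((cand[:overlap].astype(float)
                                   - out[y:y + overlap, x:x + patch].astype(float)) ** 2)
                if x > 0:  # overlap with the patch to the left
                    err += np.sum((cand[:, :overlap].astype(float)
                                   - out[y:y + patch, x:x + overlap].astype(float)) ** 2)
                if err < best_err:
                    best, best_err = cand, err
            out[y:y + patch, x:x + patch] = best
    return out
```

Because every output pixel is copied from the exemplar, local statistics are preserved by construction; the neural methods below instead match feature statistics, which handles non-stationary and large-scale structure better.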
Most implemented papers
ST-MFNet: A Spatio-Temporal Multi-Flow Network for Frame Interpolation
Video frame interpolation (VFI) is currently a very active research topic, with applications spanning computer vision, post-production, and video encoding.
Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks
This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis.
Texture Synthesis Through Convolutional Neural Networks and Spectrum Constraints
This paper presents a significant improvement for the synthesis of texture images using convolutional neural networks (CNNs), making use of constraints on the Fourier spectrum of the results.
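A Fourier-spectrum constraint of this kind is typically enforced by projection: keep the current image's phase and replace its spectrum magnitude with the exemplar's. A minimal sketch of that projection step (assuming grayscale real-valued images; the function name is hypothetical, and in the paper's setting this would alternate with CNN feature-statistics matching):

```python
import numpy as np

def project_spectrum(img, exemplar):
    """Project img onto the set of images whose Fourier magnitude spectrum
    matches the exemplar's: keep img's phase, swap in the target magnitude."""
    F = np.fft.fft2(img, axes=(0, 1))
    target_mag = np.abs(np.fft.fft2(exemplar, axes=(0, 1)))
    phase = np.exp(1j * np.angle(F))
    # For real img/exemplar the product is conjugate-symmetric, so the
    # inverse FFT is real up to floating-point error.
    return np.real(np.fft.ifft2(target_mag * phase, axes=(0, 1)))
```

After this projection the result has exactly the exemplar's power spectrum, which is what makes the constraint effective for regular, quasi-periodic textures.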
Style-Transfer via Texture-Synthesis
Recent work on this problem using convolutional neural networks (CNNs) has ignited renewed interest in the field, owing to the very impressive results obtained.
TextureGAN: Controlling Deep Image Synthesis with Texture Patches
In this paper, we investigate deep image synthesis guided by sketch, color, and texture.
High resolution neural texture synthesis with long range constraints
Experiments show the benefit of the multi-scale scheme for high-resolution textures, and of combining it with additional constraints for regular textures.
Conceptual Compression via Deep Structure and Texture Synthesis
To this end, we propose a novel conceptual compression framework that encodes visual data into compact structure and texture representations and then decodes them in a deep-synthesis fashion, aiming to achieve better visual reconstruction quality, flexible content manipulation, and potential support for various vision tasks.
Generating Diverse Structure for Image Inpainting With Hierarchical VQ-VAE
We propose a two-stage model for diverse inpainting: the first stage generates multiple coarse results, each with a different structure, and the second stage refines each coarse result separately by augmenting its texture.
Aggregated Contextual Transformations for High-Resolution Image Inpainting
For improving texture synthesis, we enhance the discriminator of AOT-GAN by training it with a tailored mask-prediction task.
Real-World Blind Super-Resolution via Feature Matching with Implicit High-Resolution Priors
Unlike image-space methods, our FeMaSR restores HR images by matching distorted LR image features to their distortion-free HR counterparts in our pretrained HR priors, and decoding the matched features to obtain realistic HR images.
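The core matching step described here is a nearest-neighbor lookup in feature space: each distorted feature vector is replaced by its closest entry in a pretrained codebook of "clean" HR features. A minimal sketch of that VQ-style lookup (names and shapes are illustrative; the actual model matches deep CNN features, not raw vectors):

```python
import numpy as np

def match_features(feats, codebook):
    """Replace each feature vector with its nearest neighbor in a pretrained
    codebook. feats: (N, D), codebook: (K, D). Returns the matched (N, D)
    vectors and their (N,) codebook indices."""
    # Squared Euclidean distance from every feature to every codebook entry.
    d2 = ((feats[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    idx = d2.argmin(1)
    return codebook[idx], idx
```

Because the decoder only ever sees codebook entries drawn from sharp HR images, the reconstruction cannot inherit the LR input's distortions, which is the intuition behind matching in feature space rather than image space.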