Texture Synthesis
71 papers with code • 0 benchmarks • 3 datasets
The fundamental goal of example-based Texture Synthesis is to generate a texture, usually larger than the input, that faithfully captures all the visual characteristics of the exemplar, yet is neither identical to it nor exhibits obvious, unnatural-looking artifacts.
Source: Non-Stationary Texture Synthesis by Adversarial Expansion
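To see why this is hard, consider the most naive baseline: growing a larger canvas by copying random patches of the exemplar. The sketch below (plain NumPy, all names illustrative, not taken from any of the papers listed here) does exactly that, and it either repeats the input verbatim or leaves visible seams at patch borders, which is the failure mode the methods on this page try to avoid.

```python
# Naive exemplar-based "synthesis": tile a larger canvas with random exemplar
# patches. Illustrative baseline only; expect visible seams.
import numpy as np

def naive_patch_synthesis(exemplar, out_size, patch=32, seed=0):
    """Fill an output canvas larger than the exemplar with random exemplar patches."""
    rng = np.random.default_rng(seed)
    H, W = out_size
    h, w = exemplar.shape[:2]
    out = np.zeros((H, W) + exemplar.shape[2:], dtype=exemplar.dtype)
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            sy = rng.integers(0, h - patch + 1)   # random source location in the exemplar
            sx = rng.integers(0, w - patch + 1)
            ph, pw = min(patch, H - y), min(patch, W - x)
            out[y:y + ph, x:x + pw] = exemplar[sy:sy + ph, sx:sx + pw]
    return out

# Expand a 128x128 RGB exemplar to 512x512; seams appear at every patch border.
exemplar = np.random.rand(128, 128, 3).astype(np.float32)
result = naive_patch_synthesis(exemplar, (512, 512), patch=32)
```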
Most implemented papers
Pretraining is All You Need for Image-to-Image Translation
We propose to use pretraining to boost general image-to-image translation.
A note on the evaluation of generative models
In particular, we show that three of the currently most commonly used criteria (average log-likelihood, Parzen window estimates, and visual fidelity of samples) are largely independent of each other when the data is high-dimensional.
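As a rough illustration of the Parzen-window criterion mentioned above, the sketch below places an isotropic Gaussian kernel on every generated sample and scores held-out data by its average log-density. The bandwidth `sigma` would normally be tuned on a validation split; the value here is a placeholder.

```python
# Hedged sketch of a Parzen-window log-likelihood estimate over generated samples.
import numpy as np
from scipy.special import logsumexp

def parzen_log_likelihood(generated, test, sigma=0.2):
    """Average log p(test) under a Gaussian KDE centred on the generated samples.

    generated: (n, d) model samples, test: (m, d) held-out data.
    """
    n, d = generated.shape
    # Squared distances between every test point and every generated sample: (m, n)
    sq_dists = ((test[:, None, :] - generated[None, :, :]) ** 2).sum(-1)
    log_kernel = -0.5 * sq_dists / sigma**2
    log_norm = -0.5 * d * np.log(2 * np.pi * sigma**2) - np.log(n)
    return float((logsumexp(log_kernel, axis=1) + log_norm).mean())

# Toy usage: both sets drawn from the same Gaussian, so the score is stable.
rng = np.random.default_rng(0)
score = parzen_log_likelihood(rng.normal(size=(500, 10)), rng.normal(size=(100, 10)))
```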
Awesome Typography: Statistics-Based Text Effects Transfer
It allows our algorithm to produce artistic typography that fits both the local texture patterns and the global spatial distribution of the example.
Improved Texture Networks: Maximizing Quality and Diversity in Feed-forward Stylization and Texture Synthesis
The recent work of Gatys et al., who characterized the style of an image by the statistics of convolutional neural network filters, ignited a renewed interest in the texture generation and image stylization problems.
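The "statistics of convolutional neural network filters" referred to here are usually summarized as Gram matrices of feature maps. Below is a minimal PyTorch sketch of such a Gram-based texture loss; the choice of VGG-19 layers and the equal layer weighting are assumptions for illustration, not the exact setup of any one paper.

```python
# Gram-matrix texture loss in the style of Gatys et al.; layer choice is an assumption.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

def gram_matrix(features):
    """Channel-by-channel correlations of a (B, C, H, W) feature map."""
    b, c, h, w = features.shape
    f = features.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def texture_loss(vgg_features, exemplar, synthesized, layers=(1, 6, 11, 20)):
    """Sum of squared Gram-matrix differences over a few VGG-19 feature maps."""
    loss = synthesized.new_zeros(())
    x, y = exemplar, synthesized
    for i, layer in enumerate(vgg_features):
        x, y = layer(x), layer(y)
        if i in layers:
            loss = loss + F.mse_loss(gram_matrix(y), gram_matrix(x))
        if i >= max(layers):
            break
    return loss

# Usage: optimize the pixels of a noise image so its statistics match the exemplar.
vgg = vgg19(weights="IMAGENET1K_V1").features.eval()   # downloads ImageNet weights
for p in vgg.parameters():
    p.requires_grad_(False)
exemplar = torch.rand(1, 3, 256, 256)                   # stand-in for a real texture image
synth = torch.rand(1, 3, 256, 256, requires_grad=True)  # pixels being optimized
texture_loss(vgg, exemplar, synth).backward()           # gradients drive the synthesis
```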
Towards Metamerism via Foveated Style Transfer
The problem of visual metamerism is defined as finding a family of perceptually indistinguishable, yet physically different images.
Two-Stream Convolutional Networks for Dynamic Texture Synthesis
Given an input dynamic texture, statistics of filter responses from the object recognition ConvNet encapsulate the per-frame appearance of the input texture, while statistics of filter responses from the optical flow ConvNet model its dynamics.
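A rough sketch of how the two streams can be combined into a single loss is shown below. `appearance_net` and `motion_net` are placeholders for the object-recognition and optical-flow feature extractors, and Gram matrices stand in for the "statistics of filter responses"; this is an illustration of the idea, not the paper's implementation.

```python
# Two-stream dynamic texture loss: appearance statistics per frame, motion
# statistics over consecutive frame pairs. Networks are placeholders.
import torch
import torch.nn.functional as F

def gram(f):
    """Channel correlations of a (B, C, H, W) feature map."""
    b, c, h, w = f.shape
    f = f.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def dynamic_texture_loss(appearance_net, motion_net, target, synth):
    """target, synth: videos of shape (T, C, H, W)."""
    loss = synth.new_zeros(())
    # Appearance term: per-frame feature statistics should match the input texture.
    for t in range(synth.shape[0]):
        loss = loss + F.mse_loss(gram(appearance_net(synth[t:t + 1])),
                                 gram(appearance_net(target[t:t + 1])))
    # Dynamics term: statistics of motion features over consecutive frame pairs.
    for t in range(synth.shape[0] - 1):
        pair_s = torch.cat([synth[t:t + 1], synth[t + 1:t + 2]], dim=1)
        pair_t = torch.cat([target[t:t + 1], target[t + 1:t + 2]], dim=1)
        loss = loss + F.mse_loss(gram(motion_net(pair_s)), gram(motion_net(pair_t)))
    return loss
```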
Texture Synthesis with Recurrent Variational Auto-Encoder
A novel loss function, FLTBNK, is used for training the texture synthesizer.
Non-Stationary Texture Synthesis by Adversarial Expansion
We demonstrate that this conceptually simple approach is highly effective for capturing large-scale structures, as well as other non-stationary attributes of the input exemplar.
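A hedged sketch of the adversarial-expansion training setup: a generator maps a small crop of the exemplar to an output twice its size, and a discriminator compares that output against real double-size crops of the same exemplar. The toy architectures and hyperparameters below are placeholders, not the paper's networks, and the reconstruction/style terms of the full objective are omitted.

```python
# Toy adversarial expansion: learn to map k x k crops to 2k x 2k outputs.
import torch
import torch.nn as nn

k = 64
G = nn.Sequential(  # toy fully convolutional "expander": upsample then refine
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
D = nn.Sequential(  # toy patch discriminator (outputs raw logits)
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def random_crop(img, size):
    _, _, h, w = img.shape
    y = torch.randint(0, h - size + 1, (1,)).item()
    x = torch.randint(0, w - size + 1, (1,)).item()
    return img[:, :, y:y + size, x:x + size]

exemplar = torch.rand(1, 3, 400, 400) * 2 - 1    # stand-in for the input texture, in [-1, 1]
for step in range(5):                            # a real run uses many thousands of steps
    small, real_big = random_crop(exemplar, k), random_crop(exemplar, 2 * k)
    fake_big = G(small)
    # Discriminator: real double-size crops vs. generated expansions.
    pred_real, pred_fake = D(real_big), D(fake_big.detach())
    d_loss = bce(pred_real, torch.ones_like(pred_real)) + \
             bce(pred_fake, torch.zeros_like(pred_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator on its expanded output.
    pred_fake = D(fake_big)
    g_loss = bce(pred_fake, torch.ones_like(pred_fake))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```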
FrankenGAN: Guided Detail Synthesis for Building Mass-Models Using Style-Synchronized GANs
The various GANs are synchronized to produce consistent style distributions over buildings and neighborhoods.
TileGAN: Synthesis of Large-Scale Non-Homogeneous Textures
We tackle the problem of texture synthesis in the setting where many input images are given and a large-scale output is required.