Texture Synthesis
71 papers with code • 0 benchmarks • 3 datasets
The fundamental goal of example-based texture synthesis is to generate a texture, usually larger than the input, that faithfully captures the visual characteristics of the exemplar while being neither identical to it nor marred by obvious unnatural-looking artifacts.
Source: Non-Stationary Texture Synthesis by Adversarial Expansion
Latest papers
Neural Texture Synthesis With Guided Correspondence
More importantly, the Guided Correspondence loss can function as a general textural loss in, e.g., training generative networks for real-time controlled synthesis and inversion-based single-image editing.
ClipFace: Text-guided Editing of Textured 3D Morphable Models
Editing and manipulation are controlled through language prompts that adapt the texture and expression of the 3D morphable model.
Long Range Constraints for Neural Texture Synthesis Using Sliced Wasserstein Loss
In the past decade, exemplar-based texture synthesis algorithms have seen strong gains in performance by matching statistics of deep convolutional neural networks.
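The idea of matching statistics, as in the sliced Wasserstein approach above, can be illustrated with a minimal sketch. The snippet below computes a sliced Wasserstein loss between two sets of feature vectors (e.g., CNN activations flattened to shape `(N, C)`): features are projected onto random unit directions, and the 1D Wasserstein-2 distance is taken between the sorted projections. The function name and parameters are illustrative, not from the paper, and the CNN feature extraction step is omitted.

```python
import numpy as np

def sliced_wasserstein_loss(feat_a, feat_b, n_proj=32, seed=0):
    """Sliced Wasserstein-2 distance between two feature sets of
    shape (N, C). Assumes both sets have the same number of samples N."""
    rng = np.random.default_rng(seed)
    c = feat_a.shape[1]
    # Draw random unit directions in feature space.
    proj = rng.standard_normal((c, n_proj))
    proj /= np.linalg.norm(proj, axis=0, keepdims=True)
    # Project features onto each direction; sorting the 1D projections
    # yields the optimal 1D transport coupling.
    pa = np.sort(feat_a @ proj, axis=0)
    pb = np.sort(feat_b @ proj, axis=0)
    # With equal sample counts, the 1D W2 distance is the mean squared
    # difference of the sorted projections, averaged over directions.
    return np.mean((pa - pb) ** 2)
```

In an actual synthesis loop, this loss would be evaluated on deep features of the exemplar and the synthesized image at several network layers and minimized by gradient descent on the image.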
A Structure-Guided Diffusion Model for Large-Hole Image Completion
The structure generator generates an edge image representing plausible structures within the holes, which is then used for guiding the texture generation process.
DeepDC: Deep Distance Correlation as a Perceptual Image Quality Evaluator
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
Keys to Better Image Inpainting: Structure and Texture Go Hand in Hand
We claim that the performance of inpainting algorithms can be better judged by the generated structures and textures.
Texture Generation Using A Graph Generative Adversarial Network And Differentiable Rendering
Novel photo-realistic texture synthesis is an important task for generating novel scenes, including asset generation for 3D simulations.
Pretraining is All You Need for Image-to-Image Translation
We propose to use pretraining to boost general image-to-image translation.
AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars
Our key insight is to take advantage of the powerful vision-language model CLIP for supervising neural human generation, in terms of 3D geometry, texture and animation.
Generalized Rectifier Wavelet Covariance Models For Texture Synthesis
State-of-the-art maximum entropy models for texture synthesis are built from statistics of image representations defined by convolutional neural networks (CNNs).