Texture Synthesis

72 papers with code • 0 benchmarks • 3 datasets

The fundamental goal of example-based Texture Synthesis is to generate a texture, usually larger than the input, that faithfully captures all the visual characteristics of the exemplar, yet is neither identical to it nor marred by obvious, unnatural-looking artifacts.

Source: Non-Stationary Texture Synthesis by Adversarial Expansion
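To make the input/output contract concrete, here is a toy baseline, a minimal sketch and not any of the methods listed below: tile an output larger than the exemplar with patches sampled at random from it. Real example-based methods (patch quilting, neural synthesis) additionally enforce seam coherence and statistics matching; this sketch only illustrates "exemplar in, larger texture out". The function name and patch size are illustrative choices.

```python
import numpy as np

def naive_patch_tiling(exemplar, out_h, out_w, patch=8, rng=None):
    """Toy example-based synthesis: fill an (out_h, out_w) canvas with
    patches copied from random locations in the exemplar. No seam
    handling, so visible block artifacts are expected."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = exemplar.shape[:2]
    out = np.zeros((out_h, out_w) + exemplar.shape[2:], dtype=exemplar.dtype)
    for y in range(0, out_h, patch):
        for x in range(0, out_w, patch):
            # Random top-left corner of a source patch inside the exemplar.
            sy = int(rng.integers(0, h - patch + 1))
            sx = int(rng.integers(0, w - patch + 1))
            # Clip the patch at the canvas border.
            ph = min(patch, out_h - y)
            pw = min(patch, out_w - x)
            out[y:y + ph, x:x + pw] = exemplar[sy:sy + ph, sx:sx + pw]
    return out
```

Every output pixel is copied verbatim from the exemplar, which is exactly the failure mode (verbatim repetition, seam artifacts) that the methods on this page are designed to avoid.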

Latest papers with no code

Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors

no code yet • 7 Dec 2023

Recently, researchers have attempted to improve the genuineness of 3D objects by directly training on 3D datasets, albeit at the cost of low-quality texture generation due to the limited texture diversity in 3D datasets.

Text-Guided 3D Face Synthesis -- From Generation to Editing

no code yet • 1 Dec 2023

In the editing stage, we first employ a pre-trained diffusion model to update facial geometry or texture based on the texts.

SceneTex: High-Quality Texture Synthesis for Indoor Scenes via Diffusion Priors

no code yet • 28 Nov 2023

We propose SceneTex, a novel method for effectively generating high-quality and style-consistent textures for indoor scenes using depth-to-image diffusion priors.

Dual Pipeline Style Transfer with Input Distribution Differentiation

no code yet • 9 Nov 2023

The color and texture dual pipeline architecture (CTDP) suppresses texture representation and artifacts through a masked total variation loss (MTV), and further experiments show that a smoothed input can almost completely eliminate texture representation.
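The paper's own loss is not specified in this snippet, but a masked total variation penalty can be sketched as plain total variation weighted by a spatial mask, so that local pixel differences (i.e. texture) are penalized only where the mask is active. Everything here, the function name, the mask convention, the use of absolute differences, is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

def masked_tv_loss(img, mask):
    """Hypothetical masked total-variation loss.

    img:  (H, W, C) float array.
    mask: (H, W) float array in [0, 1]; 1 = smooth this region.
    Penalizes horizontal and vertical neighbor differences, each
    weighted by the mask value shared by the pixel pair."""
    # Absolute neighbor differences along height and width.
    dh = np.abs(img[1:, :, :] - img[:-1, :, :])
    dw = np.abs(img[:, 1:, :] - img[:, :-1, :])
    # A pair is penalized only if both of its pixels are masked.
    mh = np.minimum(mask[1:, :], mask[:-1, :])[..., None]
    mw = np.minimum(mask[:, 1:], mask[:, :-1])[..., None]
    return float((dh * mh).sum() + (dw * mw).sum())
```

Minimizing this term drives masked regions toward piecewise-constant color, which matches the snippet's observation that smooth inputs suppress texture representation.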

Mesh Neural Cellular Automata

no code yet • 6 Nov 2023

We propose Mesh Neural Cellular Automata (MeshNCA), a method for directly synthesizing dynamic textures on 3D meshes without requiring any UV maps.

Text-to-3D with Classifier Score Distillation

no code yet • 30 Oct 2023

In this paper, we re-evaluate the role of classifier-free guidance in score distillation and make a surprising discovery: the guidance term alone is enough for effective text-to-3D generation.

TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models

no code yet • ICCV 2023

We present TexFusion (Texture Diffusion), a new method to synthesize textures for given 3D geometries, using large-scale text-guided image diffusion models.

DreamSpace: Dreaming Your Room Space with Text-Driven Panoramic Texture Propagation

no code yet • 19 Oct 2023

To ensure meaningful textures aligned with the scene, we develop a novel coarse-to-fine panoramic texture generation approach with dual texture alignment, which considers both the geometry and texture cues of the captured scenes.

Does resistance to style-transfer equal Global Shape Bias? Measuring network sensitivity to global shape configuration

no code yet • 11 Oct 2023

The current benchmark for evaluating a model's global shape bias is a set of style-transferred images, under the assumption that resistance to the style-transfer attack reflects the model's sensitivity to global structure.

Wasserstein Distortion: Unifying Fidelity and Realism

no code yet • 5 Oct 2023

We introduce a distortion measure for images, Wasserstein distortion, that simultaneously generalizes pixel-level fidelity on the one hand and realism or perceptual quality on the other.