Texture Synthesis

71 papers with code • 0 benchmarks • 3 datasets

The fundamental goal of example-based Texture Synthesis is to generate a texture, usually larger than the input, that faithfully captures all the visual characteristics of the exemplar, yet is neither identical to it nor exhibits obvious, unnatural-looking artifacts.

Source: Non-Stationary Texture Synthesis by Adversarial Expansion
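A minimal, illustrative sketch of the "larger than the input" goal: the toy function below tiles an output canvas with patches sampled at random locations from the exemplar. This is an assumption-laden simplification (real methods add overlap matching, quilting, or learned generative models), and all names here are hypothetical.

```python
import numpy as np

def synthesize(exemplar: np.ndarray, out_h: int, out_w: int,
               patch: int = 8, seed: int = 0) -> np.ndarray:
    """Toy example-based texture synthesis: fill an output larger than
    the input by copying randomly located patches from the exemplar.
    No overlap blending is done, so visible seams are expected."""
    rng = np.random.default_rng(seed)
    h, w = exemplar.shape[:2]
    out = np.zeros((out_h, out_w) + exemplar.shape[2:], exemplar.dtype)
    for y in range(0, out_h, patch):
        for x in range(0, out_w, patch):
            # pick a random top-left corner inside the exemplar
            py = rng.integers(0, h - patch + 1)
            px = rng.integers(0, w - patch + 1)
            ph = min(patch, out_h - y)   # clip at the output border
            pw = min(patch, out_w - x)
            out[y:y + ph, x:x + pw] = exemplar[py:py + ph, px:px + pw]
    return out

# 16x16 grayscale exemplar expanded to a 64x64 texture
exemplar = np.arange(256, dtype=np.uint8).reshape(16, 16)
tex = synthesize(exemplar, 64, 64)
print(tex.shape)  # (64, 64)
```

Because every output pixel is copied from the exemplar, the result captures the exemplar's local statistics while the random placement keeps it from being an exact repetition.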

Latest papers with no code

Enhancing Texture Generation with High-Fidelity Using Advanced Texture Priors

no code yet • 8 Mar 2024

Moreover, background noise frequently arises in high-resolution texture synthesis, limiting the practical application of these generation technologies. In this work, we propose a high-resolution, high-fidelity texture restoration technique that uses the rough texture as the initial input to enhance the consistency between the synthesized texture and the initial texture, thereby overcoming the aliasing and blurring caused by the user's structure-simplification operations.

3DTextureTransformer: Geometry Aware Texture Generation for Arbitrary Mesh Topology

no code yet • 7 Mar 2024

Learning to generate textures for a novel 3D mesh given a collection of 3D meshes and real-world 2D images is an important problem with applications in various domains such as 3D simulation, augmented and virtual reality, gaming, architecture, and design.

DragTex: Generative Point-Based Texture Editing on 3D Mesh

no code yet • 4 Mar 2024

Creating 3D textured meshes using generative artificial intelligence has garnered significant attention recently.

Minecraft-ify: Minecraft Style Image Generation with Text-guided Image Editing for In-Game Application

no code yet • 8 Feb 2024

In this paper, we first present *Minecraft-ify*, a character texture generation system tailored to the Minecraft video game for in-game applications.

CTGAN: Semantic-guided Conditional Texture Generator for 3D Shapes

no code yet • 8 Feb 2024

The entertainment industry relies on 3D visual content to create immersive experiences, but traditional methods for creating textured 3D models can be time-consuming and subjective.

DressCode: Autoregressively Sewing and Generating Garments from Text Guidance

no code yet • 29 Jan 2024

For our framework, we first introduce SewingGPT, a GPT-based architecture that integrates cross-attention with text-conditioned embeddings to generate sewing patterns under text guidance.
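The cross-attention conditioning described above can be sketched in a few lines: queries come from the (hypothetical) sewing-pattern token sequence, while keys and values come from the text-condition embedding. This is a single-head NumPy illustration with random weights, not SewingGPT's actual architecture; all shapes and names are assumptions.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_tokens: np.ndarray, text_emb: np.ndarray,
                    d_k: int = 16, seed: int = 0) -> np.ndarray:
    """Single-head cross-attention: pattern tokens attend over the
    text embedding, so each token's output mixes text features.
    Projection weights are random here purely for illustration."""
    rng = np.random.default_rng(seed)
    d_q, d_t = q_tokens.shape[-1], text_emb.shape[-1]
    Wq = rng.standard_normal((d_q, d_k)) / np.sqrt(d_q)
    Wk = rng.standard_normal((d_t, d_k)) / np.sqrt(d_t)
    Wv = rng.standard_normal((d_t, d_k)) / np.sqrt(d_t)
    Q, K, V = q_tokens @ Wq, text_emb @ Wk, text_emb @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (n_tokens, n_text_tokens)
    return attn @ V                         # text-informed token features

tokens = np.random.default_rng(1).standard_normal((10, 32))  # pattern tokens
text = np.random.default_rng(2).standard_normal((5, 64))     # text embedding
out = cross_attention(tokens, text)
print(out.shape)  # (10, 16)
```

In a decoder like the one described, a block of this form would sit between self-attention and the feed-forward layer so that generation at every step is steered by the text condition.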

TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion

no code yet • 17 Jan 2024

In contrast, TextureDreamer can transfer highly detailed, intricate textures from real-world environments to arbitrary objects using only a few casually captured images, potentially democratizing texture creation.

Exploring 3D-aware Lifespan Face Aging via Disentangled Shape-Texture Representations

no code yet • 28 Dec 2023

Existing face aging methods typically either model texture aging alone or use an entangled shape-texture representation to achieve face aging.

Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering

no code yet • 18 Dec 2023

We present Paint-it, a text-driven high-fidelity texture map synthesis method for 3D meshes via neural re-parameterized texture optimization.

Single Mesh Diffusion Models with Field Latents for Texture Generation

no code yet • 14 Dec 2023

We introduce a framework for intrinsic latent diffusion models operating directly on the surfaces of 3D shapes, with the goal of synthesizing high-quality textures.