Texture Synthesis
71 papers with code • 0 benchmarks • 3 datasets
The fundamental goal of example-based Texture Synthesis is to generate a texture, usually larger than the input, that faithfully captures all the visual characteristics of the exemplar, yet is neither identical to it nor exhibits obvious, unnatural-looking artifacts.
Source: Non-Stationary Texture Synthesis by Adversarial Expansion
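As a minimal illustration of the example-based idea, the toy sketch below grows an output larger than the exemplar by tiling it with patches sampled at random from the input. This is only a naive baseline (the function name and patch size are illustrative, not from any cited paper); practical methods such as Efros–Leung neighborhood matching or image quilting add neighborhood search and seam handling to avoid visible patch boundaries.

```python
import numpy as np

def synthesize_texture(exemplar, out_h, out_w, patch=8, seed=0):
    """Toy example-based texture synthesis: fill an output canvas
    (larger than the input) with patches copied from random locations
    in the exemplar. Real methods add neighborhood matching and
    overlap blending so seams between patches are not visible."""
    rng = np.random.default_rng(seed)
    eh, ew = exemplar.shape[:2]
    out = np.zeros((out_h, out_w) + exemplar.shape[2:], dtype=exemplar.dtype)
    for y in range(0, out_h, patch):
        for x in range(0, out_w, patch):
            # Pick a random source patch fully inside the exemplar.
            sy = int(rng.integers(0, eh - patch + 1))
            sx = int(rng.integers(0, ew - patch + 1))
            # Clip at the output border so partial patches still fit.
            h = min(patch, out_h - y)
            w = min(patch, out_w - x)
            out[y:y + h, x:x + w] = exemplar[sy:sy + h, sx:sx + w]
    return out

# A 16x16 grayscale exemplar expanded to a 64x64 output.
exemplar = np.arange(16 * 16, dtype=np.uint8).reshape(16, 16)
result = synthesize_texture(exemplar, 64, 64)
```

Because every output pixel is copied from the exemplar, the result shares the input's local statistics but is not identical to it, which is exactly the trade-off the definition above describes.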
Benchmarks
These leaderboards are used to track progress in Texture Synthesis
Latest papers with no code
Enhancing Texture Generation with High-Fidelity Using Advanced Texture Priors
Background noise frequently arises in high-resolution texture synthesis, limiting the practical application of these generation technologies. In this work, we propose a high-resolution, high-fidelity texture restoration technique that uses the rough texture as the initial input to enhance the consistency between the synthetic texture and the initial texture, thereby overcoming the aliasing and blurring caused by the user's structure simplification operations.
3DTextureTransformer: Geometry Aware Texture Generation for Arbitrary Mesh Topology
Learning to generate textures for a novel 3D mesh given a collection of 3D meshes and real-world 2D images is an important problem with applications in various domains such as 3D simulation, augmented and virtual reality, gaming, architecture, and design.
DragTex: Generative Point-Based Texture Editing on 3D Mesh
Creating 3D textured meshes using generative artificial intelligence has garnered significant attention recently.
Minecraft-ify: Minecraft Style Image Generation with Text-guided Image Editing for In-Game Application
In this paper, we first present Minecraft-ify, a character texture generation system tailored to the Minecraft video game for in-game applications.
CTGAN: Semantic-guided Conditional Texture Generator for 3D Shapes
The entertainment industry relies on 3D visual content to create immersive experiences, but traditional methods for creating textured 3D models can be time-consuming and subjective.
DressCode: Autoregressively Sewing and Generating Garments from Text Guidance
For our framework, we first introduce SewingGPT, a GPT-based architecture integrating cross-attention with text-conditioned embedding to generate sewing patterns with text guidance.
TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion
In contrast, TextureDreamer can transfer highly detailed, intricate textures from real-world environments to arbitrary objects with only a few casually captured images, with the potential to significantly democratize texture creation.
Exploring 3D-aware Lifespan Face Aging via Disentangled Shape-Texture Representations
Existing face aging methods often focus either on modeling texture aging alone or on using an entangled shape-texture representation to achieve face aging.
Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering
We present Paint-it, a text-driven high-fidelity texture map synthesis method for 3D meshes via neural re-parameterized texture optimization.
Single Mesh Diffusion Models with Field Latents for Texture Generation
We introduce a framework for intrinsic latent diffusion models operating directly on the surfaces of 3D shapes, with the goal of synthesizing high-quality textures.