3D Shape Generation
43 papers with code • 0 benchmarks • 1 dataset
(Image credit: Mo et al.)
Benchmarks
These leaderboards are used to track progress in 3D Shape Generation.
Latest papers
DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations
Arguably, this architecture limits the expressive power of generative models and results in low-quality INR generation.
ShapeGPT: 3D Shape Generation with A Unified Multi-modal Language Model
The advent of large language models, which enable flexibility through instruction-driven approaches, has revolutionized many traditional generative tasks; however, large models for 3D data, particularly ones that comprehensively handle 3D shapes together with other modalities, remain under-explored.
EXIM: A Hybrid Explicit-Implicit Representation for Text-Guided 3D Shape Generation
This paper presents a new text-guided technique for generating 3D shapes.
ASUR3D: Arbitrary Scale Upsampling and Refinement of 3D Point Clouds using Local Occupancy Fields
Our proposed implicit occupancy representation enables efficient point classification, effectively discerning points belonging to the surface from non-surface points.
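For intuition, here is a minimal sketch of how a local occupancy field can be used to classify surface versus non-surface points; the network architecture, feature dimensions, and threshold are placeholder assumptions, not the ASUR3D implementation.

```python
import torch
import torch.nn as nn

class OccupancyField(nn.Module):
    """Minimal MLP occupancy field: maps a 3D query point, conditioned on a
    local feature vector, to an occupancy probability."""
    def __init__(self, feat_dim=32, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz, feat):
        # xyz: (N, 3) query points, feat: (N, feat_dim) local features
        return torch.sigmoid(self.net(torch.cat([xyz, feat], dim=-1)))

# Classify candidate upsampled points as surface / non-surface.
field = OccupancyField()
points = torch.rand(1024, 3)       # candidate points (placeholder data)
feats = torch.rand(1024, 32)       # hypothetical local features
occ = field(points, feats).squeeze(-1)
surface_mask = occ > 0.5           # keep points predicted to lie on the surface
```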
SLiMe: Segment Like Me
Then, using the extracted attention maps, the text embeddings of Stable Diffusion are optimized so that each of them learns about a single segmented region from the training image.
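The optimization described above can be sketched roughly as follows; the attention-map extraction is a hypothetical stand-in (the real maps come from Stable Diffusion's cross-attention layers), and all shapes and hyperparameters are assumptions rather than SLiMe's implementation.

```python
import torch
import torch.nn.functional as F

def extract_attention_maps(image, text_embeddings, H=64, W=64):
    # Stand-in for extracting per-token cross-attention maps from a frozen
    # Stable Diffusion UNet given the image and current text embeddings.
    # Here we just project the embeddings to (num_tokens, H, W) so the
    # sketch runs end to end.
    proj = torch.randn(text_embeddings.shape[1], H * W)
    return (text_embeddings @ proj).view(-1, H, W)

def optimize_embeddings(image, mask, text_embeddings, steps=100, lr=1e-2):
    """mask: (H, W) long tensor, one region label per pixel;
    text_embeddings: (num_tokens, dim), one learnable embedding per region."""
    text_embeddings = text_embeddings.clone().requires_grad_(True)
    opt = torch.optim.Adam([text_embeddings], lr=lr)
    for _ in range(steps):
        attn = extract_attention_maps(image, text_embeddings)  # (num_tokens, H, W)
        logits = attn.unsqueeze(0)                              # (1, num_tokens, H, W)
        loss = F.cross_entropy(logits, mask.unsqueeze(0))       # tie token k to region k
        opt.zero_grad(); loss.backward(); opt.step()
    return text_embeddings.detach()

# Usage: one learnable embedding per region of a single annotated image.
image = torch.rand(3, 512, 512)        # training image (unused by the stand-in)
mask = torch.randint(0, 3, (64, 64))   # 3 regions, one label per pixel
emb = optimize_embeddings(image, mask, torch.randn(3, 768))
```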
3D Semantic Subspace Traverser: Empowering 3D Generative Model with Shape Editing Capability
Our method utilizes implicit functions as the 3D shape representation and combines a novel latent-space GAN with a linear subspace model to discover semantic dimensions in the local latent space of 3D shapes.
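A rough illustration of latent-subspace traversal for shape editing, under the assumption of an implicit-function generator and an orthonormal basis of semantic directions; none of this is the paper's actual model.

```python
import torch
import torch.nn as nn

# Hypothetical generator standing in for an implicit-function GAN:
# it maps a latent code to occupancy values at query points.
class ImplicitGenerator(nn.Module):
    def __init__(self, z_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, xyz):
        # z: (z_dim,) latent code, xyz: (N, 3) query points
        z = z.unsqueeze(0).expand(xyz.shape[0], -1)
        return torch.sigmoid(self.net(torch.cat([z, xyz], dim=-1)))

# Linear subspace model: an orthonormal basis of semantic directions in
# latent space; editing a shape means moving the latent code along one axis.
z_dim, k = 128, 8
basis = torch.linalg.qr(torch.randn(z_dim, k)).Q   # (z_dim, k) placeholder directions

G = ImplicitGenerator(z_dim)
z = torch.randn(z_dim)
queries = torch.rand(2048, 3)

edited_z = z + 2.0 * basis[:, 3]     # traverse the 4th semantic dimension
occ_before = G(z, queries)
occ_after = G(edited_z, queries)     # occupancy field of the edited shape
```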
DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation
Recent Diffusion Transformers (e.g., DiT) have demonstrated their powerful effectiveness in generating high-quality 2D images.
Michelangelo: Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation
We present a novel alignment-before-generation approach to tackle the challenging task of generating general 3D shapes based on 2D images or texts.
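As a loose illustration of what aligning shape latents with image or text embeddings before generation can look like, here is a symmetric InfoNCE-style contrastive step; the loss form and dimensions are assumptions, not Michelangelo's exact objective.

```python
import torch
import torch.nn.functional as F

def contrastive_align(shape_emb, cond_emb, temperature=0.07):
    """Symmetric InfoNCE loss pulling each shape embedding toward its paired
    image/text embedding and pushing apart non-matching pairs.
    shape_emb, cond_emb: (B, D) batches of paired embeddings."""
    shape_emb = F.normalize(shape_emb, dim=-1)
    cond_emb = F.normalize(cond_emb, dim=-1)
    logits = shape_emb @ cond_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(shape_emb.shape[0])
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example: align a batch of shape latents with their CLIP image embeddings.
loss = contrastive_align(torch.randn(16, 512), torch.randn(16, 512))
```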
3D VR Sketch Guided 3D Shape Prototyping and Exploration
3D shape modeling is labor-intensive, time-consuming, and requires years of expertise.
DreamStone: Image as Stepping Stone for Text-Guided 3D Shape Generation
The core of our approach is a two-stage feature-space alignment strategy that leverages a pre-trained single-view reconstruction (SVR) model to map CLIP features to shapes: first, the CLIP image feature is mapped to the detail-rich 3D shape space of the SVR model; then, the CLIP text feature is mapped to the same shape space by encouraging CLIP consistency between rendered images and the input text.
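A compressed sketch of such a two-stage alignment follows, with placeholder data, a stand-in renderer, and a stand-in CLIP-consistency score; it illustrates the flow only and is not the DreamStone implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stage 1: learn a mapper from CLIP image features to the SVR model's shape space.
# clip_img_feats / shape_latents stand in for features from a frozen CLIP image
# encoder and latents from a pre-trained single-view reconstruction (SVR) model.
mapper = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 256))
opt = torch.optim.Adam(mapper.parameters(), lr=1e-4)

clip_img_feats = torch.randn(64, 512)    # placeholder paired data
shape_latents = torch.randn(64, 256)
for _ in range(100):
    loss = F.mse_loss(mapper(clip_img_feats), shape_latents)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2 (sketch): given a text prompt, optimize a shape latent so that the
# CLIP-consistency between rendered images of the shape and the prompt is high.
def render(latent):
    # Stand-in for a differentiable renderer of the decoded shape.
    return latent

def clip_consistency(image_like, text_feat):
    # Stand-in for a CLIP similarity between a rendering and the text feature.
    return F.cosine_similarity(image_like[:256].unsqueeze(0),
                               text_feat[:256].unsqueeze(0)).mean()

text_feat = torch.randn(512)             # frozen CLIP text feature (placeholder)
latent = mapper(text_feat.unsqueeze(0)).squeeze(0).detach().requires_grad_(True)
latent_opt = torch.optim.Adam([latent], lr=1e-2)
for _ in range(100):
    loss = -clip_consistency(render(latent), text_feat)
    latent_opt.zero_grad(); loss.backward(); latent_opt.step()
```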