3D Shape Generation
43 papers with code • 0 benchmarks • 1 dataset
Benchmarks
These leaderboards are used to track progress in 3D Shape Generation.
Latest papers
3DQD: Generalized Deep 3D Shape Prior via Part-Discretized Diffusion Process
We develop a generalized 3D shape generation prior model, tailored for multiple 3D tasks including unconditional shape generation, point cloud completion, and cross-modality shape generation.
3D Shape Temporal Aggregation for Video-Based Clothing-Change Person Re-Identification
However, existing Re-ID methods usually generate 3D body shapes without considering identity modeling, which severely weakens the discriminability of 3D human shapes.
Adaptive Spiral Layers for Efficient 3D Representation Learning on Meshes
The success of deep learning models on structured data has generated significant interest in extending their application to non-Euclidean domains.
SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation
To enable interactive generation, our method supports a variety of input modalities that can be easily provided by a human, including images, text, partially observed shapes, and combinations of these, further allowing users to adjust the strength of each input.
TetraDiffusion: Tetrahedral Diffusion Models for 3D Shape Generation
Probabilistic denoising diffusion models (DDMs) have set a new standard for 2D image generation.
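Several of the papers above build on the DDM framework this snippet refers to. As background, a minimal sketch of the standard DDPM forward (noising) process that such models learn to invert is shown below; the point-cloud size and noise schedule are illustrative toy values, not taken from any of the listed papers.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    alphas = 1.0 - betas
    abar = np.cumprod(alphas)[t]          # cumulative product of alphas up to step t
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal((1024, 3))       # toy "shape": a point cloud of 1024 3D points
betas = np.linspace(1e-4, 0.02, 1000)     # a commonly used linear noise schedule
xT = forward_diffuse(x0, 999, betas, rng)
# At the final step, abar_T is near zero, so x_T is close to pure Gaussian noise;
# a trained denoising network runs this process in reverse to generate shapes.
```

A generative model in this family (e.g. for point clouds, latents, or tetrahedral grids, as in the papers above) trains a network to predict the added noise at each step, then samples by iteratively denoising from Gaussian noise.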
Deep Generative Models on 3D Representations: A Survey
In this survey, we thoroughly review the ongoing developments of 3D generative models, including methods that employ 2D and 3D supervision.
LION: Latent Point Diffusion Models for 3D Shape Generation
To advance 3D DDMs and make them useful for digital artists, we require (i) high generation quality, (ii) flexibility for manipulation and applications such as conditional synthesis and shape interpolation, and (iii) the ability to output smooth surfaces or meshes.
Neural Wavelet-domain Diffusion for 3D Shape Generation
This paper presents a new approach for 3D shape generation, enabling direct generative modeling on a continuous implicit representation in wavelet domain.
ISS: Image as Stepping Stone for Text-Guided 3D Shape Generation
Text-guided 3D shape generation remains challenging due to the absence of large paired text-shape data, the substantial semantic gap between these two modalities, and the structural complexity of 3D shapes.