3D Generation
62 papers with code • 0 benchmarks • 0 datasets
Benchmarks
These leaderboards are used to track progress in 3D Generation.
Libraries
Use these libraries to find 3D Generation models and implementations.
Most implemented papers
Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following
We introduce Point-Bind, a 3D multi-modality model aligning point clouds with 2D images, language, audio, and video.
LION: Latent Point Diffusion Models for 3D Shape Generation
To advance 3D DDMs and make them useful for digital artists, we require (i) high generation quality, (ii) flexibility for manipulation and applications such as conditional synthesis and shape interpolation, and (iii) the ability to output smooth surfaces or meshes.
Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures
This unique combination of text and shape guidance allows for increased control over the generation process.
ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation
In comparison, VSD works well with various CFG weights, matching ancestral sampling from diffusion models, and simultaneously improves diversity and sample quality at a common CFG weight (i.e., $7.5$).
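The CFG weight mentioned above refers to classifier-free guidance, where the model's conditional and unconditional noise predictions are blended at each denoising step. A minimal sketch of that blending, with hypothetical function and variable names (the arrays stand in for real noise predictions):

```python
import numpy as np

def cfg_noise(eps_uncond: np.ndarray, eps_cond: np.ndarray, w: float = 7.5) -> np.ndarray:
    """Classifier-free guidance: push the prediction toward the conditional
    direction by guidance weight w. w = 7.5 is a common default; w = 1.0
    recovers the plain conditional prediction."""
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy stand-ins for the two noise predictions at one denoising step.
eps_u = np.zeros(4)
eps_c = np.ones(4)
guided = cfg_noise(eps_u, eps_c, w=7.5)
print(guided)  # each entry is 7.5
```

Larger w trades sample diversity for prompt fidelity, which is why a method that stays stable across a range of w values (as claimed for VSD) is notable.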
StyleAvatar3D: Leveraging Image-Text Diffusion Models for High-Fidelity 3D Avatar Generation
The recent advancements in image-text diffusion models have stimulated research interest in large-scale 3D generative models.
VPP: Efficient Conditional 3D Generation via Voxel-Point Progressive Representation
VPP leverages structured voxel representation in the proposed Voxel Semantic Generator and the sparsity of unstructured point representation in the Point Upsampler, enabling efficient generation of multi-category objects.
MVDream: Multi-view Diffusion for 3D Generation
We introduce MVDream, a diffusion model that is able to generate consistent multi-view images from a given text prompt.
SyncDreamer: Generating Multiview-consistent Images from a Single-view Image
In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent images from a single-view image.
GeoDream: Disentangling 2D and Geometric Priors for High-Fidelity and Consistent 3D Generation
We justify that the refined 3D geometric priors aid in the 3D-aware capability of 2D diffusion priors, which in turn provides superior guidance for the refinement of 3D geometric priors.
V3D: Video Diffusion Models are Effective 3D Generators
To fully unleash the potential of video diffusion to perceive the 3D world, we further introduce a geometric consistency prior and extend the video diffusion model into a multi-view consistent 3D generator.