3D Generation

62 papers with code • 0 benchmarks • 0 datasets


Most implemented papers

Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following

ziyuguo99/point-bind_point-llm 1 Sep 2023

We introduce Point-Bind, a 3D multi-modality model aligning point clouds with 2D images, language, audio, and video.

LION: Latent Point Diffusion Models for 3D Shape Generation

nv-tlabs/LION 12 Oct 2022

To advance 3D DDMs and make them useful for digital artists, we require (i) high generation quality, (ii) flexibility for manipulation and applications such as conditional synthesis and shape interpolation, and (iii) the ability to output smooth surfaces or meshes.
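LION runs the diffusion model in a learned latent space rather than directly on raw points: encode a point cloud into a latent, denoise there, then decode back to points. A toy sketch of that pipeline, where every function below is a hypothetical placeholder and not the nv-tlabs/LION API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for LION's components: a point-cloud VAE
# encoder/decoder pair and a denoiser acting on the latent.
def encode(points):
    """(N, 3) point cloud -> latent vector (toy: centroid)."""
    return points.mean(axis=0)

def decode(latent, n=1024):
    """Latent vector -> point cloud (toy: Gaussian blob around the latent)."""
    return latent + 0.1 * rng.standard_normal((n, 3))

def denoise_step(z, t):
    """Toy reverse-diffusion step: shrink the latent toward the data mode."""
    return z * (1 - 1.0 / t)

# Reverse diffusion in latent space, then decode to a point cloud
z = rng.standard_normal(3)          # start from pure noise
for t in range(50, 1, -1):
    z = denoise_step(z, t)
shape = decode(z)
print(shape.shape)                  # (1024, 3)
```

The point of the latent design is that manipulation and conditional synthesis happen on `z`, a small vector, instead of thousands of raw coordinates.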

Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures

eladrich/latent-nerf CVPR 2023

This unique combination of text and shape guidance allows for increased control over the generation process.

ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation

threestudio-project/threestudio NeurIPS 2023

In comparison, VSD works well with various CFG weights as ancestral sampling from diffusion models and simultaneously improves the diversity and sample quality with a common CFG weight (i.e., $7.5$).
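The CFG weight referred to here blends the model's conditional and unconditional noise predictions, pushing the sample further in the direction the prompt indicates. A minimal sketch of that blending rule (function name and toy values are illustrative, not from the threestudio code):

```python
import numpy as np

def cfg_guidance(eps_uncond, eps_cond, w=7.5):
    """Classifier-free guidance: extrapolate from the unconditional
    noise prediction toward the conditional one with weight w.
    w = 7.5 is the common default mentioned in the excerpt."""
    return eps_uncond + w * (eps_cond - eps_uncond)

# toy noise predictions (hypothetical values)
eps_u = np.array([0.1, -0.2])
eps_c = np.array([0.3, 0.0])
print(cfg_guidance(eps_u, eps_c))  # moves past eps_c in the conditional direction
```

With `w = 1` this reduces to the plain conditional prediction; larger weights trade diversity for prompt adherence, which is the trade-off the VSD comparison is about.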

StyleAvatar3D: Leveraging Image-Text Diffusion Models for High-Fidelity 3D Avatar Generation

icoz69/styleavatar3d 30 May 2023

The recent advancements in image-text diffusion models have stimulated research interest in large-scale 3D generative models.

VPP: Efficient Conditional 3D Generation via Voxel-Point Progressive Representation

qizekun/vpp NeurIPS 2023

VPP leverages structured voxel representation in the proposed Voxel Semantic Generator and the sparsity of unstructured point representation in the Point Upsampler, enabling efficient generation of multi-category objects.

MVDream: Multi-view Diffusion for 3D Generation

bytedance/mvdream 31 Aug 2023

We introduce MVDream, a diffusion model that is able to generate consistent multi-view images from a given text prompt.

SyncDreamer: Generating Multiview-consistent Images from a Single-view Image

liuyuan-pal/syncdreamer 7 Sep 2023

In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent images from a single-view image.

GeoDream: Disentangling 2D and Geometric Priors for High-Fidelity and Consistent 3D Generation

baaivision/GeoDream 29 Nov 2023

We justify that the refined 3D geometric priors aid in the 3D-aware capability of 2D diffusion priors, which in turn provides superior guidance for the refinement of 3D geometric priors.

V3D: Video Diffusion Models are Effective 3D Generators

heheyas/v3d 11 Mar 2024

To fully unleash the potential of video diffusion to perceive the 3D world, we further introduce geometrical consistency prior and extend the video diffusion model to a multi-view consistent 3D generator.