Search Results for author: Jeong Joon Park

Found 18 papers, 5 papers with code

4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling

no code implementations · 29 Nov 2023 · Sherwin Bahmani, Ivan Skorokhodov, Victor Rong, Gordon Wetzstein, Leonidas Guibas, Peter Wonka, Sergey Tulyakov, Jeong Joon Park, Andrea Tagliasacchi, David B. Lindell

Recent breakthroughs in text-to-4D generation rely on pre-trained text-to-image and text-to-video models to generate dynamic 3D scenes.

CurveCloudNet: Processing Point Clouds with 1D Structure

no code implementations · 21 Mar 2023 · Colton Stearns, Davis Rempe, Jiateng Liu, Alex Fu, Sebastien Mascha, Jeong Joon Park, Despoina Paschalidou, Leonidas J. Guibas

Modern depth sensors such as LiDAR operate by sweeping laser beams across the scene, resulting in a point cloud with notable 1D curve-like structures.

CC3D: Layout-Conditioned Generation of Compositional 3D Scenes

no code implementations · ICCV 2023 · Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Xingguang Yan, Gordon Wetzstein, Leonidas Guibas, Andrea Tagliasacchi

In this work, we introduce CC3D, a conditional generative model that synthesizes complex 3D scenes conditioned on 2D semantic scene layouts, trained using single-view images.

Inductive Bias

PartNeRF: Generating Part-Aware Editable 3D Shapes without 3D Supervision

no code implementations · 16 Mar 2023 · Konstantinos Tertikas, Despoina Paschalidou, Boxiao Pan, Jeong Joon Park, Mikaela Angelina Uy, Ioannis Emiris, Yannis Avrithis, Leonidas Guibas

Evaluations on various ShapeNet categories demonstrate the ability of our model to generate editable 3D objects of improved fidelity, compared to previous part-based generative approaches that require 3D supervision and to models relying on NeRFs.

Generating Part-Aware Editable 3D Shapes Without 3D Supervision

1 code implementation · CVPR 2023 · Konstantinos Tertikas, Despoina Paschalidou, Boxiao Pan, Jeong Joon Park, Mikaela Angelina Uy, Ioannis Emiris, Yannis Avrithis, Leonidas Guibas

Evaluations on various ShapeNet categories demonstrate the ability of our model to generate editable 3D objects of improved fidelity, compared to previous part-based generative approaches that require 3D supervision and to models relying on NeRFs.

SinGRAF: Learning a 3D Generative Radiance Field for a Single Scene

no code implementations · CVPR 2023 · Minjung Son, Jeong Joon Park, Leonidas Guibas, Gordon Wetzstein

Generative models have shown great promise in synthesizing photorealistic 3D objects, but they require large amounts of training data.

3D-Aware Video Generation

1 code implementation · 29 Jun 2022 · Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Hao Tang, Gordon Wetzstein, Leonidas Guibas, Luc van Gool, Radu Timofte

Generative models have emerged as an essential building block for many image synthesis and editing tasks.

Image Generation · Video Generation

BACON: Band-limited Coordinate Networks for Multiscale Scene Representation

1 code implementation · CVPR 2022 · David B. Lindell, Dave Van Veen, Jeong Joon Park, Gordon Wetzstein

Coordinate networks are trained to map continuous input coordinates to the value of a signal at each point.
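As a loose illustration of the coordinate-network idea (not BACON's band-limited architecture; all names and parameters here are illustrative), one can fit a 1D signal by regressing on fixed sinusoidal features of the input coordinate. The chosen frequencies bound what the model can represent, which is the intuition behind a band limit:

```python
import numpy as np

# Minimal 1D sketch of a coordinate network: map a continuous
# coordinate x to a signal value via fixed sin/cos features and a
# linear layer, fit here by least squares instead of gradient descent.

def features(x, n_freq=8):
    """Sin/cos features of coordinate x for frequencies 1..n_freq, plus a bias."""
    k = np.arange(1, n_freq + 1)
    return np.concatenate([np.sin(np.outer(x, k)),
                           np.cos(np.outer(x, k)),
                           np.ones((len(x), 1))], axis=1)

x = np.linspace(0.0, 2.0 * np.pi, 200)
signal = np.sin(3 * x) + 0.5 * np.cos(5 * x)        # a band-limited target
w, *_ = np.linalg.lstsq(features(x), signal, rcond=None)

x_new = np.linspace(0.0, 2.0 * np.pi, 50)           # query unseen coordinates
pred = features(x_new) @ w
err = np.max(np.abs(pred - (np.sin(3 * x_new) + 0.5 * np.cos(5 * x_new))))
print(err)                                          # tiny: target lies in the basis
```

Because the target's frequencies (3 and 5) lie inside the feature basis, the fit is essentially exact; a signal above frequency 8 could not be represented, mirroring the band-limited behavior the paper's title refers to.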

Seeing the World in a Bag of Chips

no code implementations · CVPR 2020 · Jeong Joon Park, Aleksander Holynski, Steve Seitz

We address the dual problems of novel view synthesis and environment reconstruction from hand-held RGBD sensors.

Novel View Synthesis

DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation

4 code implementations · CVPR 2019 · Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, Steven Lovegrove

In this work, we introduce DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data.

3D Reconstruction · 3D Shape Representation
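The core interface DeepSDF describes is a network that maps a latent shape code and a 3D query point to a signed distance. A minimal numpy sketch of that interface (random weights stand in for a trained model; the layer sizes and function names are illustrative, not the paper's architecture):

```python
import numpy as np

# Sketch of the DeepSDF interface: f(z, x) -> signed distance, where z is
# a per-shape latent code and x a 3D query point. A tiny untrained MLP
# stands in for the learned network, just to show the shapes involved.

def init_mlp(rng, sizes):
    """Random weights/biases for a fully connected net with given layer sizes."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def sdf_net(params, latent, points):
    """Predict one signed distance per query point under a single latent code."""
    h = np.concatenate([np.broadcast_to(latent, (len(points), len(latent))),
                        points], axis=1)                  # [z | x] per row
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)                        # ReLU hidden layers
    return h[:, 0]                                        # scalar SDF per point

rng = np.random.default_rng(0)
latent_dim, hidden = 8, 32
params = init_mlp(rng, [latent_dim + 3, hidden, hidden, 1])
z = rng.standard_normal(latent_dim)                       # latent code for one shape
queries = rng.uniform(-1.0, 1.0, size=(5, 3))             # five 3D query points
print(sdf_net(params, z, queries).shape)                  # (5,)
```

Negative outputs would mean a query point is inside the shape and positive outside, so the surface itself is the zero level set of the learned function, which is what makes interpolation and completion from partial input possible.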

Surface Light Field Fusion

no code implementations · 6 Sep 2018 · Jeong Joon Park, Richard Newcombe, Steve Seitz

We present an approach for interactively scanning highly reflective objects with a commodity RGBD sensor.

Prevalence and recoverability of syntactic parameters in sparse distributed memories

no code implementations · 21 Oct 2015 · Jeong Joon Park, Ronnel Boettcher, Andrew Zhao, Alex Mun, Kevin Yuh, Vibhor Kumar, Matilde Marcolli

We propose a new method, based on Sparse Distributed Memory (Kanerva Networks), for studying dependency relations between different syntactic parameters in the Principles and Parameters model of Syntax.

Relation
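The Kanerva-network machinery the paper builds on can be illustrated with a generic sparse distributed memory sketch (the dimensions, radius, and setup here are hypothetical, not the paper's experiments): a pattern is written into all hard storage locations within a Hamming radius of its address, and read back by majority vote over the same neighborhood.

```python
import numpy as np

# Generic Kanerva sparse distributed memory sketch: binary vectors are
# stored distributively across many random "hard locations" and recovered
# by majority vote. All sizes below are illustrative.

rng = np.random.default_rng(1)
n_bits, n_locations, radius = 64, 2000, 28

addresses = rng.integers(0, 2, size=(n_locations, n_bits))  # hard locations
counters = np.zeros((n_locations, n_bits), dtype=int)       # bit counters

def neighborhood(addr):
    """Indices of hard locations within the Hamming radius of addr."""
    return np.flatnonzero((addresses != addr).sum(axis=1) <= radius)

def write(addr, data):
    """Increment counters for 1-bits, decrement for 0-bits, at every activated location."""
    counters[neighborhood(addr)] += 2 * data - 1

def read(addr):
    """Majority vote over activated locations recovers the stored pattern."""
    return (counters[neighborhood(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=n_bits)
write(pattern, pattern)                    # autoassociative store
recovered = read(pattern)
print((recovered == pattern).mean())       # fraction of bits recovered
```

The paper's question of "recoverability" maps onto this mechanism: a syntactic-parameter vector stored this way can be probed to see how reliably individual bits are recovered as the memory fills with other patterns.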
