no code implementations • 26 Mar 2024 • Sherwin Bahmani, Xian Liu, Yifan Wang, Ivan Skorokhodov, Victor Rong, Ziwei Liu, Xihui Liu, Jeong Joon Park, Sergey Tulyakov, Gordon Wetzstein, Andrea Tagliasacchi, David B. Lindell
We learn local deformations that conform to the global trajectory using supervision from a text-to-video model.
no code implementations • 5 Mar 2024 • Chris Rockwell, Nilesh Kulkarni, Linyi Jin, Jeong Joon Park, Justin Johnson, David F. Fouhey
Estimating relative camera poses between images has been a central problem in computer vision.
no code implementations • 29 Nov 2023 • Sherwin Bahmani, Ivan Skorokhodov, Victor Rong, Gordon Wetzstein, Leonidas Guibas, Peter Wonka, Sergey Tulyakov, Jeong Joon Park, Andrea Tagliasacchi, David B. Lindell
Recent breakthroughs in text-to-4D generation rely on pre-trained text-to-image and text-to-video models to generate dynamic 3D scenes.
no code implementations • ICCV 2023 • Eric R. Chan, Koki Nagano, Matthew A. Chan, Alexander W. Bergman, Jeong Joon Park, Axel Levy, Miika Aittala, Shalini De Mello, Tero Karras, Gordon Wetzstein
We present a diffusion-based model for 3D-aware generative novel view synthesis from as few as a single input image.
no code implementations • ICCV 2023 • Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Xingguang Yan, Gordon Wetzstein, Leonidas Guibas, Andrea Tagliasacchi
In this work, we introduce CC3D, a conditional generative model that synthesizes complex 3D scenes conditioned on 2D semantic scene layouts, trained using single-view images.
no code implementations • 21 Mar 2023 • Colton Stearns, Davis Rempe, Jiateng Liu, Alex Fu, Sebastien Mascha, Jeong Joon Park, Despoina Paschalidou, Leonidas J. Guibas
Modern depth sensors such as LiDAR operate by sweeping laser beams across the scene, resulting in a point cloud with notable 1D curve-like structures.
no code implementations • CVPR 2023 • Qiuhong Anna Wei, Sijie Ding, Jeong Joon Park, Rahul Sajnani, Adrien Poulenard, Srinath Sridhar, Leonidas Guibas
Humans universally dislike the task of cleaning up a messy room.
1 code implementation • CVPR 2023 • Konstantinos Tertikas, Despoina Paschalidou, Boxiao Pan, Jeong Joon Park, Mikaela Angelina Uy, Ioannis Emiris, Yannis Avrithis, Leonidas Guibas
Evaluations on various ShapeNet categories demonstrate the ability of our model to generate editable 3D objects of improved fidelity, compared to previous part-based generative approaches that require 3D supervision or models relying on NeRFs.
no code implementations • CVPR 2023 • Zhen Wang, Shijie Zhou, Jeong Joon Park, Despoina Paschalidou, Suya You, Gordon Wetzstein, Leonidas Guibas, Achuta Kadambi
One school of thought is to encode a latent vector for each point (point latents).
no code implementations • CVPR 2023 • Minjung Son, Jeong Joon Park, Leonidas Guibas, Gordon Wetzstein
Generative models have shown great promise in synthesizing photorealistic 3D objects, but they require large amounts of training data.
1 code implementation • 29 Jun 2022 • Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Hao Tang, Gordon Wetzstein, Leonidas Guibas, Luc van Gool, Radu Timofte
Generative models have emerged as an essential building block for many image synthesis and editing tasks.
1 code implementation • CVPR 2022 • Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, Ira Kemelmacher-Shlizerman
We introduce a high-resolution, 3D-consistent image and shape generation technique that we call StyleSDF.
1 code implementation • CVPR 2022 • David B. Lindell, Dave Van Veen, Jeong Joon Park, Gordon Wetzstein
These networks are trained to map continuous input coordinates to the value of a signal at each point.
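The idea behind such coordinate networks can be sketched minimally: a small MLP takes a continuous input coordinate and returns the signal's value there, so the signal can be queried at arbitrary resolution. The sketch below uses random, untrained weights and hypothetical layer sizes purely to illustrate the input/output contract; it is not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny coordinate network: a 2-layer MLP with sine activations mapping a
# continuous 1D coordinate x to a signal value. A real model would be
# trained so f(x) matches a target image, audio clip, or 3D scene;
# these random weights and sizes are illustrative only.
W1 = rng.normal(size=(1, 32))
b1 = rng.normal(size=32)
W2 = rng.normal(size=(32, 1))
b2 = rng.normal(size=1)

def f(x):
    """Evaluate the network at continuous coordinates x of shape (N, 1)."""
    h = np.sin(x @ W1 + b1)   # hidden features per query coordinate
    return h @ W2 + b2        # one scalar signal value per coordinate

# The signal can be sampled at any set of continuous positions:
coords = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
vals = f(coords)  # shape (5, 1): one value per query point
```

Because the network is a function of continuous coordinates rather than a discrete grid, the same trained weights can be evaluated at any sampling density.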
no code implementations • CVPR 2020 • Jeong Joon Park, Aleksander Holynski, Steve Seitz
We address the dual problems of novel view synthesis and environment reconstruction from hand-held RGBD sensors.
4 code implementations • CVPR 2019 • Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, Steven Lovegrove
In this work, we introduce DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data.
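The representation itself is easy to state: a signed distance function returns, for any 3D query point, its distance to the nearest surface, negative inside the shape and positive outside, with the surface at the zero level set. The sketch below uses an analytic sphere SDF as a stand-in for the learned, latent-conditioned network in DeepSDF; it illustrates only the sign convention, not the paper's model.

```python
import numpy as np

def sphere_sdf(points, center=np.zeros(3), radius=1.0):
    """Signed distance from each 3D point to a sphere's surface.

    Negative inside, zero on the surface, positive outside -- the same
    convention a DeepSDF network is trained to approximate across a
    class of shapes (this analytic version is illustrative only).
    """
    return np.linalg.norm(points - center, axis=-1) - radius

pts = np.array([[0.0, 0.0, 0.0],   # center: deepest inside
                [1.0, 0.0, 0.0],   # exactly on the surface
                [2.0, 0.0, 0.0]])  # one unit outside
d = sphere_sdf(pts)
# d is approximately [-1.0, 0.0, 1.0]
```

In DeepSDF the analytic function is replaced by an MLP conditioned on a latent shape code, so completing a partial scan reduces to finding the code whose SDF best explains the observed points.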
no code implementations • 6 Sep 2018 • Jeong Joon Park, Richard Newcombe, Steve Seitz
We present an approach for interactively scanning highly reflective objects with a commodity RGBD sensor.
no code implementations • 21 Oct 2015 • Jeong Joon Park, Ronnel Boettcher, Andrew Zhao, Alex Mun, Kevin Yuh, Vibhor Kumar, Matilde Marcolli
We propose a new method, based on Sparse Distributed Memory (Kanerva Networks), for studying dependency relations between different syntactic parameters in the Principles and Parameters model of Syntax.