Search Results for author: Haotong Lin

Found 9 papers, 3 papers with code

Street Gaussians for Modeling Dynamic Urban Scenes

no code implementations • 2 Jan 2024 • Yunzhi Yan, Haotong Lin, Chenxu Zhou, Weijie Wang, Haiyang Sun, Kun Zhan, Xianpeng Lang, Xiaowei Zhou, Sida Peng

We introduce Street Gaussians, a new explicit scene representation that tackles these limitations of existing approaches to modeling dynamic urban scenes.

EasyVolcap: Accelerating Neural Volumetric Video Research

1 code implementation • 11 Dec 2023 • Zhen Xu, Tao Xie, Sida Peng, Haotong Lin, Qing Shuai, Zhiyuan Yu, Guangzhao He, Jiaming Sun, Hujun Bao, Xiaowei Zhou

Volumetric video is a technology that digitally records dynamic events such as artistic performances, sporting events, and remote conversations.

4K4D: Real-Time 4D View Synthesis at 4K Resolution

no code implementations • 17 Oct 2023 • Zhen Xu, Sida Peng, Haotong Lin, Guangzhao He, Jiaming Sun, Yujun Shen, Hujun Bao, Xiaowei Zhou

Experiments show that our representation renders at over 400 FPS on the DNA-Rendering dataset at 1080p resolution and at 80 FPS on the ENeRF-Outdoor dataset at 4K resolution on an RTX 4090 GPU, which is 30x faster than previous methods while achieving state-of-the-art rendering quality.

4K

Neural Scene Chronology

1 code implementation • CVPR 2023 • Haotong Lin, Qianqian Wang, Ruojin Cai, Sida Peng, Hadar Averbuch-Elor, Xiaowei Zhou, Noah Snavely

Specifically, we represent the scene as a space-time radiance field with a per-image illumination embedding, where temporally-varying scene changes are encoded using a set of learned step functions.
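The temporal encoding described above can be illustrated with a minimal sketch: a bank of smooth (sigmoid) step functions with learnable transition times, each of which flips from ~0 to ~1 as the query time crosses its transition. Function and parameter names here are hypothetical, and this is a simplified stand-in for the paper's actual implementation.

```python
import numpy as np

def step_basis(t, transition_times, sharpness=50.0):
    """Evaluate a bank of smooth (sigmoid) step functions at time t.

    Each learned transition time tau_i yields one feature that moves
    from ~0 to ~1 as t crosses tau_i, letting a space-time radiance
    field gate temporally-varying scene content on and off. This is an
    illustrative sketch, not the paper's exact formulation.
    """
    t = np.asarray(t, dtype=np.float64)
    taus = np.asarray(transition_times, dtype=np.float64)
    # Broadcast to shape (..., num_steps): one feature per transition.
    return 1.0 / (1.0 + np.exp(-sharpness * (t[..., None] - taus)))

# Querying before any transition gives features near 0;
# querying after all transitions gives features near 1.
feats_early = step_basis(np.array([0.0]), [0.3, 0.6])
feats_late = step_basis(np.array([1.0]), [0.3, 0.6])
```

In a full model, these step features would condition the radiance field alongside the per-image illumination embedding, so appearance changes at learned points in time rather than drifting continuously.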

Painting 3D Nature in 2D: View Synthesis of Natural Scenes from a Single Semantic Mask

no code implementations • CVPR 2023 • Shangzhan Zhang, Sida Peng, Tianrun Chen, Linzhan Mou, Haotong Lin, Kaicheng Yu, Yiyi Liao, Xiaowei Zhou

We introduce a novel approach that takes a single semantic mask as input to synthesize multi-view consistent color images of natural scenes, trained with a collection of single images from the Internet.

3D-Aware Image Synthesis

Neural 3D Scene Reconstruction with the Manhattan-world Assumption

1 code implementation • CVPR 2022 • Haoyu Guo, Sida Peng, Haotong Lin, Qianqian Wang, Guofeng Zhang, Hujun Bao, Xiaowei Zhou

Based on the Manhattan-world assumption, planar constraints are employed to regularize the geometry in floor and wall regions predicted by a 2D semantic segmentation network.

2D Semantic Segmentation • 3D Reconstruction +2
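The Manhattan-world constraint above can be sketched as a simple regularization loss: normals predicted in floor regions should align with the global up axis, while normals in wall regions should be orthogonal to it. This is a hypothetical simplification (the paper's full formulation also optimizes per-wall horizontal directions); all names are illustrative.

```python
import numpy as np

def manhattan_normal_loss(normals, labels, up=(0.0, 0.0, 1.0)):
    """Planar regularization under the Manhattan-world assumption.

    For pixels labeled floor (label 1) the predicted surface normal
    should align with the global up axis; for wall pixels (label 2)
    it should be orthogonal to it. A sketch, not the paper's exact loss.
    """
    up = np.asarray(up, dtype=np.float64)
    up = up / np.linalg.norm(up)
    n = np.asarray(normals, dtype=np.float64)
    n = n / np.linalg.norm(n, axis=-1, keepdims=True)
    cos = n @ up                          # cosine with the up axis
    floor_loss = 1.0 - cos[labels == 1]   # floors: want cos -> 1
    wall_loss = np.abs(cos[labels == 2])  # walls: want cos -> 0
    return float(np.concatenate([floor_loss, wall_loss]).mean())

# A perfect floor normal (pointing up) and a perfect wall normal
# (horizontal) incur zero loss.
normals = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
labels = np.array([1, 2])
loss = manhattan_normal_loss(normals, labels)
```

In practice the floor/wall labels would come from the 2D semantic segmentation network mentioned in the snippet, and this term would be added to the reconstruction objective.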

Efficient Neural Radiance Fields for Interactive Free-viewpoint Video

no code implementations • 2 Dec 2021 • Haotong Lin, Sida Peng, Zhen Xu, Yunzhi Yan, Qing Shuai, Hujun Bao, Xiaowei Zhou

We propose a novel scene representation, called ENeRF, for the fast creation of interactive free-viewpoint videos.

Depth Estimation • Depth Prediction +1
