Search Results for author: Binbin Huang

Found 8 papers, 4 papers with code

2D Gaussian Splatting for Geometrically Accurate Radiance Fields

no code implementations • 26 Mar 2024 • Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, Shenghua Gao

3D Gaussian Splatting (3DGS) has recently revolutionized radiance field reconstruction, achieving high-quality novel view synthesis and fast rendering without baking.

Novel View Synthesis

Mip-Splatting: Alias-free 3D Gaussian Splatting

1 code implementation • 27 Nov 2023 • Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, Andreas Geiger

Recently, 3D Gaussian Splatting has demonstrated impressive novel view synthesis results, reaching high fidelity and efficiency.

Novel View Synthesis

TSP-Transformer: Task-Specific Prompts Boosted Transformer for Holistic Scene Understanding

1 code implementation • 6 Nov 2023 • Shuo Wang, Jing Li, Zibo Zhao, Dongze Lian, Binbin Huang, Xiaomei Wang, Zhengxin Li, Shenghua Gao

Holistic scene understanding includes semantic segmentation, surface normal estimation, object boundary detection, depth estimation, etc.

Boundary Detection • Depth Estimation +5
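As a rough illustration of the idea suggested by the title (task-specific prompts steering a shared transformer across dense prediction tasks), here is a minimal sketch. All names, dimensions, and the toy attention layer are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical sketch: per-task prompt tokens prepended to a shared encoder.
import numpy as np

rng = np.random.default_rng(0)
d = 16                                    # token dimension (illustrative)
patch_tokens = rng.normal(size=(8, d))    # stand-in for image patch tokens

# One set of learnable prompts per task (random placeholders here).
task_prompts = {
    "segmentation": rng.normal(size=(2, d)),
    "depth":        rng.normal(size=(2, d)),
    "normals":      rng.normal(size=(2, d)),
}

def self_attention(x):
    """Single-head scaled dot-product self-attention (no learned projections)."""
    scores = x @ x.T / np.sqrt(x.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

# The same encoder runs for every task; only the prompt tokens differ,
# so the patch features it returns are specialised per task.
for task, prompts in task_prompts.items():
    tokens = np.concatenate([prompts, patch_tokens], axis=0)
    encoded = self_attention(tokens)
    task_features = encoded[len(prompts):]
    print(task, task_features.shape)
```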

Omni-Line-of-Sight Imaging for Holistic Shape Reconstruction

no code implementations • 21 Apr 2023 • Binbin Huang, Xingyue Peng, Siyuan Shen, Suan Xia, Ruiqian Li, Yanhua Yu, Yuehan Wang, Shenghua Gao, Wenzheng Chen, Shiying Li, Jingyi Yu

The core of our method is to place the object near diffuse walls and augment the LOS scan in the front view with NLOS scans from the surrounding walls, which serve as virtual "mirrors" that trap light toward the object.

Object

3D-aware Image Generation using 2D Diffusion Models

no code implementations • ICCV 2023 • Jianfeng Xiang, Jiaolong Yang, Binbin Huang, Xin Tong

In this paper, we introduce a novel 3D-aware image generation method that leverages 2D diffusion models.

Image Generation

PREF: Phasorial Embedding Fields for Compact Neural Representations

1 code implementation • 26 May 2022 • Binbin Huang, Xinhao Yan, Anpei Chen, Shenghua Gao

We present an efficient frequency-based neural representation termed PREF: a shallow MLP augmented with a phasor volume that covers a significantly broader spectrum than previous Fourier feature mapping or Positional Encoding.
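To give a feel for what a frequency-based field of this kind looks like, here is a toy sketch: learnable complex Fourier coefficients ("phasors") are evaluated at query coordinates and fed to a shallow MLP. The frequency layout, shapes, and weights are assumptions for illustration, not the paper's phasor-volume implementation.

```python
# Toy sketch of a frequency-based field: complex Fourier coefficients
# evaluated at query points, followed by a shallow ReLU MLP.
import numpy as np

rng = np.random.default_rng(0)
num_freqs, feat_dim, hidden = 8, 4, 32

freqs = rng.integers(0, 16, size=(num_freqs, 3))        # 3D frequency indices
coeffs = (rng.normal(size=(num_freqs, feat_dim))
          + 1j * rng.normal(size=(num_freqs, feat_dim)))  # learnable "phasors"

W1 = rng.normal(size=(feat_dim, hidden)) * 0.1           # shallow MLP weights
W2 = rng.normal(size=(hidden, 1)) * 0.1

def field(x):
    """Evaluate the field at points x of shape (N, 3) in [0, 1)^3."""
    phase = 2.0 * np.pi * x @ freqs.T                    # (N, num_freqs)
    basis = np.exp(1j * phase)                           # complex exponentials
    feats = (basis @ coeffs).real                        # (N, feat_dim) features
    return np.maximum(feats @ W1, 0.0) @ W2              # MLP -> scalar output

print(field(rng.random((5, 3))).shape)   # (5, 1)
```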

Look Before You Leap: Learning Landmark Features for One-Stage Visual Grounding

1 code implementation • CVPR 2021 • Binbin Huang, Dongze Lian, Weixin Luo, Shenghua Gao

We then combine the contextual information from the landmark feature convolution module with the target's visual features for grounding.

Descriptive • Object +1
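The snippet above describes fusing landmark context with the target's visual features before predicting a box; the sketch below is an illustrative stand-in only, with hypothetical layer names and shapes rather than the paper's module.

```python
# Illustrative sketch: fuse "landmark" context features with the target's
# visual features, then regress a box from the fused representation.
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 16, 16, 8

visual_feats = rng.normal(size=(H, W, C))      # target's visual feature map
landmark_feats = rng.normal(size=(H, W, C))    # context from a landmark module

fused = np.concatenate([visual_feats, landmark_feats], axis=-1)   # (H, W, 2C)

# A 1x1 "convolution" (per-pixel linear layer), global average pooling,
# and a small head that regresses (cx, cy, w, h).
W_fuse = rng.normal(size=(2 * C, C)) * 0.1
W_box = rng.normal(size=(C, 4)) * 0.1

mixed = np.maximum(fused @ W_fuse, 0.0)        # (H, W, C) after ReLU
pooled = mixed.mean(axis=(0, 1))               # (C,) pooled fused features
box = pooled @ W_box                           # predicted (cx, cy, w, h)
print(box.shape)                               # (4,)
```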
