no code implementations • ICCV 2023 • Chandan Yeshwanth, Yueh-Cheng Liu, Matthias Nießner, Angela Dai
Each scene is captured with a high-end laser scanner at sub-millimeter resolution, along with registered 33-megapixel images from a DSLR camera, and RGB-D streams from an iPhone.
1 code implementation • 24 Oct 2022 • Bolivar Solarte, Chin-Hsuan Wu, Yueh-Cheng Liu, Yi-Hsuan Tsai, Min Sun
In addition, since ground-truth annotations are available neither during training nor at test time, we leverage the entropy of multiple layout estimations as a quantitative measure of the scene's geometric consistency. This allows us to evaluate any layout estimator for hyper-parameter tuning, including model selection, without ground-truth annotations.
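The entropy-based consistency idea above can be illustrated with a minimal sketch (not the paper's exact formulation): given K layout estimates expressed as per-cell occupancy probabilities, average the binary entropy of the mean prediction over cells, so that lower values indicate estimates that agree more closely. The function name and the occupancy-grid representation here are illustrative assumptions.

```python
import math

def average_entropy(estimates):
    """Average per-cell binary entropy over K layout estimates.

    estimates: K lists of per-cell occupancy probabilities (same length).
    Lower values indicate more consistent geometry across estimates.
    This is an illustrative sketch, not the paper's exact metric.
    """
    k = len(estimates)
    n = len(estimates[0])
    total = 0.0
    for cell in range(n):
        # Mean occupancy for this cell across the K estimates.
        p = sum(e[cell] for e in estimates) / k
        if 0.0 < p < 1.0:
            total += -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))
    return total / n

# Perfectly agreeing estimates yield zero entropy; disagreement raises it,
# so a lower score can be used to rank estimators without ground truth.
consistent = [[1.0, 0.0, 1.0]] * 3
mixed = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 1.0]]
print(average_entropy(consistent) < average_entropy(mixed))  # True
```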
1 code implementation • 12 Dec 2021 • Bolivar Solarte, Yueh-Cheng Liu, Chin-Hsuan Wu, Yi-Hsuan Tsai, Min Sun
We present 360-DFPE, a sequential floor-plan estimation method that takes 360° images directly as input, without relying on active sensors or 3D information.
no code implementations • 29 Nov 2021 • Guan-Rong Lu, Yueh-Cheng Liu, Tung-I Chen, Hung-Ting Su, Tsung-Han Wu, Winston H. Hsu
We design a new Masked Gradient Update (MGU) module to generate auxiliary data along the boundary of in-distribution data points.
1 code implementation • ICCV 2021 • Tsung-Han Wu, Yueh-Cheng Liu, Yu-Kai Huang, Hsin-Ying Lee, Hung-Ting Su, Ping-Chia Huang, Winston H. Hsu
Despite the success of deep learning on supervised point cloud semantic segmentation, obtaining large-scale point-by-point manual annotations is still a significant challenge.
no code implementations • CVPR 2021 • Yu-Kai Huang, Yueh-Cheng Liu, Tsung-Han Wu, Hung-Ting Su, Yu-Cheng Chang, Tsung-Lin Tsou, Yu-An Wang, Winston H. Hsu
Dense depth estimation plays a key role in multiple applications such as robotics, 3D reconstruction, and augmented reality.
no code implementations • 10 Apr 2021 • Yueh-Cheng Liu, Yu-Kai Huang, Hung-Yueh Chiang, Hung-Ting Su, Zhe-Yu Liu, Chin-Tang Chen, Ching-Yu Tseng, Winston H. Hsu
Most 3D neural networks are trained from scratch owing to the lack of large-scale labeled 3D datasets.
no code implementations • 3 Mar 2021 • Yu-Kai Huang, Yueh-Cheng Liu, Tsung-Han Wu, Hung-Ting Su, Yu-Cheng Chang, Tsung-Lin Tsou, Yu-An Wang, Winston H. Hsu
Dense depth estimation plays a key role in multiple applications such as robotics, 3D reconstruction, and augmented reality.
1 code implementation • 24 Feb 2021 • Tung-I Chen, Yueh-Cheng Liu, Hung-Ting Su, Yu-Cheng Chang, Yu-Hsiang Lin, Jia-Fong Yeh, Wen-Chin Chen, Winston H. Hsu
While recent progress has significantly boosted few-shot classification (FSC) performance, few-shot object detection (FSOD) remains challenging for modern learning systems.
Ranked #9 on Few-Shot Object Detection on MS-COCO (10-shot)
1 code implementation • 8 Dec 2020 • Chih-Hung Liang, Yu-An Chen, Yueh-Cheng Liu, Winston H. Hsu
Therefore, we build a new dataset containing both RAW images and processed sRGB images, and design a new model that exploits the unique characteristics of RAW images.
no code implementations • 21 Oct 2020 • Kuang-Yu Jeng, Yueh-Cheng Liu, Zhe Yu Liu, Jen-Wei Wang, Ya-Liang Chang, Hung-Ting Su, Winston H. Hsu
We propose an end-to-end grasp detection network, the Grasp Detection Network (GDN), combined with a novel coarse-to-fine (C2F) grasp representation to detect diverse and accurate 6-DoF grasps from point clouds.
no code implementations • 24 Apr 2020 • Yu-Kai Huang, Yueh-Cheng Liu, Tsung-Han Wu, Hung-Ting Su, Winston H. Hsu
The performance of image-based stereo estimation suffers from lighting variations, repetitive patterns, and homogeneous appearance.
3 code implementations • 22 Aug 2019 • Yu-Kai Huang, Tsung-Han Wu, Yueh-Cheng Liu, Winston H. Hsu
We utilize a self-attention mechanism, previously used in image inpainting, to extract more useful information in each convolutional layer, enhancing the completed depth map.
Ranked #2 on Depth Completion on Matterport3D
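As a rough illustration of the self-attention idea in the entry above (not the paper's exact module), scaled dot-product self-attention over a flattened set of feature vectors lets every position re-weight information from all other positions. Everything here — treating features as a plain list of vectors with no learned projections — is a simplifying assumption for the sketch.

```python
import math

def self_attention(x):
    """Scaled dot-product self-attention without learned projections.

    x: list of n feature vectors (lists of floats), all of dimension d.
    Returns n output vectors, each a softmax-weighted mix of all inputs.
    Illustrative sketch only; real modules add query/key/value projections.
    """
    d = len(x[0])
    scale = math.sqrt(d)
    out = []
    for q in x:
        # Similarity of this query against every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale for k in x]
        # Numerically stable softmax over the scores.
        m = max(scores)
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        w = [wi / z for wi in w]
        # Output is the attention-weighted average of all value vectors.
        out.append([sum(wi * v[j] for wi, v in zip(w, x)) for j in range(d)])
    return out

feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
attended = self_attention(feats)
print(len(attended), len(attended[0]))  # 3 2
```

In a depth-completion network, the same weighting would operate over spatial positions of a convolutional feature map, letting reliable depth regions inform missing ones.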
1 code implementation • 1 Aug 2019 • Hung-Yueh Chiang, Yen-Liang Lin, Yueh-Cheng Liu, Winston H. Hsu
We present a new unified point-based framework for 3D point cloud segmentation that effectively optimizes pixel-level features, geometrical structures and global context priors of an entire scene.