Search Results for author: Juan-Ting Lin

Found 6 papers, 3 papers with code

Depth Estimation from Monocular Images and Sparse Radar Data

1 code implementation · 30 Sep 2020 · Juan-Ting Lin, Dengxin Dai, Luc van Gool

We present a comprehensive study of the fusion between RGB images and Radar measurements from different perspectives and propose a working solution based on our observations.

Depth Estimation
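The abstract mentions fusing RGB images with Radar measurements. As a minimal illustration of the simplest fusion strategy one might study (early fusion), the sketch below stacks a sparse radar depth map as an extra input channel next to the RGB channels; the function name and data layout are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def early_fuse(rgb, radar_depth):
    """Early-fusion sketch (assumption, not the authors' method):
    append the sparse radar depth map as a fourth input channel.

    rgb:         (H, W, 3) float array
    radar_depth: (H, W) float array, zero where there is no radar return
    returns:     (H, W, 4) array ready for a convolutional network
    """
    return np.concatenate([rgb, radar_depth[..., None]], axis=-1)
```

Early fusion is only one of several strategies the paper compares; mid- and late-fusion variants would instead merge features inside the network.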

Plug-and-Play: Improve Depth Estimation via Sparse Data Propagation

2 code implementations · 20 Dec 2018 · Tsun-Hsuan Wang, Fu-En Wang, Juan-Ting Lin, Yi-Hsuan Tsai, Wei-Chen Chiu, Min Sun

We propose a novel plug-and-play (PnP) module that improves depth prediction by taking arbitrary patterns of sparse depths as input.

Depth Estimation · Depth Prediction
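The idea of propagating arbitrary sparse depth measurements into a dense prediction can be sketched as test-time gradient refinement of an intermediate feature. The toy decoder, names, and hyperparameters below are assumptions for illustration only, not the authors' implementation; in the paper the gradient flows through spatial layers, so sparse points also influence their neighbours, whereas this per-pixel decoder only shows the update rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the last stage of a pretrained depth
# network: a linear per-pixel "decoder" from features to depth.
W = rng.normal(size=(8, 1)) * 0.1

def decode(feat):
    # feat: (num_pixels, 8) -> depth: (num_pixels, 1)
    return feat @ W

def pnp_refine(feat, sparse_depth, mask, steps=100, lr=0.5):
    """PnP-spirit sparse-depth propagation (sketch): at test time,
    gradient-descend on the depth error at the sparsely measured
    pixels with respect to an intermediate feature, then re-decode
    the full depth map. `mask` is 1 where a sparse depth exists."""
    feat = feat.copy()
    n = max(mask.sum(), 1)
    for _ in range(steps):
        residual = (decode(feat) - sparse_depth) * mask  # masked error
        feat -= lr * (residual @ W.T) / n                # dL/dfeat for masked MSE
    return decode(feat)
```

Because the network weights stay frozen and only a feature is updated, a module like this can be plugged into an existing depth estimator without retraining, which matches the "plug-and-play" framing of the abstract.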

Self-Supervised Learning of Depth and Camera Motion from 360° Videos

no code implementations · 13 Nov 2018 · Fu-En Wang, Hou-Ning Hu, Hsien-Tzu Cheng, Juan-Ting Lin, Shang-Ta Yang, Meng-Li Shih, Hung-Kuo Chu, Min Sun

We propose a novel self-supervised learning approach for predicting the omnidirectional depth and camera motion from a 360° video.

Depth And Camera Motion · Depth Prediction · +3

Liquid Pouring Monitoring via Rich Sensory Inputs

no code implementations · ECCV 2018 · Tz-Ying Wu, Juan-Ting Lin, Tsun-Hsuan Wang, Chan-Wei Hu, Juan Carlos Niebles, Min Sun

In a closed-loop system, the ability to monitor the state of the task via rich sensory information is important but often understudied.

Omnidirectional CNN for Visual Place Recognition and Navigation

no code implementations · 12 Mar 2018 · Tsun-Hsuan Wang, Hung-Jui Huang, Juan-Ting Lin, Chan-Wei Hu, Kuo-Hao Zeng, Min Sun

Given a visual input, the task of the O-CNN is not to retrieve an exactly matching place exemplar, but to retrieve the closest place exemplar and estimate the relative distance between the input and that place.

Navigate · Visual Place Recognition
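The retrieval behaviour the abstract describes, finding the closest place exemplar rather than an exact match, can be sketched as a nearest-neighbour lookup in an embedding space. The embeddings and function below are illustrative assumptions; in the paper, the relative distance is estimated by the network itself, whereas here the embedding distance merely stands in for it.

```python
import numpy as np

def retrieve_closest(query_emb, place_embs):
    """Nearest-exemplar retrieval sketch (not the O-CNN architecture):
    return the index of the closest place exemplar and its distance.

    query_emb:  (D,) embedding of the visual input
    place_embs: (N, D) embeddings of the stored place exemplars
    """
    dists = np.linalg.norm(place_embs - query_emb, axis=1)
    idx = int(np.argmin(dists))
    return idx, float(dists[idx])
```

Retrieving the closest exemplar plus a distance estimate, instead of requiring an exact match, is what lets such a system localize inputs that fall between the stored places.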
