no code implementations • 1 Apr 2024 • Ling Gao, Daniel Gehrig, Hang Su, Davide Scaramuzza, Laurent Kneip
To recover the full linear camera velocity, we fuse observations from multiple lines with a novel velocity averaging scheme that relies on a geometrically motivated residual and thus solves the problem more efficiently than previous schemes, which minimize an algebraic residual.
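A minimal sketch of what such a fusion could look like. Purely for illustration (this is not the paper's actual residual), assume each observed line i contributes a linear constraint a_i · v = b_i on the linear velocity v; fusing many lines then reduces to a least-squares solve over the stacked constraints:

```python
import numpy as np

rng = np.random.default_rng(0)
v_true = np.array([0.3, -0.1, 1.0])  # ground-truth linear velocity (made up)

# Hypothetical per-line constraints a_i . v = b_i, with a little noise.
A = rng.normal(size=(8, 3))                      # 8 line observations
b = A @ v_true + rng.normal(scale=1e-3, size=8)  # noisy measurements

# "Averaging" the per-line velocity information = least-squares fusion.
v_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(v_est, 2))
```

With well-conditioned constraints and small noise, the recovered velocity matches the ground truth closely; the paper's geometric residual replaces this algebraic toy version.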
no code implementations • ICCV 2023 • Ling Gao, Hang Su, Daniel Gehrig, Marco Cannici, Davide Scaramuzza, Laurent Kneip
Event-based cameras are ideal for line-based motion estimation, since they predominantly respond to edges in the scene.
no code implementations • 4 Jul 2022 • Ling Gao, Yuxuan Liang, Jiaqi Yang, Shaoxun Wu, Chenyu Wang, Jiaben Chen, Laurent Kneip
Event cameras have recently gained in popularity as they hold strong potential to complement regular cameras in situations of high dynamics or challenging illumination.
no code implementations • 10 Jun 2022 • Xin Peng, Ling Gao, Yifu Wang, Laurent Kneip
The practical validity of our approach is demonstrated by a successful application to three different event camera motion estimation problems.
no code implementations • ECCV 2020 • Xin Peng, Yifu Wang, Ling Gao, Laurent Kneip
The practical validity of our approach is supported by a highly successful application to AGV motion estimation with a downward-facing event camera, a challenging scenario in which the sensor experiences fronto-parallel motion in front of noisy, fast-moving textures.
no code implementations • 1 Mar 2022 • Ling Gao, Laurent Kneip
Our approach relies on robust ceiling and ground plane detection, which solves part of the pose and supports the segmentation of vertical structural elements such as walls and pillars.
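The robust plane-detection step such a pipeline builds on can be illustrated with a standard RANSAC plane fit. This is a generic sketch, not the paper's detector; the synthetic point cloud, threshold, and iteration count are made up:

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, rng=None):
    """Minimal RANSAC plane fit: returns (unit normal n, offset d) with
    n . p + d ~ 0 for inliers."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best = 0, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p0
        inliers = np.sum(np.abs(points @ n + d) < tol)
        if inliers > best_inliers:
            best_inliers, best = inliers, (n, d)
    return best

rng = np.random.default_rng(4)
# Synthetic ground plane z = 0 contaminated with 20% uniform outliers.
ground = np.column_stack([rng.uniform(-1, 1, (160, 2)), np.zeros(160)])
outliers = rng.uniform(-1, 1, (40, 3))
pts = np.vstack([ground, outliers])

n, d = ransac_plane(pts)
print(np.round(np.abs(n), 2), round(d, 2))
```

The fitted normal aligns with the z-axis; in the paper's setting the detected ground/ceiling normals constrain part of the camera pose directly.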
no code implementations • 1 Mar 2022 • Ling Gao, Junyan Su, Jiadi Cui, Xiangchen Zeng, Xin Peng, Laurent Kneip
We addressed this difficulty by introducing the first globally optimal, correspondence-less solution to plane-based Ackermann motion estimation.
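For illustration, the Ackermann (circular-arc) motion model underlying this parameterization can be written down directly. The 2D convention below (x forward, y left, arc radius r, heading change theta) is an assumption for the sketch, not taken from the paper:

```python
import numpy as np

def ackermann_pose(theta, r):
    """2D pose after driving a circular arc of radius r through heading
    change theta: under Ackermann steering the planar motion has a single
    degree of freedom once the turning radius is fixed."""
    t = np.array([r * np.sin(theta), r * (1.0 - np.cos(theta))])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R, t

R, t = ackermann_pose(np.pi / 2, 2.0)  # quarter turn on a 2 m radius arc
print(np.round(t, 2))
```

For theta → 0 the translation degenerates to straight-line motion (r·theta, 0), which is why a single scalar pair suffices to describe the vehicle displacement.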
1 code implementation • Findings (ACL) 2022 • Rui Cao, Yihao Wang, Yuxin Liang, Ling Gao, Jie Zheng, Jie Ren, Zheng Wang
We define a maximum traceable distance metric, through which we learn to what extent text contrastive learning benefits from the historical information of negative samples.
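The intuition behind a "traceable distance" for queued negatives can be sketched with a toy memory queue, as used in MoCo-style contrastive learning. This illustrates the idea of negative-sample staleness only; it is not the paper's formal metric:

```python
from collections import deque

QUEUE_SIZE = 4
queue = deque(maxlen=QUEUE_SIZE)  # FIFO memory of negatives; old entries drop off

def enqueue(embedding, step):
    """Store a negative sample together with the step at which it was encoded."""
    queue.append((embedding, step))

def max_traceable_distance(current_step):
    """Age (in training steps) of the oldest negative still reusable."""
    return max(current_step - s for _, s in queue) if queue else 0

for step in range(10):
    enqueue(f"emb@{step}", step)

# Only steps 6..9 remain in the size-4 queue, so the oldest negative is 3 steps old.
print(max_traceable_distance(9))
```

The queue length bounds how far back in training history the reused negatives can reach, which is the quantity the paper's metric makes precise.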
no code implementations • 20 Oct 2021 • Yihao Wang, Ling Gao, Jie Ren, Rui Cao, Hai Wang, Jie Zheng, Quanli Gao
In detail, we train a DNN model (termed the pre-model) to predict, from physical characteristics of the image task (e.g., brightness, saturation), which object detection model to use for the incoming task and which edge server to offload it to.
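A sketch of the routing idea, with hand-computed features and a threshold rule standing in for the trained pre-model; the detector names and thresholds below are hypothetical:

```python
import numpy as np

def task_features(img):
    """Cheap physical characteristics of an RGB image task (values in [0, 1])."""
    brightness = img.mean()
    # Per-pixel saturation as (max - min) / max over the RGB channels.
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    saturation = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-8), 0.0).mean()
    return brightness, saturation

def pick_model(img):
    """Stand-in for the learned pre-model: route easy (bright, saturated)
    images to a lightweight detector, hard ones to a heavier one."""
    brightness, saturation = task_features(img)
    return "light-detector" if brightness > 0.4 and saturation > 0.2 else "heavy-detector"

rng = np.random.default_rng(1)
bright = rng.uniform(0.5, 1.0, size=(32, 32, 3))
dark = rng.uniform(0.0, 0.15, size=(32, 32, 3))
print(pick_model(bright), pick_model(dark))
```

In the actual system this decision is learned rather than hand-thresholded, and the same features also drive the choice of edge server.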
no code implementations • 7 Jul 2021 • Yifu Wang, Jiaqi Yang, Xin Peng, Peng Wu, Ling Gao, Kun Huang, Jiaben Chen, Laurent Kneip
We present a new solution to tracking and mapping with an event camera.
1 code implementation • 12 Apr 2021 • Yuxin Liang, Rui Cao, Jie Zheng, Jie Ren, Ling Gao
We train the weights on word similarity tasks and show that the processed embeddings are more isotropic.
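The isotropy claim can be made concrete with a generic post-processing sketch: remove the mean and project out the top principal component, then score isotropy with a partition-function-style ratio over principal directions. This is a standard illustration, not the paper's method:

```python
import numpy as np

def isotropy(E):
    """Partition-function isotropy score in (0, 1]; 1.0 = perfectly isotropic."""
    Ec = E - E.mean(axis=0)
    _, _, Vt = np.linalg.svd(Ec, full_matrices=False)
    Z = np.exp(Ec @ Vt.T).sum(axis=0)  # Z(a) along each principal direction a
    return Z.min() / Z.max()

def postprocess(E, d=1):
    """Remove the mean and project out the top-d principal components."""
    Ec = E - E.mean(axis=0)
    _, _, Vt = np.linalg.svd(Ec, full_matrices=False)
    return Ec - (Ec @ Vt[:d].T) @ Vt[:d]

rng = np.random.default_rng(2)
# Anisotropic toy embeddings: isotropic noise plus one dominant shared direction.
E = rng.normal(size=(200, 16)) + 5.0 * rng.normal(size=(200, 1)) * np.ones(16)

before, after = isotropy(E), isotropy(postprocess(E))
print(round(before, 3), round(after, 3))
```

Removing the dominant direction spreads variance more evenly across dimensions, so the score moves toward 1.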
no code implementations • 21 Oct 2018 • Qing Qin, Jie Ren, Jialong Yu, Ling Gao, Hai Wang, Jie Zheng, Yansong Feng, Jianbin Fang, Zheng Wang
We experimentally show how two mainstream compression techniques, data quantization and pruning, perform on these network architectures, and what the implications of compression are for model storage size, inference time, energy consumption, and performance metrics.
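What the two compression techniques do can be sketched in a few lines: magnitude pruning zeroes the smallest weights, and uniform symmetric quantization rounds weights onto a low-bit integer grid. This is a generic illustration on random weights, not the paper's experimental setup:

```python
import numpy as np

def prune(w, sparsity=0.5):
    """Magnitude pruning: zero out the fraction `sparsity` of smallest |w|."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantize(w, bits=8):
    """Uniform symmetric quantization to `bits`-bit integers and back."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(3)
w = rng.normal(size=1000).astype(np.float32)

wp = prune(w, 0.5)       # ~50% of entries become exactly zero
wq = quantize(w, 8)      # per-weight error bounded by half a quantization step
print((wp == 0).mean(), float(np.abs(w - wq).max()))
```

Sparsity and reduced bit-width are what translate into the smaller storage, faster inference, and lower energy measured in the paper, at the cost of the accuracy effects it quantifies.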