Search Results for author: Xiyang Wang

Found 9 papers, 4 papers with code

Localization-Guided Track: A Deep Association Multi-Object Tracking Framework Based on Localization Confidence of Detections

1 code implementation • 18 Sep 2023 • Ting Meng, Chunyun Fu, Mingguang Huang, Xiyang Wang, JiaWei He, Tao Huang, Wankai Shi

However, in terms of the detection confidence that fuses classification and localization, objects with low detection confidence may have inaccurate localization but clear appearance; similarly, objects with high detection confidence may have inaccurate localization or unclear appearance. Yet these objects are not further classified.

Multi-Object Tracking
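A hedged sketch of the idea in the snippet above: rather than relying on one fused score, detections are grouped by localization confidence so that well-localized and poorly-localized boxes can be associated differently. The field names and threshold below are illustrative assumptions, not values from the paper.

```python
def partition_by_localization(detections, loc_thresh=0.7):
    """Split detections into high/low localization-confidence groups.

    Each detection is a dict with a hypothetical 'loc_conf' key; its separate
    classification confidence (e.g. 'cls_conf') is deliberately not fused in.
    """
    high, low = [], []
    for det in detections:
        # Gate purely on localization quality, independent of class score
        (high if det["loc_conf"] >= loc_thresh else low).append(det)
    return high, low
```

Under this split, well-localized boxes can be matched by geometry first, while poorly-localized boxes with clear appearance can fall back to appearance cues.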

You Only Need Two Detectors to Achieve Multi-Modal 3D Multi-Object Tracking

1 code implementation • 18 Apr 2023 • Xiyang Wang, Chunyun Fu, JiaWei He, Mingguang Huang, Ting Meng, Siyu Zhang, Hangning Zhou, Ziyao Xu, Chi Zhang

In the classical tracking-by-detection (TBD) paradigm, detection and tracking are separately and sequentially conducted, and data association must be properly performed to achieve satisfactory tracking performance.

3D Multi-Object Tracking • Object • +3
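The association step that the snippet above refers to can be sketched minimally: detection runs first, then existing tracks and new detections are matched by spatial overlap. Greedy IoU matching here is a generic stand-in for illustration, not the paper's actual method, and the `(x1, y1, x2, y2)` box layout is an assumption.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, iou_min=0.3):
    """Greedily match each track to its best unmatched detection."""
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True,
    )
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_min:
            break  # remaining pairs overlap too little to be the same object
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches
```

For example, `associate([(0, 0, 10, 10)], [(1, 1, 11, 11), (50, 50, 60, 60)])` matches the track to the first, overlapping detection and leaves the distant one unmatched.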

3D Multi-Object Tracking Based on Uncertainty-Guided Data Association

1 code implementation • 3 Mar 2023 • JiaWei He, Chunyun Fu, Xiyang Wang

In the existing literature, most 3D multi-object tracking algorithms based on the tracking-by-detection framework employ deterministic tracks and detections for similarity calculation in the data association stage.

3D Multi-Object Tracking
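A hedged sketch of what "uncertainty-guided" association can mean in contrast to the deterministic similarity above: the Euclidean distance between a track and a detection is replaced by a Mahalanobis distance that weights each axis by the track's state covariance. The 2D state and closed-form 2x2 inverse below are illustrative simplifications, not the paper's actual state model.

```python
def mahalanobis_sq(mean, cov, det):
    """Squared Mahalanobis distance between a track mean and a detection.

    mean, det: 2-vectors (e.g. an x, y centre);
    cov: 2x2 covariance given as [[a, b], [c, d]].
    """
    (a, b), (c, d) = cov
    inv_det = 1.0 / (a * d - b * c)
    dx, dy = det[0] - mean[0], det[1] - mean[1]
    # diff^T * cov^{-1} * diff, expanded with the closed-form 2x2 inverse
    return inv_det * (d * dx * dx - (b + c) * dx * dy + a * dy * dy)
```

With `mean = (0, 0)` and `cov = [[1, 0], [0, 4]]`, a detection at `(0, 2)` has squared Euclidean distance 4 but squared Mahalanobis distance 1: displacement along the uncertain axis is penalized less, which is the point of letting uncertainty guide the association cost.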

DeepFusionMOT: A 3D Multi-Object Tracking Framework Based on Camera-LiDAR Fusion with Deep Association

1 code implementation • 24 Feb 2022 • Xiyang Wang, Chunyun Fu, Zhankun Li, Ying Lai, JiaWei He

This association mechanism tracks an object in the 2D domain when the object is far away and detected only by the camera, and updates the 2D trajectory with 3D information once the object enters the LiDAR field of view, achieving a smooth fusion of 2D and 3D trajectories.

3D Multi-Object Tracking • Object
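The 2D-to-3D handover described above can be sketched as a track that starts as a camera-only 2D trajectory and is promoted once a LiDAR detection arrives. The class and attribute names are illustrative assumptions, not DeepFusionMOT's actual code.

```python
class FusedTrack:
    """Minimal sketch of a trajectory that fuses 2D and 3D detections.

    Starts camera-only; gains a 3D state when the object enters the LiDAR
    field of view, while the 2D box keeps being refreshed either way.
    """

    def __init__(self, box2d):
        self.box2d = box2d
        self.box3d = None  # unknown until a LiDAR detection is associated

    def update(self, box2d=None, box3d=None):
        if box2d is not None:
            self.box2d = box2d
        if box3d is not None:
            self.box3d = box3d  # promote: trajectory now carries 3D state

    @property
    def dim(self):
        """2 while camera-only, 3 once LiDAR information has been fused."""
        return 3 if self.box3d is not None else 2
```

Keeping one track object across the handover, rather than spawning a new 3D track, is what makes the fusion of the 2D and 3D trajectories seamless.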

Hierarchical View Predictor: Unsupervised 3D Global Feature Learning through Hierarchical Prediction among Unordered Views

no code implementations • 8 Aug 2021 • Zhizhong Han, Xiyang Wang, Yu-Shen Liu, Matthias Zwicker

To mine highly discriminative information from unordered views, HVP performs a novel hierarchical view prediction over a view pair, and aggregates the knowledge learned from the predictions in all view pairs into a global feature.

Retrieval

3DViewGraph: Learning Global Features for 3D Shapes from A Graph of Unordered Views with Attention

no code implementations • 17 May 2019 • Zhizhong Han, Xiyang Wang, Chi-Man Vong, Yu-Shen Liu, Matthias Zwicker, C. L. Philip Chen

Then, the content and spatial information of each pair of view nodes are encoded by a novel spatial pattern correlation, where the correlation is computed among latent semantic patterns.

Y^2Seq2Seq: Cross-Modal Representation Learning for 3D Shape and Text by Joint Reconstruction and Prediction of View and Word Sequences

no code implementations • 7 Nov 2018 • Zhizhong Han, Mingyang Shang, Xiyang Wang, Yu-Shen Liu, Matthias Zwicker

A recent method employs 3D voxels to represent 3D shapes, but this limits the approach to low resolutions due to the computational cost caused by the cubic complexity of 3D voxels.

3D Shape Representation • Cross-Modal Retrieval • +2
