Search Results for author: Shiao Wang

Found 5 papers, 5 papers with code

Mamba-FETrack: Frame-Event Tracking via State Space Model

2 code implementations • 28 Apr 2024 • Ju Huang, Shiao Wang, Shuai Wang, Zhe Wu, Xiao Wang, Bo Jiang

Specifically, our Mamba-based tracker achieves 43.5/55.6 on the SR/PR metrics, while the ViT-S based tracker (OSTrack) obtains 40.0/50.9.

Object Localization
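
The SR/PR numbers quoted for Mamba-FETrack above are standard single-object-tracking metrics. As an illustrative sketch only (assumed OTB-style definitions, not code from Mamba-FETrack or OSTrack, and the frame-event benchmarks' exact protocol may differ): SR is taken as the area under the success curve over IoU thresholds, and PR as the fraction of frames whose predicted center falls within 20 pixels of the ground truth.

```python
# Illustrative OTB-style SR/PR computation (assumed definitions, not the paper's evaluation code).
import numpy as np

def iou(a, b):
    """IoU of two boxes in [x, y, w, h] format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def sr_pr(pred, gt, px_thresh=20.0):
    """SR: mean fraction of frames with IoU above thresholds swept over [0, 1].
       PR: fraction of frames whose center error is at most px_thresh pixels."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    ious = np.array([iou(p, g) for p, g in zip(pred, gt)])
    sr = np.mean([(ious > t).mean() for t in np.linspace(0, 1, 21)])
    err = np.linalg.norm((pred[:, :2] + pred[:, 2:] / 2) - (gt[:, :2] + gt[:, 2:] / 2), axis=1)
    pr = (err <= px_thresh).mean()
    return sr, pr
```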

State Space Model for New-Generation Network Alternative to Transformers: A Survey

1 code implementation • 15 Apr 2024 • Xiao Wang, Shiao Wang, Yuhe Ding, Yuehang Li, Wentao Wu, Yao Rong, Weizhe Kong, Ju Huang, Shihao Li, Haoxiang Yang, Ziwen Wang, Bo Jiang, Chenglong Li, YaoWei Wang, Yonghong Tian, Jin Tang

In this paper, we give the first comprehensive review of these works and provide experimental comparisons and analysis to better demonstrate the features and advantages of SSMs.
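
For readers new to the topic, the building block these architectures share is a discretized linear state-space recurrence. The toy sketch below is a generic illustration of that recurrence (parameters are random placeholders; it is not code or notation taken from this survey).

```python
# Toy linear state-space recurrence: h_t = A h_{t-1} + B x_t,  y_t = C h_t
# (generic illustration of the SSM building block, not the survey's formulation).
import numpy as np

def ssm_scan(x, A, B, C):
    """Apply the recurrence along a 1-D input sequence x of length T.
       A: (N, N) state matrix, B: (N,) input map, C: (N,) output map."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:
        h = A @ h + B * x_t   # state update
        ys.append(C @ h)      # readout
    return np.array(ys)

# Example usage with random placeholder parameters.
rng = np.random.default_rng(0)
N, T = 4, 16
y = ssm_scan(rng.standard_normal(T), 0.9 * np.eye(N), rng.standard_normal(N), rng.standard_normal(N))
```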

Long-term Frame-Event Visual Tracking: Benchmark Dataset and Baseline

4 code implementations • 9 Mar 2024 • Xiao Wang, Ju Huang, Shiao Wang, Chuanming Tang, Bo Jiang, Yonghong Tian, Jin Tang, Bin Luo

Current event-/frame-event-based trackers are evaluated on short-term tracking datasets; however, real-world scenarios involve long-term tracking, and the performance of existing tracking algorithms in these scenarios remains unclear.

Object Tracking • RGB-T Tracking

Unleashing the Power of CNN and Transformer for Balanced RGB-Event Video Recognition

1 code implementation • 18 Dec 2023 • Xiao Wang, Yao Rong, Shiao Wang, Yuan Chen, Zhe Wu, Bo Jiang, Yonghong Tian, Jin Tang

It is intuitive to combine them for high-performance RGB-Event based video recognition; however, existing works fail to achieve a good balance between accuracy and model parameters.

Video Recognition
