Search Results for author: Liangtao Shi

Found 3 papers, 1 paper with code

Sparse-Tuning: Adapting Vision Transformers with Efficient Fine-tuning and Inference

no code implementations • 23 May 2024 • Ting Liu, Xuyang Liu, Liangtao Shi, Zunnan Xu, Siteng Huang, Yi Xin, Quanjun Yin

Sparse-Tuning efficiently fine-tunes the pre-trained ViT by sparsely preserving the informative tokens and merging redundant ones, enabling the ViT to focus on the foreground while reducing computational costs on background regions in the images.
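
To make the idea in the snippet above concrete, here is a minimal, hypothetical sketch (not the authors' released code) of keeping the most informative ViT tokens and merging the redundant ones into a single summary token. Scoring tokens by the [CLS] attention column and the weighted-average merge are assumptions for illustration.

```python
import torch

def sparsify_tokens(tokens: torch.Tensor, cls_attn: torch.Tensor, keep: int) -> torch.Tensor:
    """tokens: (B, N, D) patch tokens; cls_attn: (B, N) attention from [CLS] to each patch."""
    order = cls_attn.argsort(dim=1, descending=True)               # rank tokens by informativeness
    keep_idx, drop_idx = order[:, :keep], order[:, keep:]          # split into kept / redundant
    B, _, D = tokens.shape
    kept = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(B, keep, D))
    dropped = torch.gather(tokens, 1, drop_idx.unsqueeze(-1).expand(B, drop_idx.size(1), D))
    w = torch.gather(cls_attn, 1, drop_idx).unsqueeze(-1)          # weights for redundant tokens
    merged = (dropped * w).sum(dim=1, keepdim=True) / w.sum(dim=1, keepdim=True).clamp_min(1e-6)
    return torch.cat([kept, merged], dim=1)                        # (B, keep + 1, D)

# Example: 196 patch tokens reduced to 49 kept tokens plus one merged background token.
x = torch.randn(2, 196, 768)
attn = torch.rand(2, 196)
print(sparsify_tokens(x, attn, keep=49).shape)                     # torch.Size([2, 50, 768])
```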

Autoregressive Queries for Adaptive Tracking with Spatio-Temporal Transformers

no code implementations • 15 Mar 2024 • Jinxia Xie, Bineng Zhong, Zhiyi Mo, Shengping Zhang, Liangtao Shi, Shuxiang Song, Rongrong Ji

Firstly, we introduce a set of learnable and autoregressive queries to capture the instantaneous target appearance changes in a sliding window fashion.

Visual Tracking
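
A simplified, hypothetical sketch of the "learnable and autoregressive queries" described above, not the paper's implementation: a small set of queries is carried from frame to frame and refined by cross-attending to each new frame's features, with a short sliding window of past query states. The module name, query count, and window size are assumptions.

```python
import torch
import torch.nn as nn

class AutoregressiveQueries(nn.Module):
    def __init__(self, num_queries=8, dim=256, window=4):
        super().__init__()
        self.init_queries = nn.Parameter(torch.randn(num_queries, dim))  # learnable initial queries
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.window = window
        self.history = []  # sliding window of past query states

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        """frame_feats: (B, N, dim) tokens of the current frame."""
        B = frame_feats.size(0)
        # Start from the previous frame's queries (autoregressive), or the learnable init.
        q = self.history[-1] if self.history else self.init_queries.unsqueeze(0).expand(B, -1, -1)
        q, _ = self.cross_attn(q, frame_feats, frame_feats)   # refine queries with the current frame
        self.history.append(q.detach())                       # keep only the last few frames
        self.history = self.history[-self.window:]
        return q                                              # queries encode the latest target appearance

track_queries = AutoregressiveQueries()
for t in range(6):                                            # simulate a short video clip
    q = track_queries(torch.randn(1, 64, 256))
print(q.shape)                                                # torch.Size([1, 8, 256])
```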

Explicit Visual Prompts for Visual Object Tracking

1 code implementation • 6 Jan 2024 • Liangtao Shi, Bineng Zhong, Qihua Liang, Ning li, Shengping Zhang, Xianxian Li

Specifically, we utilize spatio-temporal tokens to propagate information between consecutive frames without focusing on updating templates.

Object, Visual Object Tracking +1
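
A minimal sketch of the propagation idea in the snippet above, under stated assumptions rather than the released code: a few spatio-temporal tokens are concatenated with the template and search-region tokens of each frame, processed jointly, and then carried forward to the next frame instead of updating the template. The class name, token count, and encoder depth are illustrative.

```python
import torch
import torch.nn as nn

class SpatioTemporalPropagation(nn.Module):
    def __init__(self, num_st_tokens=4, dim=256):
        super().__init__()
        self.st_init = nn.Parameter(torch.randn(1, num_st_tokens, dim))         # initial tokens
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.num_st_tokens = num_st_tokens

    def forward(self, template: torch.Tensor, search: torch.Tensor, carried=None):
        """template/search: (B, N, dim) frame tokens; carried: tokens from the previous frame."""
        B = search.size(0)
        st_tokens = carried if carried is not None else self.st_init.expand(B, -1, -1)
        x = torch.cat([st_tokens, template, search], dim=1)    # joint sequence for this frame
        x = self.encoder(x)
        updated = x[:, :self.num_st_tokens]                    # refreshed spatio-temporal tokens
        return updated.detach(), x                             # carry `updated` to the next frame

model = SpatioTemporalPropagation()
carried = None
for t in range(3):                                             # fixed template, per-frame search region
    carried, feats = model(torch.randn(1, 64, 256), torch.randn(1, 256, 256), carried)
print(carried.shape)                                           # torch.Size([1, 4, 256])
```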
