1 code implementation • ICCV 2023 • Bruce X. B. Yu, Zhi Zhang, Yongxu Liu, Sheng-hua Zhong, Yan Liu, Chang Wen Chen
3D human pose lifting is a promising research direction for this task, in which both estimated poses and ground-truth poses are used as training data.
Ranked #1 on 3D Human Pose Estimation on HumanEva-I
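The lifting setup described in the snippet can be illustrated with a minimal sketch: a small network maps 2D keypoints to 3D joints, trained on either estimated or ground-truth 2D inputs. The plain MLP and placeholder tensors below are assumptions for illustration, not the paper's actual model.

```python
# Minimal sketch of 2D-to-3D pose lifting (illustrative MLP, not the
# paper's architecture): map J 2D keypoints to J 3D joint positions.
import torch
import torch.nn as nn

J = 17  # number of joints (Human3.6M / HumanEva-style skeletons)

class PoseLifter(nn.Module):
    def __init__(self, num_joints=J, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_joints * 3),
        )

    def forward(self, pose_2d):                 # (B, J, 2)
        b = pose_2d.shape[0]
        out = self.net(pose_2d.reshape(b, -1))  # flatten keypoints
        return out.reshape(b, J, 3)             # (B, J, 3)

# Training step: the 2D input may come from a 2D pose estimator or from
# ground-truth 2D annotations; the target is the ground-truth 3D pose.
model = PoseLifter()
pose_2d = torch.randn(8, J, 2)   # placeholder batch of 2D poses
pose_3d = torch.randn(8, J, 3)   # placeholder ground-truth 3D poses
loss = nn.functional.mse_loss(model(pose_2d), pose_3d)
loss.backward()
```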
no code implementations • 10 May 2023 • Bruce X. B. Yu, Jianlong Chang, Haixin Wang, Lingbo Liu, Shijie Wang, Zhiyu Wang, Junfan Lin, Lingxi Xie, Haojie Li, Zhouchen Lin, Qi Tian, Chang Wen Chen
With the rapid development of pre-trained visual foundation models, visual tuning has moved beyond the standard practice of fine-tuning the whole pre-trained model or just the fully connected layer.
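The contrast drawn in the snippet can be sketched as follows: rather than updating the whole backbone or only the final fully connected layer, a parameter-efficient alternative freezes the backbone and trains a small inserted module plus the task head. The bottleneck adapter below is a generic, assumed example, not a specific method from the survey.

```python
# Sketch of parameter-efficient visual tuning (generic adapter, assumed
# for illustration): freeze the backbone, train only a small bottleneck
# adapter and the new task head.
import torch
import torch.nn as nn
from torchvision.models import resnet50

backbone = resnet50(weights=None)  # stand-in for pre-trained weights
for p in backbone.parameters():
    p.requires_grad = False        # freeze the foundation model

class Adapter(nn.Module):
    """Bottleneck adapter with a residual connection."""
    def __init__(self, dim=2048, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

adapter = Adapter()
head = nn.Linear(2048, 10)    # new task head (10 classes, illustrative)
backbone.fc = nn.Identity()   # expose 2048-d backbone features

x = torch.randn(4, 3, 224, 224)
logits = head(adapter(backbone(x)))
trainable = list(adapter.parameters()) + list(head.parameters())
print(sum(p.numel() for p in trainable), "trainable parameters")
```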
1 code implementation • 3 Oct 2022 • Bruce X. B. Yu, Jianlong Chang, Lingbo Liu, Qi Tian, Chang Wen Chen
Towards this goal, we propose a framework with a unified view of PETL, called visual-PETL (V-PETL), to investigate the effects of different PETL techniques, the data scales of downstream domains, the positions of trainable parameters, and other aspects affecting the trade-off.
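One axis the snippet mentions, the position of trainable parameters, can be made concrete with a hypothetical sweep: unfreeze different parts of a frozen backbone and compare the resulting trainable-parameter budgets. The choice of ViT blocks below is an assumption for illustration, not V-PETL's actual experiment.

```python
# Hypothetical illustration of one V-PETL axis: where trainable
# parameters sit. Unfreeze different ViT blocks and compare budgets.
from torchvision.models import vit_b_16

def trainable_count(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

for position in ["head_only", "last_block", "first_block"]:
    model = vit_b_16(weights=None)        # stand-in for a pre-trained ViT
    for p in model.parameters():
        p.requires_grad = False           # freeze everything first
    if position == "last_block":
        blocks = [model.encoder.layers[-1]]
    elif position == "first_block":
        blocks = [model.encoder.layers[0]]
    else:
        blocks = []
    for block in blocks:
        for p in block.parameters():
            p.requires_grad = True
    for p in model.heads.parameters():    # task head is always trained
        p.requires_grad = True
    print(f"{position:12s}: {trainable_count(model):,} trainable params")
```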
no code implementations • 22 Aug 2022 • Lingbo Liu, Jianlong Chang, Bruce X. B. Yu, Liang Lin, Qi Tian, Chang-Wen Chen
Previous methods usually fine-tune the entire network for each specific dataset, making it burdensome to store the massive parameters of all these networks.
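The storage burden described above can be quantified with a back-of-envelope comparison; all numbers below are illustrative assumptions, not figures from the paper.

```python
# Back-of-envelope storage comparison (all numbers are assumptions):
# one full backbone copy per dataset vs. a shared frozen backbone
# plus one small tunable module per dataset.
backbone_params = 86e6   # e.g., a ViT-B-sized backbone
adapter_params = 0.5e6   # small tunable module per dataset
num_datasets = 10
bytes_per_param = 4      # float32

full_ft = num_datasets * backbone_params * bytes_per_param
petl = (backbone_params + num_datasets * adapter_params) * bytes_per_param
print(f"full fine-tuning per dataset: {full_ft / 1e9:.2f} GB")   # ~3.44 GB
print(f"shared backbone + adapters:  {petl / 1e9:.2f} GB")       # ~0.36 GB
```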
no code implementations • Pattern Recognition 2021 • Bruce X. B. Yu, Yan Liu, Keith C. C. Chan, Qintai Yang, Xiaoying Wang
In this paper, we propose a two-task graph convolutional network (2T-GCN) to represent skeleton data for HAE tasks involving abnormality detection and quality evaluation.
Ranked #2 on Action Assessment on EHE
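The two-task idea pairs a classification objective (abnormality detection) with a regression objective (quality evaluation) over shared skeleton features. The sketch below uses a placeholder encoder and two heads; it is an assumed illustration, not the paper's 2T-GCN architecture.

```python
# Sketch of the two-task setup: a shared skeleton encoder feeding an
# abnormality-detection head and a quality-evaluation head. The MLP
# encoder is a placeholder, not the paper's graph convolutional network.
import torch
import torch.nn as nn

class TwoTaskSkeletonNet(nn.Module):
    def __init__(self, num_joints=25, coord_dim=3, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(       # stand-in for a GCN backbone
            nn.Flatten(),
            nn.Linear(num_joints * coord_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        self.abnormality_head = nn.Linear(feat_dim, 2)  # normal vs. abnormal
        self.quality_head = nn.Linear(feat_dim, 1)      # quality score

    def forward(self, skeleton):            # (B, num_joints, coord_dim)
        feat = self.encoder(skeleton)
        return self.abnormality_head(feat), self.quality_head(feat)

model = TwoTaskSkeletonNet()
logits, score = model(torch.randn(4, 25, 3))
# Joint training would sum a cross-entropy loss on `logits`
# and a regression loss on `score`.
```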
no code implementations • 29 Apr 2020 • Bruce X. B. Yu, Yan Liu, Keith C. C. Chan
Data-driven approaches that learn an optimal representation from visual data such as skeleton sequences or RGB videos are currently the dominant paradigm for activity recognition.
no code implementations • 29 Apr 2020 • Bruce X. B. Yu, Yan Liu, Keith C. C. Chan
To do so, we propose a HAR method that consists of three steps: (i) data transformation, which generates new features by transforming the raw data; (ii) feature extraction, which learns a classifier over the transformed features with the AdaBoost algorithm; and (iii) parameter determination and pattern recognition, which derives parameters from the features produced in (ii) and uses them as training data for deep learning algorithms that recognize human activities.
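The three-step pipeline can be sketched end to end. The raw data, the feature transforms, and the way the AdaBoost outputs feed the deep model are all assumptions for illustration, since the snippet does not specify them.

```python
# Hedged sketch of the three-step pipeline; the helpers and feature
# choices below are illustrative assumptions, not the paper's specifics.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
raw = rng.normal(size=(200, 50))       # 200 windows of raw sensor data
labels = rng.integers(0, 3, size=200)  # 3 activity classes

# (i) data transformation: derive new features from the raw windows
feats = np.stack([raw.mean(1), raw.std(1), raw.min(1), raw.max(1)], axis=1)

# (ii) feature extraction: learn an AdaBoost classifier on the
# transformed features
ada = AdaBoostClassifier(n_estimators=50).fit(feats, labels)

# (iii) use AdaBoost-derived features (here, per-estimator predictions)
# as training data for a deep model that recognizes the activities
boosted = np.stack([est.predict(feats) for est in ada.estimators_], axis=1)
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
mlp.fit(boosted, labels)
print("train accuracy:", mlp.score(boosted, labels))
```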