Search Results for author: Jingcheng Li

Found 2 papers, 1 paper with code

Distilled Mid-Fusion Transformer Networks for Multi-Modal Human Activity Recognition

no code implementations • 5 May 2023 • Jingcheng Li, Lina Yao, Binghao Li, Claude Sammut

Knowledge distillation is then applied to transfer the learned representation from the teacher model to a simpler DMFT student model, which consists of a lightweight version of the multi-modal spatial-temporal transformer module, to produce the final results.
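The teacher-to-student transfer described above can be sketched with a standard knowledge-distillation loss (temperature-softened teacher probabilities matched by the student via KL divergence). This is a minimal, generic illustration of the technique, not the paper's actual DMFT implementation; the function names, logits, and temperature value are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the
    # distribution, exposing the teacher's "dark knowledge".
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the softened teacher and student
    # distributions, scaled by T^2 as in Hinton et al.'s formulation.
    p = softmax(teacher_logits, temperature)  # teacher (target)
    q = softmax(student_logits, temperature)  # student (prediction)
    return (temperature ** 2) * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q)
    )

# Illustrative logits for one activity-classification sample
teacher_logits = [2.0, 0.5, -1.0]
student_logits = [1.5, 0.8, -0.5]
loss = distillation_loss(student_logits, teacher_logits)
```

In practice this term is typically combined with an ordinary cross-entropy loss on the ground-truth labels, so the student learns from both the hard targets and the teacher's softened predictions.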

Feature Engineering · Human Activity Recognition · +1
