Search Results for author: Yuecong Min

Found 5 papers, 3 papers with code

CoSign: Exploring Co-occurrence Signals in Skeleton-based Continuous Sign Language Recognition

no code implementations ICCV 2023 Peiqi Jiao, Yuecong Min, Yanan Li, Xiaotao Wang, Lei Lei, Xilin Chen

The co-occurrence signals (e.g., hand shape, facial expression, and lip pattern) play a critical role in Continuous Sign Language Recognition (CSLR).

Sign Language Recognition · Visual Grounding

Visual Alignment Constraint for Continuous Sign Language Recognition

2 code implementations ICCV 2021 Yuecong Min, Aiming Hao, Xiujuan Chai, Xilin Chen

Specifically, the proposed VAC comprises two auxiliary losses: one focuses on visual features only, and the other enforces prediction alignment between the feature extractor and the alignment module.
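A minimal PyTorch sketch of what such a two-loss setup could look like, assuming frame-wise gloss logits from both the feature extractor and the alignment module; the tensor names, loss weighting, and distillation temperature are placeholders rather than the paper's exact formulation:

```python
# Sketch of a VAC-style training objective (hypothetical names and weights).
import torch
import torch.nn.functional as F

def vac_loss(visual_logits, context_logits, targets,
             input_lengths, target_lengths, temperature=8.0):
    """visual_logits / context_logits: (T, N, C) frame-wise gloss logits from
    the visual feature extractor and the contextual (alignment) module."""
    # Primary CTC loss on the alignment module's predictions.
    main_ctc = F.ctc_loss(context_logits.log_softmax(-1), targets,
                          input_lengths, target_lengths)
    # Auxiliary loss 1: CTC supervision applied to the visual features only.
    ve_ctc = F.ctc_loss(visual_logits.log_softmax(-1), targets,
                        input_lengths, target_lengths)
    # Auxiliary loss 2: align the feature extractor's predictions with the
    # (detached) contextual predictions via a softened KL divergence.
    va_kl = F.kl_div(
        F.log_softmax(visual_logits / temperature, dim=-1),
        F.softmax(context_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return main_ctc + ve_ctc + va_kl
```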

Sign Language Recognition

Self-Mutual Distillation Learning for Continuous Sign Language Recognition

1 code implementation ICCV 2021 Aiming Hao, Yuecong Min, Xilin Chen

A typical network for CSLR combines a visual module, which focuses on spatial and short-term temporal information, with a contextual module, which focuses on long-term temporal information; the Connectionist Temporal Classification (CTC) loss is adopted to train the network.
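A minimal PyTorch sketch of this typical pipeline, with hypothetical layer sizes: a per-frame visual module with a short-window temporal convolution, a BiLSTM contextual module, and frame-wise log-probabilities shaped for CTC training:

```python
# Sketch of a visual-module + contextual-module CSLR network trained with CTC.
import torch
import torch.nn as nn

class CSLRNet(nn.Module):
    def __init__(self, num_gloss=1000, feat_dim=512):
        super().__init__()
        # Visual module: per-frame 2D CNN followed by a short-window 1D conv.
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim))
        self.local_temporal = nn.Conv1d(feat_dim, feat_dim, kernel_size=5, padding=2)
        # Contextual module: BiLSTM over the whole sequence.
        self.context = nn.LSTM(feat_dim, feat_dim // 2,
                               bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(feat_dim, num_gloss + 1)  # +1 for CTC blank

    def forward(self, frames):  # frames: (N, T, 3, H, W)
        n, t = frames.shape[:2]
        f = self.frame_cnn(frames.flatten(0, 1)).view(n, t, -1)     # (N, T, D)
        f = self.local_temporal(f.transpose(1, 2)).transpose(1, 2)  # (N, T, D)
        c, _ = self.context(f)                                      # (N, T, D)
        return self.classifier(c).log_softmax(-1).transpose(0, 1)   # (T, N, C)
```

The returned (T, N, C) log-probabilities can be passed to `nn.CTCLoss` together with the gloss targets and the input/target sequence lengths.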

Knowledge Distillation · Sign Language Recognition

An Efficient PointLSTM for Point Clouds Based Gesture Recognition

1 code implementation CVPR 2020 Yuecong Min, Yanxiao Zhang, Xiujuan Chai, Xilin Chen

The proposed PointLSTM combines state information from neighboring points in the past with current features to update the current states via a weight-shared LSTM layer.
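A minimal sketch of this update rule under simplifying assumptions (mean-pooled neighbour states and a plain k-nearest-neighbour grouping, which are illustrative choices rather than the paper's exact design):

```python
# Sketch of a PointLSTM-style update: each point's state is refreshed by a
# weight-shared LSTM cell that mixes the current point feature with states
# gathered from nearby points in the previous frame.
import torch
import torch.nn as nn

class PointLSTMSketch(nn.Module):
    def __init__(self, feat_dim=64, hidden_dim=128, k=4):
        super().__init__()
        self.k = k
        self.cell = nn.LSTMCell(feat_dim, hidden_dim)  # shared across all points

    def forward(self, xyz_seq, feat_seq):
        # xyz_seq: (T, N, 3) point coordinates, feat_seq: (T, N, F) point features
        num_frames, num_points, _ = xyz_seq.shape
        h = feat_seq.new_zeros(num_points, self.cell.hidden_size)
        c = feat_seq.new_zeros(num_points, self.cell.hidden_size)
        outputs = []
        for t in range(num_frames):
            if t > 0:
                # For each current point, average the states of its k nearest
                # neighbours in the previous frame.
                dist = torch.cdist(xyz_seq[t], xyz_seq[t - 1])   # (N, N)
                idx = dist.topk(self.k, largest=False).indices   # (N, k)
                h = h[idx].mean(dim=1)
                c = c[idx].mean(dim=1)
            h, c = self.cell(feat_seq[t], (h, c))                # shared weights
            outputs.append(h)
        return torch.stack(outputs)                              # (T, N, hidden)
```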

Hand Gesture Recognition
