Fine-grained Action Recognition
14 papers with code • 0 benchmarks • 1 dataset
Latest papers
Understanding Long Videos in One Multimodal Language Model Pass
In addition to faster inference, we find that the resulting models yield surprisingly good accuracy on long-video tasks, even with no video-specific information.
Fine-grained Action Analysis: A Multi-modality and Multi-task Dataset of Figure Skating
MMFS supports both action recognition and action quality assessment: it captures RGB and skeleton data, together with action scores, from 11,671 clips spanning 256 categories with spatial and temporal labels.
Real-time Action Recognition for Fine-Grained Actions and The Hand Wash Dataset
In this paper we present a three-stream algorithm for real-time action recognition and a new dataset of hand-wash videos, with the intent of aligning action recognition with real-world constraints so that it yields practically useful results.
Revealing Single Frame Bias for Video-and-Language Learning
Training an effective video-and-language model intuitively requires multiple frames as model inputs.
Video Pose Distillation for Few-Shot, Fine-Grained Sports Action Recognition
This leads to poor accuracy when downstream tasks, such as action recognition, depend on pose.
Few-Shot Fine-Grained Action Recognition via Bidirectional Attention and Contrastive Meta-Learning
Fine-grained action recognition is attracting increasing attention due to the emerging demand for specific action understanding in real-world applications, yet data for rare fine-grained categories is very limited.
Sharing Pain: Using Pain Domain Transfer for Video Recognition of Low Grade Orthopedic Pain in Horses
Moreover, we present a human expert baseline for the problem, as well as an extensive empirical study of various domain transfer methods and of what the pain recognition method, trained on clean experimental pain, detects in the orthopedic dataset.
Few-shot Action Recognition with Prototype-centered Attentive Learning
Extensive experiments on four standard few-shot action benchmarks show that our method clearly outperforms previous state-of-the-art methods, with a particularly significant improvement (over 10%) on the most challenging fine-grained action recognition benchmark.
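The prototype-centered idea here is in the spirit of prototypical networks: each class in a few-shot episode is represented by the mean of its support embeddings, and queries are assigned to the nearest prototype. The following is a minimal numpy sketch of that general scheme (with made-up toy embeddings), not the attentive-learning method the paper itself proposes:

```python
import numpy as np

def build_prototypes(support, labels):
    # One prototype per class: the mean of that class's support embeddings
    classes = np.unique(labels)
    protos = np.stack([support[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(queries, classes, protos):
    # Assign each query to the class of its nearest prototype (Euclidean distance)
    dists = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way, 2-shot episode with 2-D embeddings (hypothetical values)
support = np.array([[0.0, 0.0], [0.2, 0.0],
                    [5.0, 5.0], [5.2, 5.0]])
labels = np.array([0, 0, 1, 1])
classes, protos = build_prototypes(support, labels)
preds = classify(np.array([[0.1, 0.1], [5.1, 4.9]]), classes, protos)
# preds -> [0, 1]
```

In a real few-shot action pipeline the embeddings would come from a learned video encoder; the prototype-and-nearest-neighbour step itself stays this simple.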
Attention-Based Context Aware Reasoning for Situation Recognition
However, existing query-based reasoning methods have not considered handling of inter-dependent queries which is a unique requirement of semantic role prediction in SR.
Multi-Modal Domain Adaptation for Fine-Grained Action Recognition
We then combine adversarial training with multi-modal self-supervision, showing that our approach outperforms other UDA methods by 3%.