Action Recognition

881 papers with code • 49 benchmarks • 105 datasets

Action Recognition is a computer vision task that involves recognizing human actions in videos or images. The goal is to classify the action being performed in the video or image into a predefined set of action classes.
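Concretely, a simple baseline treats a clip as a set of per-frame feature vectors, pools them over time, and classifies the pooled feature into one of the predefined action classes. The sketch below illustrates only this generic pipeline shape; the shapes, the random "features", and the untrained classifier are illustrative stand-ins, not any particular model from the papers listed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all shapes are illustrative): a "video" is T frames,
# each already encoded as a D-dimensional feature vector.
T, D, num_classes = 16, 32, 5
frame_features = rng.normal(size=(T, D))

# Common simple baseline: average-pool frame features over time,
# then apply a linear classifier over the predefined action classes.
clip_feature = frame_features.mean(axis=0)      # (D,)
W = rng.normal(size=(D, num_classes))           # classifier weights (untrained stub)
b = np.zeros(num_classes)

logits = clip_feature @ W + b
probs = np.exp(logits - logits.max())           # numerically stable softmax
probs /= probs.sum()

predicted_action = int(np.argmax(probs))
print(predicted_action, probs.shape)
```

Modern methods replace both the frame encoder and the temporal pooling with learned spatio-temporal modules, but the input/output contract — clip in, distribution over action classes out — is the same.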

In the video domain, it is an open question whether training an action classification network on a sufficiently large dataset will give a similar boost in performance when applied to a different temporal task or dataset. The challenges of building video datasets have meant that most popular benchmarks for action recognition are small, on the order of 10k videos.

Please note that some benchmarks may be listed under the Action Classification or Video Classification tasks, e.g. Kinetics-400.


Latest papers with no code

In My Perspective, In My Hands: Accurate Egocentric 2D Hand Pose and Action Recognition

no code yet • 14 Apr 2024

Our study aims to fill this research gap by exploring the field of 2D hand pose estimation for egocentric action recognition, making two contributions.

Exploring Explainability in Video Action Recognition

no code yet • 13 Apr 2024

To address these, we introduce Video-TCAV, which builds on TCAV for image classification and aims to quantify the importance of specific concepts in the decision-making process of video action recognition models.
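For context, the original TCAV recipe scores a concept by (1) learning a concept activation vector (CAV) that separates concept examples from random examples in a layer's activation space, and (2) measuring how often the target class score increases when activations move along that direction. The sketch below is a minimal, hedged illustration of that score with stubbed activations and gradients; it uses a difference-of-means direction as a cheap stand-in for the linear-classifier normal in the original formulation, and shows nothing of Video-TCAV's video-specific extensions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: D-dim activations of some layer for concept
# examples vs. random examples (both stubbed with random draws).
N, D = 100, 16
concept_acts = rng.normal(loc=1.0, size=(50, D))
random_acts = rng.normal(loc=0.0, size=(50, D))

# CAV: a direction separating concept from random activations. Here a
# difference of class means stands in for the linear classifier normal.
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav /= np.linalg.norm(cav)

# Per-input gradients of the target-class score w.r.t. the layer
# activations (stubbed; in practice these come from backprop).
grads = rng.normal(size=(N, D))

# TCAV score: fraction of inputs whose directional derivative along the
# CAV is positive, i.e. nudging activations toward the concept raises
# the class score.
tcav_score = float((grads @ cav > 0).mean())
print(tcav_score)
```

A score near 1 means the concept direction consistently pushes the class score up; near 0 means it consistently pushes it down.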

Multimodal Attack Detection for Action Recognition Models

no code yet • 13 Apr 2024

In addition, we analyze our method's real-time performance with different hardware setups to demonstrate its potential as a practical defense mechanism.

MSSTNet: A Multi-Scale Spatio-Temporal CNN-Transformer Network for Dynamic Facial Expression Recognition

no code yet • 12 Apr 2024

Our approach takes spatial features of different scales extracted by CNN and feeds them into a Multi-scale Embedding Layer (MELayer).

Simba: Mamba augmented U-ShiftGCN for Skeletal Action Recognition in Videos

no code yet • 11 Apr 2024

These spatial features then undergo intermediate temporal modeling facilitated by the Mamba block before progressing to the encoder section, which comprises vanilla upsampling Shift S-GCN blocks.

Fine-Grained Side Information Guided Dual-Prompts for Zero-Shot Skeleton Action Recognition

no code yet • 11 Apr 2024

However, previous works focus on bridging the known skeleton representation space and the semantic description space only at the coarse-grained level when recognizing unknown action categories. They ignore the fine-grained alignment of these two spaces, resulting in suboptimal performance at distinguishing high-similarity action categories.

O-TALC: Steps Towards Combating Oversegmentation within Online Action Segmentation

no code yet • 10 Apr 2024

In order to facilitate online action segmentation on a stream of incoming video data, we introduce two methods for improved training and inference of backbone action recognition models, allowing them to be deployed directly for online frame-level classification.

An Animation-based Augmentation Approach for Action Recognition from Discontinuous Video

no code yet • 10 Apr 2024

(3) By employing our data augmentation techniques, we match the performance obtained with the full real-world training set while using only 10% of the original data, and achieve better performance on in-the-wild videos.

X-VARS: Introducing Explainability in Football Refereeing with Multi-Modal Large Language Model

no code yet • 7 Apr 2024

The rapid advancement of artificial intelligence has led to significant improvements in automated decision-making.

Learning Correlation Structures for Vision Transformers

no code yet • 5 Apr 2024

We introduce a new attention mechanism, dubbed structural self-attention (StructSA), that leverages rich correlation patterns naturally emerging in key-query interactions of attention.
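The "correlation patterns in key-query interactions" that this snippet refers to are the pairwise score maps every self-attention layer already computes. As a baseline reference only, the sketch below shows standard scaled dot-product self-attention and exposes that (N, N) key-query score map explicitly; how StructSA actually exploits the structure of this map is not shown and is specific to the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def scaled_dot_product_attention(Q, K, V):
    """Standard self-attention. The (N, N) `scores` matrix is the
    key-query correlation map that structural variants build on."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # (N, N) correlation map
    scores -= scores.max(axis=-1, keepdims=True)     # stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy token sequence: N tokens with d-dim queries/keys/values.
N, d = 8, 16
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))
out, attn = scaled_dot_product_attention(Q, K, V)
print(out.shape, attn.shape)
```

Each row of `attn` is a probability distribution over the tokens, so row sums are 1; a structure-aware mechanism would process this map further rather than using it directly as mixing weights.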