Activity Recognition

254 papers with code • 4 benchmarks • 29 datasets

Human Activity Recognition is the problem of identifying actions performed by humans in a video input. It is typically formulated as a multiclass (or, in the simplest case, binary) classification problem that outputs activity class labels. Activity Recognition is an important problem with many societal applications including smart surveillance, video search/retrieval, intelligent robots, and other monitoring systems.

Source: Learning Latent Sub-events in Activity Videos Using Temporal Attention Filters
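As a concrete illustration of this classification formulation (not any particular paper's model), the sketch below average-pools per-frame features over time and scores them with a linear softmax classifier; the names `classify_video`, `W`, and `b` are assumptions made for the example.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify_video(frame_features, W, b, labels):
    """Score a video as a multiclass activity-classification problem.

    frame_features: (T, D) array of per-frame features.
    W, b: linear classifier parameters of shapes (D, C) and (C,).
    labels: list of C activity class names.
    """
    pooled = frame_features.mean(axis=0)   # temporal average pooling
    probs = softmax(pooled @ W + b)        # class probabilities
    return labels[int(np.argmax(probs))], probs
```

Any feature extractor (e.g. a CNN backbone) could produce `frame_features`; the point is only that the final decision is a distribution over a fixed set of activity labels.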

Latest papers with no code

MESEN: Exploit Multimodal Data to Design Unimodal Human Activity Recognition with Few Labels

no code yet • 2 Apr 2024

Human activity recognition (HAR) will be an essential function of various emerging applications.

HARMamba: Efficient Wearable Sensor Human Activity Recognition Based on Bidirectional Selective SSM

no code yet • 29 Mar 2024

Wearable sensor-based human activity recognition (HAR) is a critical research domain in activity perception.

Emotion Recognition from the perspective of Activity Recognition

no code yet • 24 Mar 2024

In this paper, we treat emotion recognition from the perspective of action recognition, exploring how deep learning architectures designed for action recognition can be applied to continuous affect recognition.

CODA: A COst-efficient Test-time Domain Adaptation Mechanism for HAR

no code yet • 22 Mar 2024

In recent years, emerging research on mobile sensing has led to novel scenarios that enhance daily life for humans, but dynamic usage conditions often result in performance degradation when systems are deployed in real-world settings.

Spatio-Temporal Proximity-Aware Dual-Path Model for Panoramic Activity Recognition

no code yet • 21 Mar 2024

Panoramic Activity Recognition (PAR) seeks to identify diverse human activities across different scales, from individual actions to social group and global activities in crowded panoramic scenes.

A Survey of IMU Based Cross-Modal Transfer Learning in Human Activity Recognition

no code yet • 17 Mar 2024

We also distinguish and expound on many related but inconsistently used terms in the literature, such as transfer learning, domain adaptation, representation learning, sensor fusion, and multimodal learning, and describe how cross-modal learning fits with all these concepts.

Generalized Relevance Learning Grassmann Quantization

no code yet • 14 Mar 2024

The proposed model returns a set of prototype subspaces and a relevance vector.
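The abstract does not spell out how the prototypes and relevance vector interact, but a common way to classify with prototype subspaces is a relevance-weighted distance built from principal angles. The sketch below assumes a chordal-style distance and hypothetical helper names (`subspace_distance`, `nearest_prototype`); it is an illustration, not the paper's method.

```python
import numpy as np

def subspace_distance(x_basis, proto_basis, relevance):
    """Chordal-style distance between two subspaces, weighted per principal angle.

    x_basis, proto_basis: orthonormal column bases of shape (D, d).
    relevance: (d,) nonnegative weights over the d principal angles
               (a stand-in for a learned relevance vector).
    """
    # singular values of U^T V are the cosines of the principal angles
    cosines = np.linalg.svd(x_basis.T @ proto_basis, compute_uv=False)
    cosines = np.clip(cosines, 0.0, 1.0)
    return float(np.sum(relevance * (1.0 - cosines**2)))

def nearest_prototype(x_basis, prototypes, relevance):
    """Index of the prototype subspace closest to x_basis."""
    dists = [subspace_distance(x_basis, p, relevance) for p in prototypes]
    return int(np.argmin(dists))
```

A relevance vector of all ones reduces this to the ordinary (projection) chordal distance; down-weighting an angle makes mismatch along that direction cheaper.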

P2LHAP: Wearable sensor-based human activity recognition, segmentation and forecast through Patch-to-Label Seq2Seq Transformer

no code yet • 13 Mar 2024

Traditional deep learning methods struggle to simultaneously segment, recognize, and forecast human activities from sensor data.
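A patch-to-label pipeline starts by cutting the multichannel sensor stream into fixed-length patches that play the role of sequence tokens. The sketch below shows only this generic patching step (with an assumed `to_patches` helper), not the paper's actual tokenizer or Transformer.

```python
import numpy as np

def to_patches(stream, patch_len, stride=None):
    """Split a (T, C) sensor stream into fixed-length patches ("tokens").

    Returns an array of shape (num_patches, patch_len, C); any tail that
    does not fill a whole patch is dropped. stride defaults to patch_len,
    i.e. non-overlapping patches; a smaller stride gives overlap.
    """
    stride = patch_len if stride is None else stride
    T = stream.shape[0]
    starts = range(0, T - patch_len + 1, stride)
    return np.stack([stream[s:s + patch_len] for s in starts])
```

Predicting one activity label per patch (rather than per sample) is what lets a single sequence model segment, recognize, and forecast in the same pass.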

Knowledge Transfer across Multiple Principal Component Analysis Studies

no code yet • 12 Mar 2024

In the first step, we integrate the shared subspace information across multiple studies with a proposed method called the Grassmannian barycenter, instead of directly performing PCA on the pooled dataset.
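One simple extrinsic way to realize such a barycenter of subspaces, shown here purely as an illustration and not necessarily the paper's construction, is to average the per-study projection matrices and keep the top eigenvectors of the mean projector.

```python
import numpy as np

def shared_subspace(bases, d):
    """Extrinsic average of subspaces: mean projector, then its top-d eigvecs.

    bases: list of (D, d_k) orthonormal bases, e.g. from per-study PCA.
    Returns a (D, d) orthonormal basis for the shared subspace.
    (A simple stand-in for a Grassmannian barycenter; illustration only.)
    """
    P = sum(V @ V.T for V in bases) / len(bases)   # average projection matrix
    evals, evecs = np.linalg.eigh(P)               # eigenvalues in ascending order
    return evecs[:, np.argsort(evals)[::-1][:d]]   # top-d eigenvectors
```

Unlike PCA on the pooled data, this only needs each study's subspace, not its raw samples, which is why such constructions suit multi-study knowledge transfer.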

Deep Generative Domain Adaptation with Temporal Relation Knowledge for Cross-User Activity Recognition

no code yet • 12 Mar 2024

To bridge this gap, our study introduces a Conditional Variational Autoencoder with Universal Sequence Mapping (CVAE-USM) approach, which addresses the unique challenges of time-series domain adaptation in HAR by relaxing the i.i.d. assumption.
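CVAE-USM itself is not described in this snippet, but any conditional VAE rests on two standard ingredients that can be sketched independently of the paper: the reparameterization trick and the KL regularizer toward a standard normal prior.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, diag(exp(log_var))) via z = mu + sigma * eps,
    the reparameterization trick that keeps sampling differentiable."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), the usual VAE regularizer."""
    return 0.5 * float(np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var))
```

A conditional variant simply feeds a condition (here, it would be something like the user or domain label) into the encoder and decoder alongside the data; the two functions above are unchanged.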