Activity Recognition
253 papers with code • 4 benchmarks • 29 datasets
Human Activity Recognition is the problem of identifying events performed by humans given a video input. It is typically formulated as a multi-class classification problem that outputs an activity class label for the input. Activity Recognition is an important problem with many societal applications, including smart surveillance, video search and retrieval, intelligent robots, and other monitoring systems.
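The classification formulation above can be sketched minimally: pool per-frame features over time, then score the pooled clip feature against each activity class. This is a hypothetical illustration (the feature extractor and the trained parameters `W`, `b` are assumed, not part of any specific paper on this page):

```python
import numpy as np

def classify_clip(frame_features, W, b):
    """Assign an activity class label to one video clip.

    frame_features: (T, D) array of per-frame features (from some backbone)
    W: (D, C) weights, b: (C,) bias -- hypothetical trained parameters
    Returns (predicted class index, class probabilities).
    """
    clip_feature = frame_features.mean(axis=0)   # temporal average pooling
    logits = clip_feature @ W + b                # linear classifier over C classes
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                         # softmax
    return int(np.argmax(probs)), probs
```

Real systems replace the average pooling with learned temporal models (attention, recurrence), which is exactly what several of the papers below study.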
Source: Learning Latent Sub-events in Activity Videos Using Temporal Attention Filters
Most implemented papers
Understanding and Improving Deep Neural Network for Activity Recognition
After that, we extracted the significant features related to the activities and sent the features to the DNN-based fusion model, which improved the classification rate to 96.1%.
Learning Actor Relation Graphs for Group Activity Recognition
To this end, we propose to build a flexible and efficient Actor Relation Graph (ARG) to simultaneously capture the appearance and position relation between actors.
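A relation graph of this kind can be sketched as an appearance-affinity matrix masked by spatial distance. This is a simplified illustration of the idea, not the paper's exact ARG construction (the distance threshold and dot-product affinity are assumptions):

```python
import numpy as np

def actor_relation_graph(features, positions, dist_thresh=0.3):
    """Build an (N, N) relation matrix over N actors.

    features: (N, D) appearance features; positions: (N, 2) box centres.
    Edges combine appearance affinity (dot product) with a position
    relation (only spatially nearby actor pairs are connected).
    """
    aff = features @ features.T                       # appearance relation
    dist = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    aff = np.where(dist < dist_thresh, aff, -np.inf)  # position relation mask
    G = np.exp(aff - aff.max(axis=-1, keepdims=True))
    return G / G.sum(axis=-1, keepdims=True)          # row-normalised graph
```

The normalised graph can then weight message passing between actor features for group-level reasoning.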
Specifying Weight Priors in Bayesian Deep Neural Networks with Empirical Bayes
We propose MOdel Priors with Empirical Bayes using DNN (MOPED) method to choose informed weight priors in Bayesian neural networks.
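The empirical-Bayes idea can be sketched as follows: centre the weight prior at the pretrained deterministic (MAP) weights, with a scale proportional to their magnitude. This is a minimal sketch in the spirit of MOPED; the scale rule and the `delta` hyperparameter are stated as assumptions, not the paper's full method:

```python
import numpy as np

def moped_prior(w_map, delta=0.1):
    """Informed Gaussian weight prior from pretrained DNN weights.

    w_map: weights of a trained deterministic network (MAP estimate).
    Returns (mu, sigma): prior mean equals the pretrained weight,
    prior std scales with its magnitude (assumed rule, delta=0.1).
    """
    mu = np.asarray(w_map, dtype=float)
    sigma = delta * np.abs(mu)
    return mu, sigma
```

A Bayesian neural network initialised with such priors typically converges faster than one with uninformative zero-mean priors.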
Human activity recognition from skeleton poses
Human Action Recognition is an important task of Human Robot Interaction as cooperation between robots and humans requires that artificial agents recognise complex cues from the environment.
Convolutional Tensor-Train LSTM for Spatio-temporal Learning
Learning from spatio-temporal data has numerous applications such as human-behavior analysis, object tracking, video compression, and physics simulation. However, existing methods still perform poorly on challenging video tasks such as long-term forecasting.
Gimme Signals: Discriminative signal encoding for multimodal activity recognition
We present a simple, yet effective and flexible method for action recognition supporting multiple sensor modalities.
Human Activity Recognition from Wearable Sensor Data Using Self-Attention
In this regard, existing recurrent, convolutional, and hybrid models for activity recognition struggle to capture spatio-temporal context from the feature space of sensor-reading sequences.
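Self-attention addresses this by letting every time step attend to every other step of the sensor sequence directly. A minimal single-head sketch (the projection matrices `Wq`, `Wk`, `Wv` are hypothetical learned parameters, not the paper's architecture):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sensor sequence.

    X: (T, D) sequence of sensor readings (or their embeddings).
    Returns (T, D') context-aware features: each time step is a
    weighted mix of all time steps.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # (T, T) similarities
    A = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A /= A.sum(axis=-1, keepdims=True)             # softmax over time
    return A @ V
```

Unlike a recurrent model, the attention weights give every pair of time steps a direct path, so long-range context does not have to survive many recurrence steps.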
Sequential Weakly Labeled Multi-Activity Localization and Recognition on Wearable Sensors using Recurrent Attention Networks
Recently, several attention mechanisms have been proposed to handle weakly labeled human activity data, which do not require accurate data annotation.
3D Human Shape and Pose from a Single Low-Resolution Image with Self-Supervised Learning
3D human shape and pose estimation from monocular images has been an active area of research in computer vision, having a substantial impact on the development of new applications, from activity recognition to creating virtual avatars.
DANA: Dimension-Adaptive Neural Architecture for Multivariate Sensor Data
We introduce a dimension-adaptive pooling (DAP) layer that makes DNNs flexible and more robust to changes in sensor availability and in sampling rate.
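The core trick of such a pooling layer can be sketched simply: divide the time axis into a fixed number of bins regardless of input length, and pool within each bin, so downstream layers always see the same output size. A minimal sketch (not the paper's exact DAP layer; it assumes the window has at least `out_t` samples):

```python
import numpy as np

def dimension_adaptive_pool(x, out_t=4):
    """Pool a (T, D) sensor window to a fixed (out_t, D) output.

    Works for any T >= out_t, so the same network can consume
    windows recorded at different sampling rates.
    """
    T, D = x.shape
    edges = np.linspace(0, T, out_t + 1).astype(int)  # bin boundaries
    return np.stack([x[edges[i]:edges[i + 1]].max(axis=0)  # max-pool each bin
                     for i in range(out_t)])
```

Because the output shape no longer depends on `T`, the same classifier head handles, say, 50 Hz and 100 Hz recordings without retraining-time reshaping.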