Multimodal Activity Recognition

12 papers with code • 10 benchmarks • 7 datasets


OPERAnet: A Multimodal Activity Recognition Dataset Acquired from Radio Frequency and Vision-based Sensors

rogetk/oddet • 8 Oct 2021

This dataset can be exploited to advance WiFi- and vision-based HAR, for example using pattern recognition, skeletal representations, deep learning algorithms, or other novel approaches to accurately recognize human activities.

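As a hedged illustration of the late-fusion approach such a dataset enables (the feature dimensions, six-class setup, and linear softmax classifier below are illustrative assumptions, not part of OPERAnet):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-window features: 64-dim WiFi CSI statistics and a
# 75-dim vision-based skeleton vector (25 joints x 3 coordinates).
csi_feat = rng.normal(size=(100, 64))    # 100 windows of RF features
skel_feat = rng.normal(size=(100, 75))   # matching skeleton features
labels = rng.integers(0, 6, size=100)    # 6 activity classes (assumed)

# Late fusion by concatenation, then a linear softmax classifier (placeholder model).
x = np.concatenate([csi_feat, skel_feat], axis=1)
w = rng.normal(scale=0.01, size=(x.shape[1], 6))
logits = x @ w
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
print("predicted activity for window 0:", probs[0].argmax())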

Fusion-GCN: Multimodal Action Recognition using Graph Convolutional Networks

mduhme/fusion-gcn • 27 Sep 2021

In this paper, we present Fusion-GCN, an approach for multimodal action recognition using Graph Convolutional Networks (GCNs).

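A minimal sketch of the core idea, folding an extra modality into a skeleton graph by appending its features to every node and applying one graph convolution (the toy chain adjacency, dimensions, and single layer are assumptions, not the paper's architecture):

import numpy as np

rng = np.random.default_rng(0)
num_joints = 25

# Toy skeleton adjacency: a chain of joints plus self-loops
# (a real skeleton graph follows the body topology).
a = np.eye(num_joints)
for i in range(num_joints - 1):
    a[i, i + 1] = a[i + 1, i] = 1

# Symmetric normalization: A_hat = D^-1/2 A D^-1/2.
d = a.sum(axis=1)
a_hat = a / np.sqrt(np.outer(d, d))

# Node features: 3-D joint coordinates with a 6-D IMU feature vector appended
# to every node, one possible feature-level fusion in the spirit of Fusion-GCN.
coords = rng.normal(size=(num_joints, 3))
imu = rng.normal(size=(6,))
x = np.concatenate([coords, np.tile(imu, (num_joints, 1))], axis=1)  # (25, 9)

# One graph-convolution layer with ReLU: X' = relu(A_hat X W).
w = rng.normal(scale=0.1, size=(9, 16))
x_next = np.maximum(a_hat @ x @ w, 0)
print(x_next.shape)  # (25, 16)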

Distilling Audio-Visual Knowledge by Compositional Contrastive Learning

yanbeic/CCL • CVPR 2021 • 22 Apr 2021

Having access to multi-modal cues (e.g., vision and audio) enables some cognitive tasks to be performed faster than learning from a single modality.

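A minimal sketch of the contrastive distillation ingredient: an InfoNCE-style loss pulling a video-only student toward an audio-visual teacher (the compositional, class-aware part of CCL is omitted; batch size and embedding width are assumptions):

import numpy as np

rng = np.random.default_rng(0)

def info_nce(student, teacher, tau=0.07):
    """Contrastive loss pulling each student embedding toward its own
    teacher embedding and away from the other samples in the batch."""
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    logits = s @ t.T / tau                       # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs.diagonal().mean()          # positives on the diagonal

student = rng.normal(size=(8, 128))  # video-only student embeddings
teacher = rng.normal(size=(8, 128))  # audio-visual teacher embeddings
print(info_nce(student, teacher))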

Gimme Signals: Discriminative signal encoding for multimodal activity recognition

raphaelmemmesheimer/gimme_signals_action_recognition • 13 Mar 2020

We present a simple yet effective and flexible method for action recognition that supports multiple sensor modalities.

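A minimal sketch of the signal-to-image encoding idea: normalizing a multichannel time series into a fixed-size grayscale image that an off-the-shelf image CNN could then classify (channel count, resolution, and nearest-neighbour resampling are assumptions, not the paper's rendering scheme):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input: 30 signal channels (e.g. joint coordinates or IMU axes)
# observed over 150 frames.
signals = rng.normal(size=(30, 150))

def encode_as_image(signals, height=64, width=64):
    """Normalize and resample a multichannel signal into a fixed-size
    grayscale image suitable for a standard image classifier."""
    c, t = signals.shape
    # Min-max normalize each channel to [0, 1].
    lo = signals.min(axis=1, keepdims=True)
    hi = signals.max(axis=1, keepdims=True)
    norm = (signals - lo) / np.maximum(hi - lo, 1e-8)
    # Nearest-neighbour resample onto the target image grid.
    rows = np.arange(height) * c // height
    cols = np.arange(width) * t // width
    return (norm[np.ix_(rows, cols)] * 255).astype(np.uint8)

img = encode_as_image(signals)
print(img.shape, img.dtype)  # (64, 64) uint8, ready for an image CNN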

Bayesian Hierarchical Dynamic Model for Human Action Recognition

rort1989/HDM • CVPR 2019 • 01 Jun 2019

Human action recognition remains a challenging task, partly due to the large variations in how actions are executed.

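A toy generative sketch of the hierarchical intuition: a class-level dynamics matrix perturbed per performance to model execution variation (this simple linear-Gaussian stand-in is an assumption, not the paper's model):

import numpy as np

rng = np.random.default_rng(0)

# Each action class has a latent linear dynamics matrix; each performance of
# the action perturbs it, one way to capture execution variation.
dim = 4  # latent state dimension (assumed)
class_dynamics = 0.9 * np.eye(dim) + 0.05 * rng.normal(size=(dim, dim))

def sample_sequence(class_a, length=20, seq_noise=0.02):
    a = class_a + seq_noise * rng.normal(size=class_a.shape)  # per-sequence draw
    x = rng.normal(size=dim)
    states = [x]
    for _ in range(length - 1):
        x = a @ x + 0.01 * rng.normal(size=dim)  # dynamics plus process noise
        states.append(x)
    return np.stack(states)

seq = sample_sequence(class_dynamics)
print(seq.shape)  # (20, 4): one simulated execution of the action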

AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures

tensorflow/models • ICLR 2020 • 30 May 2019

Learning to represent videos is a very challenging task both algorithmically and computationally.

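A toy sketch of the search space: sampling DAG connectivity between network blocks across video streams (the paper uses evolutionary search with learnable connection weights; the block count and placeholder fitness below are assumptions):

import numpy as np

rng = np.random.default_rng(0)
num_blocks = 6  # toy number of blocks across RGB / optical-flow streams

def random_connectivity():
    """Sample a random DAG: block j may receive input from any earlier block i."""
    return np.triu(rng.integers(0, 2, size=(num_blocks, num_blocks)), k=1)

def fitness(conn):
    # Placeholder: in the real search this would be validation accuracy
    # after (partially) training the assembled video network.
    s = float(conn.sum())
    return s - 0.1 * s ** 2 + rng.normal(scale=0.1)

best, best_score = None, -np.inf
for _ in range(50):  # toy random search in place of evolution
    conn = random_connectivity()
    score = fitness(conn)
    if score > best_score:
        best, best_score = conn, score

print("best connectivity pattern:\n", best)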

EV-Action: Electromyography-Vision Multi-Modal Action Dataset

wanglichenxj/EV-Action-Electromyography-Vision-Multi-Modal-Action-Dataset • 20 Apr 2019

To fill this gap, we introduce EV-Action, a new large-scale dataset consisting of RGB, depth, electromyography (EMG), and two skeleton modalities.


Cross-modal Learning by Hallucinating Missing Modalities in RGB-D Vision

ncgarcia/modality-distillation • Multimodal Scene Understanding: Algorithms, Applications and Deep Learning, 2019

We report state-of-the-art or comparable results on video action recognition on NTU RGB+D, the largest multimodal dataset available for this task, as well as on the UWA3DII and Northwestern-UCLA datasets.

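A minimal sketch of the hallucination idea: a head trained to regress teacher depth-stream features from RGB features, so depth information survives at test time when the modality is missing (the linear head and plain gradient descent are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)

# Toy setup: at training time both RGB and depth are available; at test time
# only RGB is. The hallucination head learns to predict depth-stream features.
rgb_feat = rng.normal(size=(16, 256))    # student RGB features (batch of 16)
depth_feat = rng.normal(size=(16, 256))  # teacher depth-stream features

w = rng.normal(scale=0.01, size=(256, 256))  # linear hallucination head (placeholder)
lr = 0.01
for step in range(200):  # fit the head with plain gradient descent
    pred = rgb_feat @ w
    grad = rgb_feat.T @ (pred - depth_feat) / len(rgb_feat)
    w -= lr * grad

distill_loss = np.mean((rgb_feat @ w - depth_feat) ** 2)
print(f"feature-matching loss after training: {distill_loss:.4f}")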

Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition

open-mmlab/mmskeleton • 23 Jan 2018

The dynamics of human body skeletons convey significant information for human action recognition.

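A minimal sketch of the ST-GCN building block: a spatial graph convolution over joints followed by a temporal convolution over frames (the chain adjacency, averaging temporal kernel, and dimensions are toy assumptions, not the paper's configuration):

import numpy as np

rng = np.random.default_rng(0)
num_joints, frames, c_in, c_out = 25, 32, 3, 16

# Toy skeleton adjacency (chain plus self-loops) with symmetric normalization.
a = np.eye(num_joints)
for i in range(num_joints - 1):
    a[i, i + 1] = a[i + 1, i] = 1
d = a.sum(axis=1)
a_hat = a / np.sqrt(np.outer(d, d))

x = rng.normal(size=(frames, num_joints, c_in))  # joint coordinates per frame
w_spatial = rng.normal(scale=0.1, size=(c_in, c_out))

# Spatial graph convolution applied frame by frame: X_t' = A_hat X_t W.
h = np.einsum("vu,tuc,cd->tvd", a_hat, x, w_spatial)

# Temporal convolution: average over a sliding window of 9 frames per joint.
k = 9
kernel = np.ones(k) / k
h_t = np.stack([
    np.stack([np.convolve(h[:, v, c], kernel, mode="same") for c in range(c_out)], axis=1)
    for v in range(num_joints)
], axis=1)
print(h_t.shape)  # (32, 25, 16): spatio-temporal features per frame and joint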

Moments in Time Dataset: one million videos for event understanding

zhoubolei/moments_models • 9 Jan 2018

We present the Moments in Time Dataset, a large-scale human-annotated collection of one million short videos corresponding to dynamic events unfolding within three seconds.
