Multimodal Activity Recognition

12 papers with code • 10 benchmarks • 7 datasets

Multimodal activity recognition is the task of recognizing human actions by combining complementary sensor modalities, such as RGB video, depth, skeleton, audio, and electromyography (EMG) signals.

Most implemented papers

Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition

yysijie/st-gcn 23 Jan 2018

Dynamics of human body skeletons convey significant information for human action recognition.
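
The core idea is to apply graph convolutions over the skeleton's joint graph and ordinary convolutions along time. Below is a minimal single-layer sketch in PyTorch, assuming input of shape (batch, channels, time, joints) and a fixed joint adjacency matrix; names and layer sizes are illustrative, not the exact yysijie/st-gcn implementation.

```python
# Minimal sketch of one spatial-temporal graph convolution layer.
import torch
import torch.nn as nn

class STGCNLayer(nn.Module):
    def __init__(self, in_channels, out_channels, A):
        super().__init__()
        # Row-normalized adjacency with self-loops, shared across time steps.
        A = A + torch.eye(A.size(0))
        self.register_buffer("A_norm", A / A.sum(dim=1, keepdim=True))
        self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        # Temporal convolution over the time axis only (kernel 9, as in ST-GCN).
        self.temporal = nn.Conv2d(out_channels, out_channels,
                                  kernel_size=(9, 1), padding=(4, 0))
        self.relu = nn.ReLU()

    def forward(self, x):                     # x: (N, C, T, V)
        x = self.spatial(x)                   # mix channels per joint
        # Aggregate each joint's neighbors along the graph edges.
        x = torch.einsum("nctv,wv->nctw", x, self.A_norm)
        return self.relu(self.temporal(x))

# Toy usage: 2 clips, 3 input channels (x, y, confidence), 50 frames, 25 joints.
A = torch.zeros(25, 25)                       # hypothetical skeleton graph (edges omitted)
layer = STGCNLayer(3, 64, A)
print(layer(torch.randn(2, 3, 50, 25)).shape) # torch.Size([2, 64, 50, 25])
```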

Temporal Segment Networks: Towards Good Practices for Deep Action Recognition

yjxiong/temporal-segment-networks 2 Aug 2016

Our other contribution is a study of a series of good practices for learning ConvNets on video data within the temporal segment network framework.
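
The key mechanism is sparse temporal sampling with a segmental consensus: the video is split into a few segments, one snippet per segment is classified by a shared network, and the per-segment scores are aggregated (averaged, in the simplest case). A minimal sketch, assuming a generic per-frame backbone; names and the deterministic centre-frame sampling are illustrative (TSN samples snippets randomly within segments at training time).

```python
# Minimal sketch of TSN-style sparse sampling and segmental consensus.
import torch

def tsn_predict(frames, backbone, num_segments=3):
    """frames: (T, C, H, W). Sample one snippet per segment, average scores."""
    T = frames.size(0)
    # Centre frame of each of num_segments equal chunks.
    idx = ((torch.arange(num_segments) + 0.5) * T / num_segments).long()
    snippets = frames[idx]                    # (K, C, H, W)
    scores = backbone(snippets)               # (K, num_classes)
    return scores.mean(dim=0)                 # segmental consensus (average)

# Toy usage with a stand-in "backbone".
backbone = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 10))
video = torch.randn(30, 3, 8, 8)              # 30 frames
print(tsn_predict(video, backbone).shape)     # torch.Size([10])
```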

Moments in Time Dataset: one million videos for event understanding

zhoubolei/moments_models 9 Jan 2018

We present the Moments in Time Dataset, a large-scale human-annotated collection of one million short videos corresponding to dynamic events unfolding within three seconds.

AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures

tensorflow/models ICLR 2020

Learning to represent videos is a very challenging task both algorithmically and computationally.

Gimme Signals: Discriminative signal encoding for multimodal activity recognition

airglow/gimme_signals_action_recognition 13 Mar 2020

We present a simple, yet effective and flexible method for action recognition supporting multiple sensor modalities.
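
The method's central trick is to render sensor signals as image representations so that a standard image CNN can classify them. Below is a hedged sketch of one plausible encoding, assuming per-channel min-max normalization and bilinear resizing; the repository's exact encoding may differ.

```python
# Sketch: encode a multivariate sensor sequence as a fixed-size grayscale image.
import numpy as np
from PIL import Image

def encode_signal_as_image(signal, size=(224, 224)):
    """signal: (channels, time) array -> (H, W) uint8 image."""
    lo = signal.min(axis=1, keepdims=True)
    hi = signal.max(axis=1, keepdims=True)
    norm = (signal - lo) / np.maximum(hi - lo, 1e-8)   # per-channel min-max
    img = (norm * 255).astype(np.uint8)                # rows = channels, cols = time
    return np.asarray(Image.fromarray(img).resize(size, Image.BILINEAR))

# Toy usage: 75 skeleton coordinates over 120 frames.
enc = encode_signal_as_image(np.random.randn(75, 120))
print(enc.shape)  # (224, 224)
```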

Interpretable 3D Human Action Analysis with Temporal Convolutional Networks

TaeSoo-Kim/TCNActionRecognition 14 Apr 2017

In this work, we propose to use a new class of models known as Temporal Convolutional Neural Networks (TCN) for 3D human action recognition.
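
A TCN in this setting is a stack of 1-D convolutions along the time axis applied to flattened joint coordinates, followed by global temporal pooling. A minimal sketch with illustrative layer sizes, not the TaeSoo-Kim/TCNActionRecognition architecture:

```python
# Minimal temporal convolutional network over skeleton sequences.
import torch
import torch.nn as nn

class SkeletonTCN(nn.Module):
    def __init__(self, num_joints=25, coords=3, num_classes=60):
        super().__init__()
        c_in = num_joints * coords
        self.net = nn.Sequential(
            nn.Conv1d(c_in, 128, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=9, padding=4, stride=2), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=9, padding=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # global pooling over time
        )
        self.fc = nn.Linear(256, num_classes)

    def forward(self, x):                      # x: (N, joints*coords, T)
        return self.fc(self.net(x).squeeze(-1))

model = SkeletonTCN()
print(model(torch.randn(4, 75, 100)).shape)    # torch.Size([4, 60])
```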

Cross-modal Learning by Hallucinating Missing Modalities in RGB-D Vision

ncgarcia/modality-distillation Multimodal Scene Understanding: Algorithms, Applications and Deep Learning 2019

We report state-of-the-art or comparable results on video action recognition on the largest multimodal dataset available for this task, NTU RGB+D, as well as on the UWA3D II and Northwestern-UCLA datasets.
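
The hallucination idea: during training, a student network that sees only RGB is regressed onto the features of a teacher trained on the depth stream, so depth-like features can be produced at test time without a depth sensor. A minimal sketch with a plain L2 feature-matching loss; the networks and loss choice are assumptions, not the paper's exact formulation.

```python
# Sketch: hallucinate depth features from RGB via feature distillation.
import torch
import torch.nn as nn

rgb_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))    # student
depth_net = nn.Sequential(nn.Flatten(), nn.Linear(1 * 32 * 32, 128))  # teacher
depth_net.requires_grad_(False)               # teacher is frozen

rgb = torch.randn(8, 3, 32, 32)
depth = torch.randn(8, 1, 32, 32)             # available only at training time

hallucinated = rgb_net(rgb)                   # depth-like features from RGB
with torch.no_grad():
    target = depth_net(depth)
loss = nn.functional.mse_loss(hallucinated, target)   # feature-matching loss
loss.backward()
```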

EV-Action: Electromyography-Vision Multi-Modal Action Dataset

wanglichenxj/EV-Action-Electromyography-Vision-Multi-Modal-Action-Dataset 20 Apr 2019

To fill this gap, we introduce a new, large-scale EV-Action dataset, which consists of RGB, depth, electromyography (EMG), and two skeleton modalities.

Bayesian Hierarchical Dynamic Model for Human Action Recognition

rort1989/HDM CVPR 2019

Human action recognition remains a challenging task, partly due to the large variations in how actions are executed.

Distilling Audio-Visual Knowledge by Compositional Contrastive Learning

yanbeic/CCL CVPR 2021

Having access to multi-modal cues (e.g., vision and audio) enables some cognitive tasks to be performed faster than learning from a single modality.
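
One common way to realize such cross-modal distillation is an InfoNCE-style contrastive loss that aligns each student (e.g., visual) embedding with its paired teacher (audio-visual) embedding while repelling other samples in the batch. The sketch below shows this generic objective, not the paper's full compositional formulation.

```python
# Sketch: contrastive knowledge distillation across modalities (InfoNCE).
import torch
import torch.nn.functional as F

def contrastive_distill_loss(student, teacher, temperature=0.07):
    """student, teacher: (N, D) embeddings from paired clips."""
    s = F.normalize(student, dim=1)
    t = F.normalize(teacher, dim=1)
    logits = s @ t.T / temperature            # (N, N) similarity matrix
    labels = torch.arange(s.size(0))          # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

loss = contrastive_distill_loss(torch.randn(16, 128), torch.randn(16, 128))
print(float(loss))
```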