Egocentric Activity Recognition

14 papers with code • 2 benchmarks • 4 datasets

Egocentric activity recognition is the task of recognizing the actions or activities of the camera wearer from first-person (egocentric) video, often combined with other wearable-sensor modalities such as inertial measurements, audio, or gaze.

WEAR: An Outdoor Sports Dataset for Wearable and Egocentric Activity Recognition

mariusbock/wear 11 Apr 2023

Though research has shown the complementarity of camera- and inertial-based data, datasets which offer both egocentric video and inertial-based sensor data remain scarce.

Towards Continual Egocentric Activity Recognition: A Multi-modal Egocentric Activity Dataset for Continual Learning

Xu-Linfeng/UESTC_MMEA_CL_main 26 Jan 2023

However, the scarcity of related datasets hinders the development of multi-modal deep learning for egocentric activity recognition.

Learning Video Representations from Large Language Models

facebookresearch/lavila CVPR 2023

We introduce LaViLa, a new approach to learning video-language representations by leveraging Large Language Models (LLMs).

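LaViLa pairs a video encoder with a text encoder trained against narrations produced by a large language model. As a rough illustration of the contrastive objective such a dual encoder is typically trained with (the paper's narrator and rephraser components are omitted, and all names below are illustrative rather than the repository's API):

```python
import torch
import torch.nn.functional as F

def video_text_contrastive_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss between a batch of video clip embeddings and
    the embeddings of their (LLM-generated) narrations.

    video_emb, text_emb: (B, D) outputs of a video encoder and a text encoder.
    """
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                 # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    loss_v2t = F.cross_entropy(logits, targets)    # video -> matching narration
    loss_t2v = F.cross_entropy(logits.T, targets)  # narration -> matching video
    return 0.5 * (loss_v2t + loss_t2v)
```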

Group Contextualization for Video Recognition

haoyanbin918/group-contextualization CVPR 2022

By using calibrators to embed features with four different kinds of contexts in parallel, the learnt representation is expected to be more resilient to diverse types of activities.

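The paper defines four specific calibrator designs; the sketch below only illustrates the general pattern of several context branches gating a shared clip feature in parallel. It is a hypothetical module for illustration, not the repository's implementation:

```python
import torch
import torch.nn as nn

class ParallelContextGating(nn.Module):
    """Illustrative stand-in for parallel feature calibrators: each branch
    summarizes the clip feature along a different axis, predicts per-channel
    gates, and the gated copies are averaged. Not the paper's exact modules."""

    def __init__(self, channels: int):
        super().__init__()
        self.fcs = nn.ModuleList([nn.Linear(channels, channels) for _ in range(3)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, T, H, W)
        glob = x.mean(dim=(2, 3, 4))            # global spatio-temporal context
        temp = x.mean(dim=(3, 4)).amax(dim=2)   # temporal context
        spat = x.mean(dim=2).amax(dim=(2, 3))   # spatial context
        out = 0.0
        for fc, ctx in zip(self.fcs, (glob, temp, spat)):
            gate = torch.sigmoid(fc(ctx))[:, :, None, None, None]
            out = out + gate * x                # calibrate the shared feature
        return out / len(self.fcs)
```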

Ego-Exo: Transferring Visual Representations from Third-person to First-person Videos

facebookresearch/Ego-Exo CVPR 2021

We introduce an approach for pre-training egocentric video models using large-scale third-person video datasets.

Integrating Human Gaze into Attention for Egocentric Activity Recognition

kylemin/Gaze-Attention 8 Nov 2020

In addition, we model the distribution of gaze fixations using a variational method.

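A minimal sketch of the underlying idea of gaze-conditioned attention: a predicted gaze-fixation heatmap is used as spatial attention weights when pooling backbone features. The variational gaze model itself and the paper's exact fusion are omitted; names are illustrative:

```python
import torch
import torch.nn.functional as F

def gaze_weighted_pooling(features, gaze_map):
    """Pool a clip feature map with a predicted gaze-fixation heatmap.

    features: (B, C, T, H, W) backbone features.
    gaze_map: (B, T, H, W) non-negative gaze saliency (e.g. from a gaze model).
    Returns:  (B, C) clip descriptor emphasizing fixated regions.
    """
    b, c, t, h, w = features.shape
    attn = gaze_map.reshape(b, t, -1)
    attn = F.softmax(attn, dim=-1).reshape(b, 1, t, h, w)   # normalize per frame
    return (features * attn).sum(dim=(3, 4)).mean(dim=2)    # weighted pool, averaged over time
```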

EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action Recognition

ekazakos/temporal-binding-network ICCV 2019

We focus on multi-modal fusion for egocentric action recognition, and propose a novel architecture for multi-modal temporal binding, i.e., the combination of modalities within a range of temporal offsets.

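A rough sketch of the temporal-binding idea under simplifying assumptions: within each temporal window, every modality is sampled at its own offset, the sampled features are fused mid-level through a shared layer, and window-level scores are averaged. Per-modality feature extraction is assumed to happen upstream, and the module below is illustrative rather than the repository's TBN implementation:

```python
import random
import torch
import torch.nn as nn

class TemporalBindingFusion(nn.Module):
    """Illustrative mid-level fusion in the spirit of temporal binding."""

    def __init__(self, feat_dims, num_classes, hidden=512):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(sum(feat_dims), hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, modality_feats, num_windows=3):
        # modality_feats: list of (B, T_m, D_m) tensors (e.g. RGB, flow, audio),
        # each with its own temporal resolution T_m.
        scores = []
        for w in range(num_windows):
            sampled = []
            for f in modality_feats:
                t_m = f.size(1)
                lo, hi = w * t_m // num_windows, (w + 1) * t_m // num_windows
                sampled.append(f[:, random.randrange(lo, max(hi, lo + 1))])
            fused = self.fuse(torch.cat(sampled, dim=-1))   # bind modalities within the window
            scores.append(self.classifier(fused))
        return torch.stack(scores).mean(dim=0)              # average predictions over windows
```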

What Would You Expect? Anticipating Egocentric Actions with Rolling-Unrolling LSTMs and Modality Attention

antoninofurnari/rulstm ICCV 2019

Our method is ranked first on the public leaderboard of the EPIC-Kitchens egocentric action anticipation challenge 2019.

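The title summarizes the mechanism: a "rolling" LSTM encodes the observed video, and an "unrolling" LSTM continues from its state to anticipate the upcoming action. A hedged sketch of that two-stage recurrence (modality attention and the paper's other training details are omitted; names are illustrative):

```python
import torch
import torch.nn as nn

class RollingUnrollingAnticipator(nn.Module):
    """Sketch of the rolling/unrolling idea: one LSTM summarizes the observed
    clip features, a second LSTM continues from that state for a number of
    anticipation steps, and the final state predicts the upcoming action."""

    def __init__(self, feat_dim, hidden, num_classes):
        super().__init__()
        self.rolling = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.unrolling = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, observed, unroll_steps=4):
        # observed: (B, T, D) features of the video seen so far.
        _, state = self.rolling(observed)                    # summarize the past
        future = observed[:, -1:].repeat(1, unroll_steps, 1) # placeholder future inputs
        out, _ = self.unrolling(future, state)               # hypothesize forward in time
        return self.classifier(out[:, -1])                   # anticipated action scores
```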

Large-scale weakly-supervised pre-training for video action recognition

microsoft/computervision-recipes CVPR 2019

Frame-based models perform quite well on action recognition: is pre-training for good image features sufficient, or is pre-training for spatio-temporal features valuable for optimal transfer learning?

Long-Term Feature Banks for Detailed Video Understanding

open-mmlab/mmaction2 CVPR 2019

To understand the world, we humans constantly need to relate the present to the past, and put events in context.

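A minimal sketch of the feature-bank idea: features of the current short clip attend over a bank of features cached from a much longer temporal window, and the attended context enriches the short-term representation. This is a generic cross-attention stand-in, not the repository's feature bank operator:

```python
import torch
import torch.nn as nn

class FeatureBankAttention(nn.Module):
    """Short-term clip features attend over a bank of long-term features;
    the attended context is added back to the short-term representation."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        # dim must be divisible by num_heads.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, short_term, long_term_bank):
        # short_term:     (B, S, D) features of the current clip.
        # long_term_bank: (B, L, D) features cached over a long temporal window.
        context, _ = self.attn(short_term, long_term_bank, long_term_bank)
        return self.norm(short_term + context)   # enrich with long-term context
```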