Egocentric Activity Recognition

14 papers with code • 2 benchmarks • 4 datasets



Latest papers with no code

MMG-Ego4D: Multi-Modal Generalization in Egocentric Action Recognition

no code yet • 12 May 2023

In this paper, we study a novel problem in egocentric action recognition, which we term "Multimodal Generalization" (MMG).

Optical Flow Estimation in 360° Videos: Dataset, Model and Application

no code yet • 27 Jan 2023

Moreover, we present a novel Siamese representation Learning framework for Omnidirectional Flow (SLOF) estimation, trained in a contrastive manner via a hybrid loss that combines Siamese contrastive and optical flow losses.
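The excerpt only names the hybrid loss, not its exact form. A minimal sketch of such an objective, assuming an InfoNCE-style contrastive term, an end-point-error flow term, and a hypothetical mixing weight `lam` (none of these details are confirmed by the excerpt), might look like:

```python
import math

def epe_loss(pred, gt):
    # End-point error: mean Euclidean distance between predicted
    # and ground-truth flow vectors (a standard optical-flow loss).
    return sum(math.dist(p, g) for p, g in zip(pred, gt)) / len(pred)

def contrastive_loss(sim_pos, sim_negs, temperature=0.1):
    # InfoNCE-style Siamese term: pull the positive view's similarity
    # up, push negatives down. Similarities are assumed precomputed.
    num = math.exp(sim_pos / temperature)
    den = num + sum(math.exp(s / temperature) for s in sim_negs)
    return -math.log(num / den)

def hybrid_loss(pred, gt, sim_pos, sim_negs, lam=0.5):
    # Weighted sum of the two objectives; lam is an assumed weight.
    return epe_loss(pred, gt) + lam * contrastive_loss(sim_pos, sim_negs)
```

With perfect flow predictions the flow term vanishes and only the contrastive term remains, so the two objectives can be balanced independently via `lam`.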

Domain Generalization through Audio-Visual Relative Norm Alignment in First Person Action Recognition

no code yet • 19 Oct 2021

First person action recognition is becoming an increasingly researched area thanks to the rising popularity of wearable cameras.

Egocentric Activity Recognition and Localization on a 3D Map

no code yet • 20 May 2021

Given a video captured from a first person perspective and the environment context of where the video is recorded, can we recognize what the person is doing and identify where the action occurs in the 3D space?

EgoK360: A 360° Egocentric Kinetic Human Activity Video Dataset

no code yet • 15 Oct 2020

To bridge this gap, in this paper we propose a novel Egocentric (first-person) 360° Kinetic human activity video dataset (EgoK360).

Symbiotic Attention with Privileged Information for Egocentric Action Recognition

no code yet • 8 Feb 2020

Due to the large action vocabulary in egocentric video datasets, recent studies usually utilize a two-branch structure for action recognition, i.e., one branch for verb classification and the other for noun classification.
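In the two-branch setup described above, an action label is typically a (verb, noun) pair scored by combining the two branches' outputs. A minimal sketch of this decoding step, assuming the branches produce independent logits and the pair score is the product of branch probabilities (an assumption, since the excerpt does not specify the combination rule):

```python
import math

def softmax(logits):
    # Numerically stable softmax over one branch's logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict_action(verb_logits, noun_logits):
    # Score every (verb, noun) pair by the product of branch
    # probabilities and return the best pair's indices.
    vp = softmax(verb_logits)
    np_ = softmax(noun_logits)
    best = max((v * n, i, j)
               for i, v in enumerate(vp)
               for j, n in enumerate(np_))
    return best[1], best[2]
```

Because the branches are independent here, the best pair is simply the argmax of each branch; richer methods (such as the symbiotic attention of this paper) couple the branches instead.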

Self-supervising Action Recognition by Statistical Moment and Subspace Descriptors

no code yet • 14 Jan 2020

In this paper, we build on the concept of self-supervision by taking RGB frames as input to learn to predict both action concepts and auxiliary descriptors, e.g., object descriptors.
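Predicting action concepts and auxiliary descriptors jointly is a multi-task objective. A minimal sketch, assuming cross-entropy on the action head, a squared-error term on the descriptor head, and a hypothetical mixing weight `alpha` (the paper's actual loss is not given in the excerpt):

```python
import math

def mse(pred, target):
    # Mean squared error for the auxiliary descriptor regression head.
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def multitask_loss(action_logits, action_label, pred_desc, target_desc,
                   alpha=0.1):
    # Cross-entropy on the action head plus a weighted regression
    # term on the descriptor head; alpha is an assumed weight.
    m = max(action_logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in action_logits))
    ce = log_z - action_logits[action_label]
    return ce + alpha * mse(pred_desc, target_desc)
```

The auxiliary term acts as a regularizer: the shared backbone must also explain the descriptors, not just the action label.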

On the Role of Event Boundaries in Egocentric Activity Recognition from Photostreams

no code yet • 2 Sep 2018

Event boundaries play a crucial role as a pre-processing step for detection, localization, and recognition tasks of human activities in videos.

Multi-modal Egocentric Activity Recognition using Audio-Visual Features

no code yet • 2 Jul 2018

In this work, we propose a new framework for egocentric activity recognition problem based on combining audio-visual features with multi-kernel learning (MKL) and multi-kernel boosting (MKBoost).
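In MKL, each modality contributes its own kernel and a classifier is trained on their weighted combination. A minimal sketch, assuming RBF base kernels and fixed non-negative weights `betas` for illustration (in actual MKL/MKBoost the weights are learned):

```python
import math

def rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel on one modality's feature vector.
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * d2)

def combined_kernel(x_audio, y_audio, x_vis, y_vis, betas=(0.4, 0.6)):
    # MKL fuses per-modality kernels as a convex combination;
    # the betas here are fixed stand-ins for learned weights.
    return (betas[0] * rbf(x_audio, y_audio)
            + betas[1] * rbf(x_vis, y_vis))
```

A convex combination of positive-definite kernels is itself positive definite, so the fused kernel can be plugged directly into any kernel classifier such as an SVM.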

Egocentric Activity Recognition on a Budget

no code yet • CVPR 2018

Recent advances in embedded technology have enabled more pervasive machine learning.