Action Localization

136 papers with code • 0 benchmarks • 3 datasets

Action Localization is the task of finding the spatial and temporal coordinates of an action in a video. An action localization model identifies the frames in which an action starts and ends and returns the x,y coordinates of the action in each frame. Furthermore, these coordinates change as the object performing the action moves.
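
Below is a minimal, hypothetical sketch (in Python) of what such a model's output could look like: a temporal span in frames plus per-frame box coordinates that shift as the actor moves. The `ActionInstance` structure and its field names are illustrative assumptions, not the API of any particular library.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class ActionInstance:
    """Hypothetical output of a spatio-temporal action localization model."""
    label: str          # predicted action class, e.g. "jumping"
    start_frame: int    # frame where the action starts
    end_frame: int      # frame where the action ends
    score: float = 0.0  # detection confidence
    # Per-frame spatial location of the actor: frame index -> (x1, y1, x2, y2).
    # The boxes change over time because the actor moves.
    boxes: Dict[int, Tuple[float, float, float, float]] = field(default_factory=dict)

# Example: an action spanning frames 120-180 whose box drifts to the right.
prediction = ActionInstance(
    label="jumping",
    start_frame=120,
    end_frame=180,
    score=0.87,
    boxes={
        120: (34.0, 50.0, 120.0, 210.0),
        150: (60.0, 48.0, 146.0, 208.0),
        180: (88.0, 47.0, 175.0, 207.0),
    },
)
```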

Latest papers with no code

Sub-action Prototype Learning for Point-level Weakly-supervised Temporal Action Localization

no code yet • 16 Sep 2023

Point-level weakly-supervised temporal action localization (PWTAL) aims to localize actions with only a single timestamp annotation for each action instance.
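
As a hedged illustration of this setting, the sketch below contrasts a fully annotated segment with a point-level (single-timestamp) annotation; the field names are assumptions for illustration, not taken from any specific dataset.

```python
# Fully supervised temporal annotation: start and end of every instance.
full_annotation = [
    {"video": "v_001", "label": "high jump", "start_sec": 12.4, "end_sec": 18.9},
]

# Point-level weak supervision (the PWTAL setting): only one timestamp
# somewhere inside each action instance, plus its label.
point_annotation = [
    {"video": "v_001", "label": "high jump", "timestamp_sec": 15.0},
]
```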

Cross-Video Contextual Knowledge Exploration and Exploitation for Ambiguity Reduction in Weakly Supervised Temporal Action Localization

no code yet • 24 Aug 2023

Further, the GKSA module is used to efficiently summarize and propagate cross-video representative action knowledge in a learnable manner, promoting a holistic understanding of action patterns; this in turn allows the generation of high-confidence pseudo-labels for self-learning, alleviating ambiguity in temporal localization.

Benchmarking Data Efficiency and Computational Efficiency of Temporal Action Localization Models

no code yet • 24 Aug 2023

This work explores and measures how current deep temporal action localization models perform in settings constrained by the amount of data or computational power.

Weakly-Supervised Action Localization by Hierarchically-structured Latent Attention Modeling

no code yet • ICCV 2023

To address this problem, we propose a novel attention-based hierarchically-structured latent model to learn the temporal variations of feature semantics.

A Survey on Video Moment Localization

no code yet • 13 Jun 2023

Video moment localization, also known as video moment retrieval, aims to search for a target segment within a video described by a given natural language query.

Action Sensitivity Learning for Temporal Action Localization

no code yet • ICCV 2023

Temporal action localization (TAL), which involves recognizing and locating action instances, is a challenging task in video understanding.

Learning Higher-order Object Interactions for Keypoint-based Video Understanding

no code yet • 16 May 2023

Specifically, KeyNet introduces the use of object-based keypoint information to capture context in the scene.

Video-Specific Query-Key Attention Modeling for Weakly-Supervised Temporal Action Localization

no code yet • 7 May 2023

To better learn these action category queries, we exploit not only the features of the current input video but also the correlation between different videos through a novel video-specific action category query learner that works with a query similarity loss.

DeepSegmenter: Temporal Action Localization for Detecting Anomalies in Untrimmed Naturalistic Driving Videos

no code yet • 13 Apr 2023

Identifying unusual driving behaviors exhibited by drivers during driving is essential for understanding driver behavior and the underlying causes of crashes.