no code implementations • 10 Dec 2023 • Nyle Siddiqui, Praveen Tirupattur, Mubarak Shah
In this work, we present a novel approach to multi-view action recognition where we guide learned action representations to be separated from view-relevant information in a video.
Ranked #1 on Action Recognition on N-UCLA
no code implementations • 17 Apr 2022 • Rajat Modi, Aayush Jung Rana, Akash Kumar, Praveen Tirupattur, Shruti Vyas, Yogesh Singh Rawat, Mubarak Shah
Beyond being large enough to feed data-hungry machines (e.g., transformers), what attributes measure the quality of a dataset?
no code implementations • 18 Sep 2021 • Praveen Tirupattur, Christian Schulze, Andreas Dengel
To address this issue, an approach to automatically detect violent content in videos is proposed in this work.
1 code implementation • 24 Jul 2021 • Praveen Tirupattur, Aayush J Rana, Tushar Sangam, Shruti Vyas, Yogesh S Rawat, Mubarak Shah
While various approaches have been shown to be effective for the recognition task in recent works, they often do not deal with low-resolution videos where the action occupies only a tiny region.
1 code implementation • CVPR 2021 • Praveen Tirupattur, Kevin Duarte, Yogesh Rawat, Mubarak Shah
We propose to improve action localization performance by modeling these action dependencies in a novel attention-based Multi-Label Action Dependency (MLAD) layer.
Ranked #1 on Action Detection on Multi-THUMOS
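The MLAD paper's exact formulation is not given here, but the general idea of modeling inter-class action dependencies with attention can be sketched in plain Python: each class's feature vector is replaced by a softmax-weighted mixture of all class features via dot-product attention. Names and shapes below are illustrative assumptions, not the authors' implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def class_attention(feats):
    """Sketch of attention over per-class features (one vector per action
    class). Each output is a softmax-weighted mixture of all class
    features, so co-occurring classes can inform one another."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    out = []
    for q in feats:
        w = softmax([dot(q, k) for k in feats])  # attention weights over classes
        mixed = [sum(wi * k[d] for wi, k in zip(w, feats))
                 for d in range(len(q))]
        out.append(mixed)
    return out
```

In the paper's setting this would operate per time step (and a second copy across time), but the class-wise attention above captures the core dependency-modeling idea.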
no code implementations • 23 Apr 2020 • Mamshad Nayeem Rizve, Ugur Demir, Praveen Tirupattur, Aayush Jung Rana, Kevin Duarte, Ishan Dave, Yogesh Singh Rawat, Mubarak Shah
For tubelet extraction, we propose a localization network which takes a video clip as input and spatio-temporally detects potential foreground regions at multiple scales to generate action tubelets.
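The localization network itself is not specified here, but a common way to turn per-frame detections into action tubelets is greedy IoU linking across consecutive frames. The sketch below is a generic illustration of that idea under assumed box formats, not the authors' method.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def link_tubelets(frame_boxes, thresh=0.5):
    """Greedily link per-frame boxes into tubelets: each box joins the
    tubelet whose most recent box (in the previous frame) overlaps it
    best above `thresh`; otherwise it starts a new tubelet.
    frame_boxes: list over frames, each a list of boxes."""
    tubelets = []  # each tubelet is a list of (frame_index, box)
    for t, boxes in enumerate(frame_boxes):
        for box in boxes:
            best, best_iou = None, thresh
            for tube in tubelets:
                last_t, last_box = tube[-1]
                if last_t == t - 1:  # only extend tubes active last frame
                    o = iou(last_box, box)
                    if o > best_iou:
                        best, best_iou = tube, o
            if best is not None:
                best.append((t, box))
            else:
                tubelets.append([(t, box)])
    return tubelets
```

Multi-scale detection, as described in the snippet, would feed this step with boxes pooled from several resolutions; the linking logic is unchanged.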