Making the Invisible Visible: Action Recognition Through Walls and Occlusions

Understanding people's actions and interactions typically depends on seeing them. Automating action recognition from visual data has been the topic of much research in the computer vision community. But what if it is too dark, or if the person is occluded or behind a wall? In this paper, we introduce a neural network model that can detect human actions through walls and occlusions, and in poor lighting conditions. Our model takes radio frequency (RF) signals as input, generates 3D human skeletons as an intermediate representation, and recognizes actions and interactions of multiple people over time. By translating the input to an intermediate skeleton-based representation, our model can learn from both vision-based and RF-based datasets, allowing the two tasks to help each other. We show that our model achieves accuracy comparable to vision-based action recognition systems in visible scenarios, yet continues to work accurately when people are not visible, hence addressing scenarios beyond the limits of today's vision-based action recognition.

ICCV 2019 | PDF | Abstract
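
This entry links no code, but the two-stage design the abstract describes (RF signal in, 3D skeletons as the shared intermediate representation, action labels out) can be sketched in a few lines. The PyTorch snippet below is only a minimal illustration: every module name, tensor shape, joint count, and layer choice here is an assumption made for this sketch, not the authors' RF-Action architecture.

```python
import torch
import torch.nn as nn

class SkeletonEstimator(nn.Module):
    """Stage 1 (hypothetical): map a window of RF frames to 3D skeletons."""
    def __init__(self, rf_features=1024, joints=14):
        super().__init__()
        self.joints = joints
        self.net = nn.Sequential(
            nn.Linear(rf_features, 256),   # stand-in for the real RF encoder
            nn.ReLU(),
            nn.Linear(256, joints * 3),    # 3 coordinates per joint
        )

    def forward(self, rf):                 # rf: (batch, time, rf_features)
        b, t, _ = rf.shape
        return self.net(rf).view(b, t, self.joints, 3)

class ActionClassifier(nn.Module):
    """Stage 2 (hypothetical): classify actions from skeleton sequences.

    Because it consumes skeletons rather than raw sensor data, this stage
    can be trained on RF-derived or vision-derived skeletons alike, which
    is how the two data sources can help each other.
    """
    def __init__(self, joints=14, hidden=128, num_actions=30):
        super().__init__()
        self.rnn = nn.GRU(joints * 3, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_actions)

    def forward(self, skel):               # skel: (batch, time, joints, 3)
        b, t, j, c = skel.shape
        _, h = self.rnn(skel.view(b, t, j * c))
        return self.head(h[-1])            # (batch, num_actions) logits

# Wire the stages together: RF frames -> skeletons -> action logits.
rf = torch.randn(2, 50, 1024)              # 2 clips, 50 RF frames each
logits = ActionClassifier()(SkeletonEstimator()(rf))
```

The key design point is the intermediate representation: a skeleton sequence is sensor-agnostic, so the action classifier never needs to know whether its input came from a camera or an RF device.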

Results from the Paper


Task                               Dataset    Model      Metric                     Value  Global Rank
Skeleton Based Action Recognition  NTU RGB+D  RF-Action  Accuracy (CV)              91.6   #78
Skeleton Based Action Recognition  NTU RGB+D  RF-Action  Accuracy (CS)              86.8   #62
Skeleton Based Action Recognition  PKU-MMD    RF-Action  mAP@0.50 (CV)              94.4   #1
Skeleton Based Action Recognition  PKU-MMD    RF-Action  mAP@0.50 (CS)              92.9   #1
RF-based Pose Estimation           RF-MMD     RF-Action  mAP (@0.1, Visible)        90.1   #1
RF-based Pose Estimation           RF-MMD     RF-Action  mAP (@0.1, Through-wall)   86.5   #1

(CV = cross-view split; CS = cross-subject split.)
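
A note on the detection metrics above: in the standard temporal action detection protocol, mAP@t counts a predicted action segment as a true positive only if its temporal intersection-over-union (IoU) with a same-class ground-truth segment is at least t (0.50 for PKU-MMD, 0.1 for RF-MMD in this table). The sketch below shows that matching rule under simple assumed (start, end, score) tuples, not any particular benchmark's data format:

```python
def temporal_iou(a, b):
    """IoU of two (start, end) time intervals."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def match_detections(preds, gts, thresh=0.5):
    """Greedily match score-ranked predictions to ground-truth segments.

    preds: list of (start, end, score); gts: list of (start, end).
    Returns one True (true positive) or False (false positive) flag per
    prediction; accumulated over a dataset, these flags yield the
    precision-recall curve whose area is the AP for one class.
    """
    used, flags = set(), []
    for start, end, _ in sorted(preds, key=lambda p: -p[2]):
        best, best_iou = None, thresh
        for i, gt in enumerate(gts):
            iou = temporal_iou((start, end), gt)
            if i not in used and iou >= best_iou:
                best, best_iou = i, iou
        flags.append(best is not None)
        if best is not None:
            used.add(best)
    return flags

# Hypothetical example: one hit above the threshold, one miss.
preds = [(0.0, 2.0, 0.9), (5.0, 6.0, 0.7)]
gts = [(0.1, 2.1), (8.0, 9.0)]
print(match_detections(preds, gts, thresh=0.5))  # -> [True, False]
```

Mean AP (mAP) then averages the per-class AP values; raising the IoU threshold t makes the metric stricter about segment boundaries.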

Methods


No methods listed for this paper.