Learning Models for Actions and Person-Object Interactions with Transfer to Question Answering

16 Apr 2016 · Arun Mallya, Svetlana Lazebnik

This paper proposes deep convolutional network models that use local and global context to predict human activity labels in still images, achieving state-of-the-art performance on two recent datasets with hundreds of labels each. We use multiple instance learning to handle the lack of supervision at the level of individual person instances, and a weighted loss to handle unbalanced training data. Further, we show how specialized features trained on these datasets can improve accuracy on the Visual Question Answering (VQA) task, in the form of multiple-choice fill-in-the-blank questions (Visual Madlibs). Specifically, we tackle two types of questions, on person activity and person-object relationships, and show improvements over generic features trained on the ImageNet classification task.
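
To make the two training ideas above concrete, here is a minimal PyTorch sketch of multiple instance learning over detected person boxes combined with a weighted loss for unbalanced labels. This is not the authors' released code: the class name, feature dimensions, concatenation-based context fusion, and the per-class pos_weight values are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MILActivityClassifier(nn.Module):
    """Scores activity labels per person instance, then max-pools over
    instances so an image-level label fires if at least one person
    exhibits the activity (the multiple-instance assumption)."""

    def __init__(self, feat_dim=4096, num_labels=600):
        super().__init__()
        # Hypothetical fusion: concatenate person-box and whole-image
        # features before a single linear scoring layer.
        self.scorer = nn.Linear(2 * feat_dim, num_labels)

    def forward(self, person_feats, image_feat):
        # person_feats: (num_persons, feat_dim) CNN features per person box
        # image_feat:   (feat_dim,) global-context feature for the image
        n = person_feats.size(0)
        fused = torch.cat([person_feats, image_feat.expand(n, -1)], dim=1)
        instance_logits = self.scorer(fused)          # (n, num_labels)
        image_logits, _ = instance_logits.max(dim=0)  # MIL max over people
        return image_logits

def weighted_loss(logits, targets, pos_weight):
    # Up-weight rare positive labels; pos_weight could be set per class,
    # e.g. to the negative-to-positive ratio (an assumption here).
    return F.binary_cross_entropy_with_logits(
        logits, targets, pos_weight=pos_weight)

# Toy usage with random tensors; shapes are illustrative only.
model = MILActivityClassifier(feat_dim=128, num_labels=10)
logits = model(torch.randn(3, 128), torch.randn(128))
targets = torch.zeros(10)
targets[2] = 1.0
loss = weighted_loss(logits, targets, pos_weight=torch.full((10,), 5.0))
```

For the Visual Madlibs transfer, one simple way to answer a multiple-choice fill-in-the-blank question is to score each candidate answer against the image feature and pick the highest-scoring one. The cosine-similarity scoring below is an assumed stand-in, not necessarily the paper's answer-selection model; choice_embeds is a hypothetical matrix of answer embeddings projected into the same space as the image feature.

```python
def pick_choice(image_feat, choice_embeds):
    # image_feat:    (dim,) activity/interaction feature for the image
    # choice_embeds: (num_choices, dim) candidate-answer embeddings,
    #                assumed to live in the same space as image_feat
    sims = F.cosine_similarity(choice_embeds, image_feat.unsqueeze(0), dim=1)
    return int(sims.argmax())
```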

Benchmark result
Task: Human-Object Interaction Detection
Dataset: HICO
Model: Mallya & Lazebnik
Metric: mAP = 36.1 (global rank #6)
