Search Results for author: Ekta Sood

Found 8 papers, 1 paper with code

Gaze-enhanced Crossmodal Embeddings for Emotion Recognition

no code implementations • 30 Apr 2022 • Ahmed Abdou, Ekta Sood, Philipp Müller, Andreas Bulling

Emotional expressions are inherently multimodal -- integrating facial behavior, speech, and gaze -- but their automatic recognition is often limited to a single modality, e.g., speech during a phone call.

Emotion Classification · Emotion Recognition

Multimodal Integration of Human-Like Attention in Visual Question Answering

no code implementations • 27 Sep 2021 • Ekta Sood, Fabian Kögel, Philipp Müller, Dominike Thomas, Mihai Bâce, Andreas Bulling

We present the Multimodal Human-like Attention Network (MULAN) - the first method for multimodal integration of human-like attention on image and text during training of VQA models.

Question Answering · Visual Question Answering
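The MULAN entry above describes supervising a VQA model's attention with human-like attention during training. A minimal sketch of one common way to implement such supervision (not necessarily MULAN's exact formulation): add a KL-divergence term between the model's attention over image regions and a human-attention distribution. The function name, shapes, and weighting value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gaze_supervised_loss(logits, answer_targets, model_attn, human_attn, lam=0.5):
    """Task loss plus an attention-supervision term.

    logits:         (batch, num_answers) VQA answer scores
    answer_targets: (batch,)             ground-truth answer indices
    model_attn:     (batch, regions)     model attention scores (unnormalized)
    human_attn:     (batch, regions)     human attention distribution (rows sum to 1)
    lam:            weight of the attention term (assumed value, not from the paper)
    """
    task_loss = F.cross_entropy(logits, answer_targets)
    # KL(human || model): penalizes the model for ignoring regions humans attend to.
    log_model = F.log_softmax(model_attn, dim=-1)
    attn_loss = F.kl_div(log_model, human_attn, reduction="batchmean")
    return task_loss + lam * attn_loss

# Smoke test with random tensors (batch=4, answers=10, regions=6).
loss = gaze_supervised_loss(
    torch.randn(4, 10), torch.randint(0, 10, (4,)),
    torch.randn(4, 6), torch.full((4, 6), 1.0 / 6),
)
```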

VQA-MHUG: A Gaze Dataset to Study Multimodal Neural Attention in Visual Question Answering

no code implementations • CoNLL (EMNLP) 2021 • Ekta Sood, Fabian Kögel, Florian Strohm, Prajit Dhar, Andreas Bulling

We present VQA-MHUG - a novel 49-participant dataset of multimodal human gaze on both images and questions during visual question answering (VQA) collected using a high-speed eye tracker.

Question Answering · Visual Question Answering

Neural Photofit: Gaze-based Mental Image Reconstruction

no code implementations • ICCV 2021 • Florian Strohm, Ekta Sood, Sven Mayer, Philipp Müller, Mihai Bâce, Andreas Bulling

The encoder extracts image features and predicts a neural activation map for each face looked at by a human observer.

Image Reconstruction
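The Neural Photofit entry above mentions an encoder that extracts image features and predicts an activation map for each face an observer looked at. A hedged sketch of what such an encoder could look like; the architecture, layer sizes, and activation-weighted pooling are assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn

class FaceEncoder(nn.Module):
    """Toy encoder: convolutional features plus a per-location activation map,
    used here for activation-weighted pooling into one descriptor per face."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn_head = nn.Conv2d(feat_dim, 1, kernel_size=1)  # activation map head

    def forward(self, face):                          # face: (B, 3, H, W)
        feats = self.backbone(face)                   # (B, C, H/4, W/4)
        amap = torch.sigmoid(self.attn_head(feats))   # (B, 1, H/4, W/4)
        # Weight features by the activation map and average over locations.
        pooled = (feats * amap).flatten(2).sum(-1) / amap.flatten(2).sum(-1).clamp(min=1e-6)
        return pooled, amap

encoder = FaceEncoder()
descriptor, activation_map = encoder(torch.randn(2, 3, 64, 64))  # (2, 64), (2, 1, 16, 16)
```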

Improving Natural Language Processing Tasks with Human Gaze-Guided Neural Attention

no code implementations • NeurIPS 2020 • Ekta Sood, Simon Tannert, Philipp Müller, Andreas Bulling

A lack of corpora has so far limited advances in integrating human gaze data as a supervisory signal in neural attention mechanisms for natural language processing (NLP).

Paraphrase Generation · Sentence +1
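The NeurIPS 2020 entry above concerns using human gaze as a supervisory signal for neural attention in NLP despite scarce gaze corpora. One plausible building block, sketched under assumptions (the paper's actual model may differ), is a small network that predicts a per-token attention distribution from text, which could then stand in for recorded gaze.

```python
import torch
import torch.nn as nn

class TokenGazePredictor(nn.Module):
    """Toy model mapping token ids to a per-token attention distribution,
    a stand-in for normalized human fixation durations over the tokens."""

    def __init__(self, vocab_size=30000, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, token_ids):                     # token_ids: (B, T)
        hidden, _ = self.lstm(self.embed(token_ids))  # (B, T, 2*dim)
        scores = self.score(hidden).squeeze(-1)       # (B, T)
        return torch.softmax(scores, dim=-1)          # rows sum to 1

model = TokenGazePredictor(vocab_size=100, dim=32)
gaze = model(torch.randint(0, 100, (2, 7)))  # (2, 7) predicted attention per token
```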

Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension

no code implementations • CoNLL 2020 • Ekta Sood, Simon Tannert, Diego Frassinelli, Andreas Bulling, Ngoc Thang Vu

We compare state-of-the-art networks based on long short-term memory (LSTM), convolutional neural network (CNN), and XLNet Transformer architectures.

Machine Reading Comprehension
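The CoNLL 2020 entry above interprets model attention by comparing it against human visual attention. A minimal sketch of one way to quantify such a comparison; KL divergence is an assumed choice of metric, and the paper's analysis may use different measures.

```python
import torch

def attention_divergence(model_attn, human_attn, eps=1e-9):
    """KL(human || model) between two attention distributions over the same
    T tokens; lower means the model attends more like the human readers."""
    p = human_attn / human_attn.sum()
    q = model_attn / model_attn.sum()
    return (p * ((p + eps).log() - (q + eps).log())).sum().item()

# Example: a model with uniform attention vs. focused human gaze.
print(attention_divergence(torch.full((5,), 0.2),
                           torch.tensor([0.7, 0.1, 0.1, 0.05, 0.05])))
```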

Comparing Attention-based Convolutional and Recurrent Neural Networks: Success and Limitations in Machine Reading Comprehension

1 code implementation • CoNLL 2018 • Matthias Blohm, Glorianna Jagfeld, Ekta Sood, Xiang Yu, Ngoc Thang Vu

We propose a machine reading comprehension model based on the compare-aggregate framework with two-staged attention that achieves state-of-the-art results on the MovieQA question answering dataset.

Machine Reading Comprehension · Question Answering
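The CoNLL 2018 entry above builds on the compare-aggregate framework. A generic sketch of that framework's attend, compare, aggregate pattern follows; this is the standard recipe, not the paper's specific two-staged attention model, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompareAggregate(nn.Module):
    """Generic attend-compare-aggregate over a (question, passage) pair."""

    def __init__(self, dim=64):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)  # aggregation CNN

    def forward(self, question, passage):  # (B, Tq, D), (B, Tp, D)
        # Attend: soft summary of the question for every passage token.
        scores = torch.bmm(passage, question.transpose(1, 2))       # (B, Tp, Tq)
        q_summary = torch.bmm(F.softmax(scores, dim=-1), question)  # (B, Tp, D)
        # Compare: element-wise interaction between passage and attended question.
        compared = passage * q_summary
        # Aggregate: 1-D convolution over the sequence, then max-pool to one vector.
        feats = F.relu(self.conv(compared.transpose(1, 2)))         # (B, D, Tp)
        return feats.max(dim=-1).values                             # (B, D)

ca = CompareAggregate(dim=64)
out = ca(torch.randn(2, 5, 64), torch.randn(2, 9, 64))  # (2, 64)
```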
