Answering questions about why characters perform certain actions is central to understanding and reasoning about narratives. The actions people take are typically steps in plans aimed at achieving their goals. When interpreting language, humans naturally infer the reasons behind described actions, even when those reasons are left unstated. Despite recent progress in question answering, it is not clear whether existing models can answer "why" questions that may require commonsense knowledge external to the input narrative.

The objective of the TellMeWhy task is to gauge the ability of current NLP models to reason about events in narratives by answering why-questions. The input is a five-sentence short story and a why-question about it; the expected output is a free-form answer to that question. Answers were crowd-sourced on Amazon Mechanical Turk: three distinct annotators answered each question, and they were not permitted to simply copy text from the story as their answer. To ensure quality, the validity of a small random subset of answers was verified in a second round of crowdsourcing.
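
Since each instance pairs a five-sentence story with a why-question and three crowd-sourced free-form answers, a single item can be pictured as a small record like the one below. This is a minimal, hypothetical sketch in Python: the story, question, answers, and field names (narrative, question, answers) are illustrative assumptions rather than the dataset's official schema, and build_prompt is just one possible way to serialize a story-question pair for a model.

```python
# A minimal, hypothetical sketch of a single TellMeWhy-style instance.
# The story, question, answers, and field names below are illustrative
# and are not drawn from the released dataset or its official schema.

example = {
    "narrative": (
        "Sara woke up early on Saturday. "
        "She packed a bag with snacks and water. "
        "She drove to the trailhead outside town. "
        "She hiked to the top of the ridge. "
        "She took photos of the valley below."
    ),
    "question": "Why did Sara pack a bag with snacks and water?",
    # Three crowd-sourced, free-form answers per question.
    "answers": [
        "She was getting ready for a long hike.",
        "She wanted to have food and water on the trail.",
        "She knew she would get hungry and thirsty while hiking.",
    ],
}


def build_prompt(instance: dict) -> str:
    """Format a story-question pair as input text for a QA model."""
    return f"{instance['narrative']}\nQuestion: {instance['question']}\nAnswer:"


if __name__ == "__main__":
    print(build_prompt(example))
```

Note that answering the question requires inference beyond the story text (e.g., that people pack food and water because they expect to get hungry and thirsty), which is exactly the kind of commonsense reasoning the benchmark targets.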

License

  • Unknown
