Embracing data abundance: BookTest Dataset for Reading Comprehension

4 Oct 2016 · Ondrej Bajgar, Rudolf Kadlec, Jan Kleindienst

There is a practically unlimited amount of natural language data available. Still, recent work in text comprehension has focused on datasets that are small relative to current computing possibilities. This article makes a case for the community to move to larger data and, as a step in that direction, proposes the BookTest, a new dataset similar to the popular Children's Book Test (CBT) but more than 60 times larger. We show that training on the new data improves the accuracy of our Attention-Sum Reader model on the original CBT test data by a much larger margin than many recent attempts to improve the model architecture. On one version of the dataset our ensemble even exceeds the human baseline provided by Facebook. We then show in our own human study that there is still space for further improvement.
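The Attention-Sum Reader mentioned above selects the answer by pooling attention over repeated occurrences of candidate words in the document. The following PyTorch sketch illustrates that mechanism under assumptions of our own (bidirectional GRU encoders, illustrative layer sizes, and a softmax over the full document); it is a minimal sketch, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class AttentionSumReader(nn.Module):
    """Minimal sketch of an Attention-Sum Reader.

    Encodes document and query with bidirectional GRUs, scores each
    document token by the dot product of its contextual embedding and
    the query embedding, then sums the softmax attention over repeated
    occurrences of each word ("pointer-sum attention"). Layer sizes
    are illustrative, not the paper's exact configuration.
    """

    def __init__(self, vocab_size, embed_dim=128, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.doc_enc = nn.GRU(embed_dim, hidden, bidirectional=True, batch_first=True)
        self.qry_enc = nn.GRU(embed_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, doc, qry):
        # doc: (batch, doc_len) and qry: (batch, qry_len) token-id tensors
        d, _ = self.doc_enc(self.embed(doc))                 # (batch, doc_len, 2*hidden)
        _, h = self.qry_enc(self.embed(qry))                 # h: (2, batch, hidden)
        q = torch.cat([h[0], h[1]], dim=-1)                  # (batch, 2*hidden)
        scores = torch.bmm(d, q.unsqueeze(-1)).squeeze(-1)   # (batch, doc_len)
        attn = torch.softmax(scores, dim=-1)                 # attention over positions
        # Pointer-sum: a word's answer probability is the total attention
        # mass over all document positions where it occurs.
        probs = torch.zeros(doc.size(0), self.embed.num_embeddings, device=doc.device)
        return probs.scatter_add(1, doc, attn)               # (batch, vocab_size)


# Illustrative usage with random token ids
model = AttentionSumReader(vocab_size=1000)
doc = torch.randint(0, 1000, (2, 50))   # two documents, 50 tokens each
qry = torch.randint(0, 1000, (2, 10))   # two queries, 10 tokens each
answer_probs = model(doc, qry)          # (2, 1000): one distribution per example
```

In the full model the final prediction is restricted to the provided candidate answers; the sketch returns a distribution over the whole vocabulary for brevity.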

Datasets


Introduced in the Paper:

BookTest

Used in the Paper:

LAMBADA, Children's Book Test (CBT), StoryCloze
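For reference, CBT and the proposed BookTest share a cloze-style format: each example consists of 20 context sentences, a 21st sentence with one word removed, and 10 candidate answers. A hypothetical record (invented text, real schema) might look like this:

```python
# Hypothetical record in the CBT/BookTest cloze format: 20 context
# sentences, a 21st sentence with one word blanked out, 10 candidate
# answers, and the correct answer. The story text here is invented.
example = {
    "context": [
        "Once upon a time there lived a miller .",
        # ... 18 more context sentences ...
        "One morning his daughter walked down to the river .",
    ],
    "query": "She filled her XXXXX with water .",  # blank marked as XXXXX
    "candidates": ["bucket", "daughter", "miller", "morning", "river",
                   "stone", "time", "water", "bread", "house"],
    "answer": "bucket",
}
```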
