How Context Affects Language Models' Factual Predictions

10 May 2020 · Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel

When pre-trained on large unsupervised textual corpora, language models are able to store and retrieve factual knowledge to some extent, making it possible to use them directly for zero-shot cloze-style question answering. However, storing factual knowledge in a fixed number of weights of a language model clearly has limitations...
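Since no code appears on this page, the following is a minimal sketch of the idea described in the abstract: zero-shot cloze-style factual prediction with a pre-trained masked language model, first from the query alone and then with a context passage prepended. It uses the Hugging Face `fill-mask` pipeline; the model choice (`bert-base-cased`), the example fact, and the hand-written context are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch (not the authors' code): cloze-style factual prediction with
# a pre-trained masked language model, with and without a context passage.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")
mask = fill_mask.tokenizer.mask_token  # "[MASK]" for BERT

query = f"The theory of relativity was developed by {mask}."

# 1) Query only: the answer must come from knowledge stored in the weights.
print(fill_mask(query)[0]["token_str"])

# 2) Query + context: a hand-picked passage stands in for a retrieved context;
#    the model can read the answer off the passage instead of its parameters.
context = "Albert Einstein published the theory of relativity in the early 20th century."
print(fill_mask(f"{context} {query}")[0]["token_str"])
```

Comparing the two predictions illustrates the paper's question of how added context changes a language model's factual output, though the actual retrieval and evaluation pipeline is described only in the full paper.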
