1 code implementation • 23 May 2022 • Adam Liška, Tomáš Kočiský, Elena Gribovskaya, Tayfun Terzi, Eren Sezener, Devang Agrawal, Cyprien de Masson d'Autume, Tim Scholtes, Manzil Zaheer, Susannah Young, Ellen Gilsenan-McMahon, Sophia Austin, Phil Blunsom, Angeliki Lazaridou
Knowledge and language understanding of models, evaluated through question answering (QA), have usually been studied on static snapshots of knowledge such as Wikipedia.
3 code implementations • ICLR 2020 • Gábor Melis, Tomáš Kočiský, Phil Blunsom
Many advances in Natural Language Processing have been based upon more expressive models for how inputs interact with the context in which they occur.
1 code implementation • 4 Jul 2018 • Tiago Ramalho, Tomáš Kočiský, Frederic Besse, S. M. Ali Eslami, Gábor Melis, Fabio Viola, Phil Blunsom, Karl Moritz Hermann
Natural language processing has made significant inroads into learning the semantics of words through distributional approaches; however, representations learnt via these methods fail to capture certain kinds of information implicit in the real world.
1 code implementation • ICLR 2019 • Gábor Melis, Charles Blundell, Tomáš Kočiský, Karl Moritz Hermann, Chris Dyer, Phil Blunsom
We show that dropout training is best understood as performing MAP estimation concurrently for a family of conditional models whose objectives are themselves lower bounded by the original dropout objective (the underlying bound is sketched after this entry).
Ranked #24 on Language Modelling on Penn Treebank (Word Level)
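For the entry above, a minimal sketch of the bound it alludes to, written in standard dropout notation rather than the paper's exact formulation: the per-example dropout objective averages log-likelihoods over dropout masks $m$, and Jensen's inequality gives

\[
\mathbb{E}_{m \sim p(m)}\big[\log p(y \mid x, m; \theta)\big] \;\le\; \log \mathbb{E}_{m \sim p(m)}\big[p(y \mid x, m; \theta)\big],
\]

so maximizing the left-hand side fits every mask-conditional model $p(y \mid x, m; \theta)$ concurrently (MAP estimation once a prior on $\theta$ is added) while lower bounding the log-likelihood of the mask-marginalized model.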
2 code implementations • TACL 2018 • Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, Edward Grefenstette
Reading comprehension (RC), in contrast to information retrieval, requires integrating information and reasoning about events, entities, and their relations across a full document.
Ranked #9 on Question Answering on NarrativeQA (BLEU-1 metric)
no code implementations • ICLR 2018 • Dirk Weissenborn, Tomáš Kočiský, Chris Dyer
Common-sense and background knowledge are required to understand natural language, but in most neural natural language understanding (NLU) systems this knowledge must be acquired from training corpora during learning, and it then remains static at test time.
Ranked #34 on Question Answering on TriviaQA
no code implementations • EMNLP 2016 • Tomáš Kočiský, Gábor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, Karl Moritz Hermann
We present a novel semi-supervised approach for sequence transduction and apply it to semantic parsing.
2 code implementations • ACL 2016 • Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Andrew Senior, Fumin Wang, Phil Blunsom
Many language generation tasks require the production of text conditioned on both structured and unstructured inputs.
Ranked #10 on Code Generation on Django
7 code implementations • 22 Sep 2015 • Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Phil Blunsom
We extend this LSTM-based model with a word-by-word neural attention mechanism that encourages reasoning over entailments of pairs of words and phrases (a minimal sketch follows this entry).
Ranked #83 on Natural Language Inference on SNLI
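To make the mechanism in the entry above concrete, here is a minimal, hypothetical PyTorch sketch of word-by-word attention in the spirit of that paper: for each hypothesis token, attention weights over all premise tokens are computed and folded into a running memory. The module and parameter names (WordByWordAttention, w_premise, and so on) are illustrative assumptions, not the authors' code, and the memory update is simplified.

```python
import torch
import torch.nn as nn

class WordByWordAttention(nn.Module):
    """Attend over premise states once per hypothesis token (illustrative sketch)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.w_premise = nn.Linear(hidden_size, hidden_size, bias=False)
        self.w_hypothesis = nn.Linear(hidden_size, hidden_size, bias=False)
        self.w_memory = nn.Linear(hidden_size, hidden_size, bias=False)
        self.v = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, premise: torch.Tensor, hypothesis: torch.Tensor) -> torch.Tensor:
        # premise: (batch, p_len, hidden); hypothesis: (batch, h_len, hidden),
        # e.g. the hidden states of an LSTM run over each sentence.
        batch, _, hidden = premise.shape
        r = premise.new_zeros(batch, hidden)          # running attention memory
        proj_p = self.w_premise(premise)              # (batch, p_len, hidden)
        for t in range(hypothesis.size(1)):
            h_t = hypothesis[:, t, :]                 # current hypothesis token state
            # Score every premise token against h_t and the previous memory.
            scores = self.v(torch.tanh(
                proj_p
                + self.w_hypothesis(h_t).unsqueeze(1)
                + self.w_memory(r).unsqueeze(1)
            )).squeeze(-1)                            # (batch, p_len)
            alpha = torch.softmax(scores, dim=-1)     # attention weights over premise
            # Attention-weighted premise summary becomes the new memory.
            r = torch.bmm(alpha.unsqueeze(1), premise).squeeze(1)
        return r
```

A classifier would then combine the final memory r with the last hypothesis state before a softmax over the three SNLI labels (entailment, contradiction, neutral).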
11 code implementations • NeurIPS 2015 • Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, Phil Blunsom
Teaching machines to read natural language documents remains an elusive challenge.
Ranked #13 on Question Answering on CNN / Daily Mail
no code implementations • ACL 2014 • Tomáš Kočiský, Karl Moritz Hermann, Phil Blunsom
We present a probabilistic model that simultaneously learns alignments and distributed representations for bilingual data.