Search Results for author: Tomáš Kočiský

Found 11 papers, 8 papers with code

Mogrifier LSTM

3 code implementations ICLR 2020 Gábor Melis, Tomáš Kočiský, Phil Blunsom

Many advances in Natural Language Processing have been based upon more expressive models for how inputs interact with the context in which they occur.

Language Modelling
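The Mogrifier LSTM's core idea — letting the input and the previous hidden state modulate each other before the usual LSTM update — can be sketched roughly as below. This is an illustrative NumPy sketch, not the authors' implementation; the matrices `Q` and `R` stand in for the learned (low-rank, in the paper) projections, and the round count is a hyperparameter.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mogrify(x, h, Q, R, rounds=5):
    """Alternately gate the input x and previous hidden state h
    before the standard LSTM cell update (sketch of the technique)."""
    for i in range(1, rounds + 1):
        if i % 2 == 1:            # odd rounds: h modulates x
            x = 2 * sigmoid(Q @ h) * x
        else:                     # even rounds: x modulates h
            h = 2 * sigmoid(R @ x) * h
    return x, h

rng = np.random.default_rng(0)
d = 4
x, h = rng.standard_normal(d), rng.standard_normal(d)
Q, R = rng.standard_normal((d, d)), rng.standard_normal((d, d))
x2, h2 = mogrify(x, h, Q, R)
print(x2.shape, h2.shape)  # the mogrified pair then feeds a normal LSTM cell
```

The `2 * sigmoid(...)` factor keeps the expected scale of the gated vectors near their inputs, since a plain sigmoid would shrink them on every round.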

Encoding Spatial Relations from Natural Language

1 code implementation 4 Jul 2018 Tiago Ramalho, Tomáš Kočiský, Frederic Besse, S. M. Ali Eslami, Gábor Melis, Fabio Viola, Phil Blunsom, Karl Moritz Hermann

Natural language processing has made significant inroads into learning the semantics of words through distributional approaches; however, representations learnt via these methods fail to capture certain kinds of information implicit in the real world.

Pushing the bounds of dropout

1 code implementation ICLR 2019 Gábor Melis, Charles Blundell, Tomáš Kočiský, Karl Moritz Hermann, Chris Dyer, Phil Blunsom

We show that dropout training is best understood as performing MAP estimation concurrently for a family of conditional models whose objectives are themselves lower bounded by the original dropout objective.

Language Modelling
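The MAP-estimation view of dropout can be illustrated with a toy linear model: each sampled dropout mask selects a conditional sub-model, and a gradient step with weight decay is a MAP step (Gaussian prior on the weights) for that sub-model. This is a minimal sketch of the framing only, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear model with inverted dropout on the inputs.
w = rng.standard_normal(5)
x, y = rng.standard_normal(5), 1.0
keep = 0.8
m = (rng.random(5) < keep) / keep      # mask m picks one conditional sub-model
pred = (m * x) @ w
grad = (pred - y) * (m * x)            # squared-error gradient for that sub-model
w -= 0.1 * (grad + 1e-2 * w)           # weight decay ~ negative log-prior (MAP)
print(w.shape)
```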

The NarrativeQA Reading Comprehension Challenge

2 code implementations TACL 2018 Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, Edward Grefenstette

Reading comprehension (RC)---in contrast to information retrieval---requires integrating information and reasoning about events, entities, and their relations across a full document.

Ranked #9 on Question Answering on NarrativeQA (BLEU-1 metric)

Information Retrieval Question Answering +2

Dynamic Integration of Background Knowledge in Neural NLU Systems

no code implementations ICLR 2018 Dirk Weissenborn, Tomáš Kočiský, Chris Dyer

Common-sense and background knowledge are required to understand natural language, but in most neural natural language understanding (NLU) systems this knowledge must be acquired from training corpora during learning, after which it remains static at test time.


Common Sense Reasoning Natural Language Inference +3

Reasoning about Entailment with Neural Attention

7 code implementations22 Sep 2015 Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Phil Blunsom

We extend this model with a word-by-word neural attention mechanism that encourages reasoning over entailments of pairs of words and phrases.

Natural Language Inference
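The word-by-word attention described above can be sketched as follows: for each hypothesis token, attend over all premise token states and form a context vector. This simplified sketch uses dot-product scoring, whereas the paper uses a learned scorer; the shapes and names are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def word_by_word_attention(premise_states, hypothesis_states):
    """For each hypothesis token state, compute attention weights over
    the premise token states and return one context vector per token."""
    contexts = []
    for h_t in hypothesis_states:
        scores = premise_states @ h_t            # (n_premise,)
        alpha = softmax(scores)                  # attention weights
        contexts.append(alpha @ premise_states)  # weighted premise summary
    return np.stack(contexts)

rng = np.random.default_rng(1)
P = rng.standard_normal((6, 8))  # 6 premise tokens, dim 8
H = rng.standard_normal((4, 8))  # 4 hypothesis tokens
C = word_by_word_attention(P, H)
print(C.shape)  # (4, 8): one premise context vector per hypothesis token
```

Conditioning each hypothesis word on a soft alignment to the premise is what lets the model reason over entailments between individual word and phrase pairs rather than over whole-sentence encodings alone.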

Learning Bilingual Word Representations by Marginalizing Alignments

no code implementations ACL 2014 Tomáš Kočiský, Karl Moritz Hermann, Phil Blunsom

We present a probabilistic model that simultaneously learns alignments and distributed representations for bilingual data.

General Classification
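Marginalizing over alignments, as in the bilingual model above, means summing out the latent alignment variable when scoring a target word given the source sentence: p(t | S) = sum_a p(a | S) p(t | S_a). A toy NumPy sketch of that marginalization (all names and dimensions are illustrative, not the paper's parameterization):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(3)
S = rng.standard_normal((5, 8))        # 5 source word embeddings, dim 8
W = rng.standard_normal((8, 10))       # projects an embedding to a 10-word target vocab
align_logits = rng.standard_normal(5)  # unnormalized alignment scores

p_align = softmax(align_logits)        # p(a | S): which source word aligns
p_t_given_a = softmax(S @ W)           # p(t | S_a): target dist per source position
p_t = p_align @ p_t_given_a            # marginal p(t | S), summed over alignments
print(p_t.shape)  # distribution over the 10-word target vocabulary
```

Because both factors are normalized distributions, the marginal `p_t` sums to one; training such a model jointly updates the alignment scores and the word representations, as the abstract describes.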
