no code implementations • EMNLP (NLPOSS) 2020 • Vincent Warmerdam, Thomas Kober, Rachael Tatman
We introduce whatlies, an open source toolkit for visually inspecting word and sentence embeddings.
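The kind of embedding inspection whatlies supports can be illustrated with a library-free sketch: comparing word vectors by cosine similarity. The toy 3-dimensional vectors below are made up for illustration; this is not the whatlies API, which wraps real embedding backends.

```python
# Toy vectors, invented for illustration only.
import math

vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "cat":   [0.1, 0.5, 0.5],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# "king" and "queen" share their first dimension, so they end up
# closer to each other than either is to "cat".
king_queen = cosine(vectors["king"], vectors["queen"])
king_cat = cosine(vectors["king"], vectors["cat"])
print(king_queen > king_cat)
```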
no code implementations • 5 Jul 2022 • Malihe Alikhani, Thomas Kober, Bashar Alhafni, Yue Chen, Mert Inan, Elizabeth Nielsen, Shahab Raji, Mark Steedman, Matthew Stone
Typologically diverse languages offer systems of lexical and grammatical aspect that allow speakers to focus on facets of event structure in ways that comport with the specific communicative setting and discourse constraints they face.
no code implementations • COLING 2020 • Thomas Kober, Malihe Alikhani, Matthew Stone, Mark Steedman
The interpretation of the lexical aspect of verbs in English plays a crucial role in recognizing textual entailment and learning discourse-level inferences.
1 code implementation • 22 Oct 2020 • Johannes E. M. Mosig, Shikib Mehri, Thomas Kober
We present STAR, a schema-guided task-oriented dialog dataset consisting of 127,833 utterances and knowledge base queries across 5,820 task-oriented dialogs in 13 domains that is especially designed to facilitate task and domain transfer learning in task-oriented dialog.
1 code implementation • EACL 2021 • Thomas Kober, Julie Weeds, Lorenzo Bertolini, David Weir
The automatic detection of hypernymy relationships represents a challenging problem in NLP.
1 code implementation • WS 2019 • Thomas Kober, Sander Bijl de Vroe, Mark Steedman
Inferences regarding "Jane's arrival in London" from predications such as "Jane is going to London" or "Jane has gone to London" depend on tense and aspect of the predications.
1 code implementation • ACL 2017 • Thomas Kober, Julie Weeds, Jeremy Reffin, David Weir
Count-based distributional semantic models suffer from sparsity due to unobserved but plausible co-occurrences in any text collection.
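The sparsity problem described here can be made concrete with a small sketch: build a word–context co-occurrence matrix from a toy two-sentence corpus (with a symmetric window of 1, an illustrative assumption) and count how many plausible word pairs are never observed.

```python
# Count co-occurrences in a toy corpus and measure sparsity.
from collections import Counter

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]

window = 1  # symmetric context window, chosen for illustration
cooc = Counter()
vocab = set()
for sent in corpus:
    vocab.update(sent)
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                cooc[(w, sent[j])] += 1

observed = len(cooc)
possible = len(vocab) * (len(vocab) - 1)  # ordered pairs of distinct words
print(f"{observed} observed of {possible} possible pairs")
# A plausible pair like ("cat", "mat") is never observed in this window,
# which is exactly the gap that distributional inference aims to fill.
```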
no code implementations • EACL 2017 • Julie Weeds, Thomas Kober, Jeremy Reffin, David Weir
Non-compositional phrases such as "red herring" and weakly compositional phrases such as "spelling bee" are an integral part of natural language (Sag, 2002).
1 code implementation • WS 2017 • Thomas Kober, Julie Weeds, John Wilkie, Jeremy Reffin, David Weir
In this paper, we investigate whether an a priori disambiguation of word senses is strictly necessary or whether the meaning of a word in context can be disambiguated through composition alone.
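The idea of disambiguation through composition can be sketched in miniature: an ambiguous word's vector mixes its senses, and composing it with a disambiguating neighbour shifts the phrase towards the intended sense. The vectors below, over three hand-picked context dimensions, are invented for illustration, and pointwise addition stands in for the composition function as a common baseline.

```python
# Toy sense disambiguation via additive composition.
dims = ("finance", "water", "food")
vec = {
    "bank":  [0.5, 0.5, 0.0],   # ambiguous: financial vs. river sense
    "money": [0.9, 0.0, 0.1],
    "river": [0.0, 0.9, 0.1],
}

def compose(u, v):
    """Pointwise addition, a simple baseline composition function."""
    return [a + b for a, b in zip(u, v)]

def dominant_dim(v):
    """Return the context dimension with the highest weight."""
    return dims[max(range(len(v)), key=v.__getitem__)]

bank_money = dominant_dim(compose(vec["bank"], vec["money"]))
bank_river = dominant_dim(compose(vec["bank"], vec["river"]))
print(bank_money)  # finance
print(bank_river)  # water
```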
no code implementations • CL 2016 • David Weir, Julie Weeds, Jeremy Reffin, Thomas Kober
We present a new framework for compositional distributional semantics in which the distributional contexts of lexemes are expressed in terms of anchored packed dependency trees.
1 code implementation • EMNLP 2016 • Thomas Kober, Julie Weeds, Jeremy Reffin, David Weir
Distributional models are derived from co-occurrences in a corpus, where only a small proportion of all possible plausible co-occurrences will be observed.