Implicatures
6 papers with code • 1 benchmark • 1 dataset
Most implemented papers
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.
Training Compute-Optimal Large Language Models
We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget.
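This analysis (the Chinchilla paper) reduces to a simple rule of thumb: with training compute approximated as C ≈ 6ND, parameters N and training tokens D should grow in equal proportion, at roughly 20 tokens per parameter. A minimal sketch of that arithmetic, using approximate constants from the paper rather than its exact fitted scaling laws:

```python
def compute_optimal_allocation(flops_budget: float, tokens_per_param: float = 20.0):
    """Split a FLOPs budget between model size and training tokens.

    Uses the approximation C = 6 * N * D with D = tokens_per_param * N,
    so N = sqrt(C / (6 * tokens_per_param)).
    """
    n_params = (flops_budget / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Example: a ~5.8e23 FLOPs budget recovers roughly Chinchilla's
# 70B-parameter / 1.4T-token configuration.
n, d = compute_optimal_allocation(5.8e23)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```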
Are Natural Language Inference Models IMPPRESsive? Learning IMPlicature and PRESupposition
We use IMPPRES to evaluate whether BERT, InferSent, and BOW NLI models trained on MultiNLI (Williams et al., 2018) learn to make pragmatic inferences.
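A sketch of this style of evaluation on one scalar-implicature pair, using a MultiNLI-trained checkpoint (roberta-large-mnli here is a stand-in; the paper evaluates BERT, InferSent, and BOW models). Pragmatically, "some" implicates "not all", so the pair below reads as a contradiction, while a purely logical reading calls it neutral:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "Some of the students passed the exam."
hypothesis = "All of the students passed the exam."

# NLI models score the premise/hypothesis pair as one sequence-pair input.
inputs = tok(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)[0]

labels = [model.config.id2label[i] for i in range(len(probs))]
print(dict(zip(labels, probs.tolist())))
```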
Interactive Acquisition of Fine-grained Visual Concepts by Exploiting Semantics of Generic Characterizations in Discourse
Interactive Task Learning (ITL) concerns learning about unforeseen domain concepts via natural interactions with human users.
Probing Large Language Models for Scalar Adjective Lexical Semantics and Scalar Diversity Pragmatics
In this study, we probe different families of Large Language Models such as GPT-4 for their knowledge of the lexical semantics of scalar adjectives and one specific aspect of their pragmatics, namely scalar diversity.
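A hypothetical probe in the spirit of this study; the prompt wording, adjective pair, and model name are illustrative, not the paper's actual protocol:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Scalar adjectives sit on a shared intensity scale (e.g. warm < hot);
# the probe asks the model to identify the stronger member of the pair.
prompt = (
    "On a scale of intensity, which adjective is stronger: 'warm' or 'hot'? "
    "Answer with the single stronger adjective."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```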