Search Results for author: Jeremy Reffin

Found 11 papers, 4 papers with code

Causal datasheet: An approximate guide to practically assess Bayesian networks in the real world

no code implementations 12 Mar 2020 Bradley Butcher, Vincent S. Huang, Jeremy Reffin, Sema K. Sgaier, Grace Charles, Novi Quadrianto

Here we propose a causal extension to the datasheet concept of Gebru et al. (2018) to include approximate BN performance expectations for any given dataset.

Improving Semantic Composition with Offset Inference

1 code implementation ACL 2017 Thomas Kober, Julie Weeds, Jeremy Reffin, David Weir

Count-based distributional semantic models suffer from sparsity due to unobserved but plausible co-occurrences in any text collection.

Semantic Composition

When a Red Herring is Not a Red Herring: Using Compositional Methods to Detect Non-Compositional Phrases

no code implementations EACL 2017 Julie Weeds, Thomas Kober, Jeremy Reffin, David Weir

Non-compositional phrases such as "red herring" and weakly compositional phrases such as "spelling bee" are an integral part of natural language (Sag, 2002).

One Representation per Word - Does it make Sense for Composition?

1 code implementation WS 2017 Thomas Kober, Julie Weeds, John Wilkie, Jeremy Reffin, David Weir

In this paper, we investigate whether an a priori disambiguation of word senses is strictly necessary or whether the meaning of a word in context can be disambiguated through composition alone.
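The idea that composition alone can resolve a word's sense can be sketched in a toy vector space. The example below uses invented two-dimensional vectors and simple additive composition; it is not the paper's models or data, only an illustration of the mechanism:

```python
# Toy sketch: composing an ambiguous word ("bank") with a context word
# pulls the phrase vector toward one sense, with no prior disambiguation.
import math

# Dimensions: [finance, river] -- a deliberately tiny, hand-crafted space.
vecs = {
    "bank":    [1.0, 1.0],   # ambiguous between both senses
    "account": [2.0, 0.1],
    "steep":   [0.1, 2.0],
    "money":   [2.0, 0.0],
    "water":   [0.0, 2.0],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def compose(u, v):
    """Simple additive composition."""
    return [a + b for a, b in zip(u, v)]

bank_account = compose(vecs["bank"], vecs["account"])
steep_bank = compose(vecs["steep"], vecs["bank"])

# Each composed phrase is now closer to the contextually appropriate sense:
print(cosine(bank_account, vecs["money"]) > cosine(bank_account, vecs["water"]))  # True
print(cosine(steep_bank, vecs["water"]) > cosine(steep_bank, vecs["money"]))      # True
```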

Aligning Packed Dependency Trees: a theory of composition for distributional semantics

no code implementations CL 2016 David Weir, Julie Weeds, Jeremy Reffin, Thomas Kober

We present a new framework for compositional distributional semantics in which the distributional contexts of lexemes are expressed in terms of anchored packed dependency trees.

Improving Sparse Word Representations with Distributional Inference for Semantic Composition

1 code implementation EMNLP 2016 Thomas Kober, Julie Weeds, Jeremy Reffin, David Weir

Distributional models are derived from co-occurrences in a corpus, where only a small proportion of all possible plausible co-occurrences will be observed.
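One simple form of distributional inference over sparse count vectors can be sketched as follows: enrich a word's vector with down-weighted features from its nearest neighbour, so plausible-but-unobserved co-occurrences receive nonzero weight. The vectors, neighbour selection, and weighting here are illustrative assumptions, not the EMNLP 2016 implementation:

```python
# Hedged sketch of distributional inference for sparse count vectors.
import math
from collections import Counter

vectors = {
    "cat": Counter({"sat": 2, "mat": 1, "purred": 1}),
    "dog": Counter({"sat": 2, "rug": 1, "barked": 1}),
    "car": Counter({"drove": 3, "road": 2}),
}

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def infer(word, vectors, weight=0.5):
    """Add the nearest neighbour's features, down-weighted, to `word`'s vector."""
    neighbour = max((w for w in vectors if w != word),
                    key=lambda w: cosine(vectors[word], vectors[w]))
    enriched = Counter(vectors[word])
    for feat, count in vectors[neighbour].items():
        enriched[feat] += weight * count
    return enriched

enriched_cat = infer("cat", vectors)
# "rug" was never observed with "cat" but is inferred from the neighbour "dog":
print(enriched_cat["rug"])  # 0.5
```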

Semantic Composition

Word Similarity
