Search Results for author: Cassandra L. Jacobs

Found 8 papers, 2 papers with code

CMCL 2021 Shared Task on Eye-Tracking Prediction

no code implementations NAACL (CMCL) 2021 Nora Hollenstein, Emmanuele Chersoni, Cassandra L. Jacobs, Yohei Oseki, Laurent Prévot, Enrico Santus

The goal of the task is to predict 5 different token-level eye-tracking metrics of the Zurich Cognitive Language Processing Corpus (ZuCo).

The Viability of Best-worst Scaling and Categorical Data Label Annotation Tasks in Detecting Implicit Bias

no code implementations NLPerspectives (LREC) 2022 Parker Glenn, Cassandra L. Jacobs, Marvin Thielk, Yi Chu

We identify several shortcomings of BWS relative to traditional categorical annotation: (1) When compared to categorical annotation, we estimate BWS takes approximately 4.5x longer to complete; (2) BWS does not scale well to large annotation tasks with sparse target phenomena; (3) The high correlation between BWS and the traditional task shows that the benefits of BWS can be recovered from a simple categorically annotated, non-aggregated dataset.

The distribution of discourse relations within and across turns in spontaneous conversation

no code implementations 7 Jul 2023 S. Magalí López Cortez, Cassandra L. Jacobs

Time pressure and topic negotiation may impose constraints on how people leverage discourse relations (DRs) in spontaneous conversational contexts.

Lost in Space Marking

no code implementations 2 Aug 2022 Cassandra L. Jacobs, Yuval Pinter

We look at a decision taken early in training a subword tokenizer, namely whether it should be the word-initial token that carries a special mark, or the word-final one.
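The two marking conventions the abstract contrasts can be sketched as follows. This is a toy illustration only, not code from the paper: the helper names and the specific marker strings ("▁" for word-initial, in the style of SentencePiece, and "</w>" for word-final, in the style of the original BPE implementation) are assumptions chosen for clarity.

```python
# Toy sketch of the two subword-boundary marking conventions:
# attach a special mark to the word-initial piece, or to the
# word-final piece. Helper names and markers are illustrative.

def mark_word_initial(pieces):
    """Prefix the first subword of a word with a boundary marker."""
    return ["▁" + pieces[0]] + pieces[1:]

def mark_word_final(pieces):
    """Suffix the last subword of a word with a boundary marker."""
    return pieces[:-1] + [pieces[-1] + "</w>"]

# A hypothetical subword split of "unblendable":
pieces = ["un", "blend", "able"]
print(mark_word_initial(pieces))  # ['▁un', 'blend', 'able']
print(mark_word_final(pieces))    # ['un', 'blend', 'able</w>']
```

Downstream models see only the marked pieces, so this seemingly cosmetic choice changes which subword strings the vocabulary contains and therefore how it generalizes.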

Will it Unblend?

1 code implementation SCiL 2021 Yuval Pinter, Cassandra L. Jacobs, Jacob Eisenstein

Natural language processing systems often struggle with out-of-vocabulary (OOV) terms, which do not appear in training data.

NYTWIT: A Dataset of Novel Words in the New York Times

1 code implementation COLING 2020 Yuval Pinter, Cassandra L. Jacobs, Max Bittker

We present baseline results for both uncontextual and contextual prediction of novelty class, showing that there is room for improvement even for state-of-the-art NLP systems.
