Coreference Resolution

261 papers with code • 16 benchmarks • 43 datasets

Coreference resolution is the task of clustering mentions in text that refer to the same underlying real-world entity.

Example:

             +-------------+
             |             |
"I voted for Obama because he was most aligned with my values," she said.
 |                                                  |           |
 +--------------------------------------------------+-----------+

"I", "my", and "she" belong to one cluster, while "Obama" and "he" belong to another.
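The clustering step can be made concrete with a minimal sketch: given mentions and pairwise coreference links (which a real resolver would predict; here they are hard-coded from the example above), union-find merges linked mentions into clusters. All names below are illustrative.

```python
def cluster_mentions(mentions, links):
    """Merge mentions connected by coreference links into clusters (union-find)."""
    parent = {m: m for m in mentions}

    def find(m):
        # Follow parent pointers to the cluster representative.
        while parent[m] != m:
            parent[m] = parent[parent[m]]  # path halving
            m = parent[m]
        return m

    for a, b in links:
        parent[find(a)] = find(b)  # union the two mentions' clusters

    clusters = {}
    for m in mentions:
        clusters.setdefault(find(m), set()).add(m)
    return list(clusters.values())

# Mentions and links from the example sentence (links hand-specified here).
mentions = ["I", "Obama", "he", "my", "she"]
links = [("I", "my"), ("my", "she"), ("Obama", "he")]
print(cluster_mentions(mentions, links))
# Two clusters: {"I", "my", "she"} and {"Obama", "he"}
```

Union-find is a common way to turn a model's pairwise antecedent decisions into the entity clusters that coreference metrics score.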

Most implemented papers

WinoGrande: An Adversarial Winograd Schema Challenge at Scale

vered1986/self_talk 24 Jul 2019

The key steps of the dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations.
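The iterative-filtering idea behind step (2) can be sketched as follows. This is a simplified stand-in, not the paper's AfLite: the real algorithm trains linear classifiers over precomputed embeddings, whereas here a nearest-centroid probe plays that role, and every function name and parameter is illustrative.

```python
import random

def _centroid_predict(train, point):
    """Nearest-centroid probe: predict the label whose class centroid
    (mean embedding) is closest to the point."""
    sums, counts = {}, {}
    for x, y in train:
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums.get(y, [0.0] * len(x)), x)]
    best, best_d = None, float("inf")
    for y, s in sums.items():
        centroid = [v / counts[y] for v in s]
        d = sum((a - b) ** 2 for a, b in zip(point, centroid))
        if d < best_d:
            best, best_d = y, d
    return best

def aflite_sketch(X, y, n_partitions=16, cut_size=2, threshold=0.75, min_size=8):
    """Iteratively drop instances that out-of-sample probes classify too
    reliably, i.e. instances likely carrying dataset-specific bias."""
    idx = list(range(len(X)))
    while len(idx) > min_size:
        correct = {i: 0 for i in idx}
        seen = {i: 0 for i in idx}
        for _ in range(n_partitions):
            random.shuffle(idx)
            half = len(idx) // 2
            train = [(X[i], y[i]) for i in idx[:half]]
            for i in idx[half:]:  # evaluate held-out instances only
                seen[i] += 1
                if _centroid_predict(train, X[i]) == y[i]:
                    correct[i] += 1
        # Predictability score: fraction of held-out evaluations answered correctly.
        scores = {i: correct[i] / seen[i] for i in idx if seen[i]}
        cut = sorted((i for i in scores if scores[i] > threshold),
                     key=lambda i: -scores[i])[:cut_size]
        if not cut:
            break
        idx = [i for i in idx if i not in cut]
    return sorted(idx)
```

Calling `aflite_sketch(embeddings, labels)` returns the indices of the instances that survive filtering; the removed instances are the ones any simple probe gets right, which is the signature of an exploitable artifact rather than genuine reasoning.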

A Hybrid Neural Network Model for Commonsense Reasoning

namisan/mt-dnn WS 2019

An HNN consists of two component models, a masked language model and a semantic similarity model, which share a BERT-based contextual encoder but use different model-specific input and output layers.
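The shared-encoder layout described above can be sketched structurally. This is a toy stand-in only: the encoder and both heads below are simple list arithmetic rather than the paper's BERT-based components, and all names are illustrative; the point is that two task-specific heads consume states from one shared encoder.

```python
def shared_encoder(tokens):
    """Stand-in contextual encoder: maps each token to one vector.
    (In the HNN this role is played by a shared BERT encoder.)"""
    return [[float(len(tok)), float(pos)] for pos, tok in enumerate(tokens)]

def masked_lm_score(tokens, mask_pos, candidate):
    """Toy 'masked LM' head: score a candidate filler for the masked slot."""
    filled = tokens[:mask_pos] + [candidate] + tokens[mask_pos + 1:]
    states = shared_encoder(filled)
    return sum(states[mask_pos])  # toy scoring over the filled position

def similarity_score(tokens_a, tokens_b):
    """Toy 'semantic similarity' head over the same shared encoder."""
    a = shared_encoder(tokens_a)
    b = shared_encoder(tokens_b)
    pooled_a = [sum(col) / len(a) for col in zip(*a)]
    pooled_b = [sum(col) / len(b) for col in zip(*b)]
    return sum(x * y for x, y in zip(pooled_a, pooled_b))
```

Both heads call the same `shared_encoder`, mirroring the HNN design in which the two component models differ only in their input and output layers.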

Multi-hop Question Answering via Reasoning Chains

soujanyarbhat/aNswER_multirc 7 Oct 2019

Our analysis identifies the properties of chains that are crucial for high performance: in particular, modeling extraction sequentially is important, as is handling each candidate sentence in a context-aware way.

An Annotated Dataset of Coreference in English Literature

dbamman/litbank LREC 2020

We present in this work a new dataset of coreference annotations for works of literature in English, covering 29,103 mentions in 210,532 tokens from 100 works of fiction.

Learning to Ignore: Long Document Coreference with Bounded Memory Neural Networks

shtoshni92/long-doc-coref EMNLP 2020

Long document coreference resolution remains a challenging task due to the large memory and runtime requirements of current models.

The MultiBERTs: BERT Reproductions for Robustness Analysis

google-research/language ICLR 2022

Experiments with pre-trained models such as BERT are often based on a single checkpoint.

Ask Me Anything: A simple strategy for prompting language models

hazyresearch/ama_prompting 5 Oct 2022

Prompting is a brittle process wherein small modifications to the prompt can cause large variations in the model predictions, and therefore significant effort is dedicated towards designing a painstakingly "perfect prompt" for a task.

Hungry Hungry Hippos: Towards Language Modeling with State Space Models

hazyresearch/h3 28 Dec 2022

First, we use synthetic language modeling tasks to understand the gap between SSMs and attention.

Dynamic Entity Representations in Neural Language Models

smartschat/cort EMNLP 2017

Understanding a long document requires tracking how entities are introduced and evolve over time.

A Simple Method for Commonsense Reasoning

tensorflow/models 7 Jun 2018

Commonsense reasoning is a long-standing challenge for deep learning.