Coreference Resolution
261 papers with code • 16 benchmarks • 43 datasets
Coreference resolution is the task of clustering mentions in text that refer to the same underlying real-world entities.
Example:

"I voted for Obama because he was most aligned with my values", she said.

"I", "my", and "she" belong to one cluster, and "Obama" and "he" belong to another.
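Turning pairwise coreference decisions into clusters like these is commonly done with a union-find (disjoint-set) pass over the predicted links. A minimal sketch, assuming the pairwise links for the example sentence are already given (in a real system they would come from a trained mention-pair or span-ranking scorer):

```python
from collections import defaultdict

# Mentions from the example sentence, and hypothetical predicted
# coreferent pairs (in practice these come from a learned scorer).
mentions = ["I", "Obama", "he", "my", "she"]
links = [("I", "my"), ("my", "she"), ("Obama", "he")]

# Union-find: each mention starts as its own cluster.
parent = {m: m for m in mentions}

def find(m):
    # Follow parent pointers to the cluster root, with path compression.
    while parent[m] != m:
        parent[m] = parent[parent[m]]
        m = parent[m]
    return m

def union(a, b):
    # Merge the clusters containing a and b.
    parent[find(a)] = find(b)

for a, b in links:
    union(a, b)

# Group mentions by their cluster root.
clusters = defaultdict(list)
for m in mentions:
    clusters[find(m)].append(m)

print(sorted(sorted(c) for c in clusters.values()))
# → [['I', 'my', 'she'], ['Obama', 'he']]
```

This recovers exactly the two clusters from the example: the speaker ("I", "my", "she") and the entity voted for ("Obama", "he").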
Libraries
Use these libraries to find Coreference Resolution models and implementations.
Most implemented papers
WinoGrande: An Adversarial Winograd Schema Challenge at Scale
The key steps of the dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations.
A Hybrid Neural Network Model for Commonsense Reasoning
An HNN consists of two component models, a masked language model and a semantic similarity model, which share a BERT-based contextual encoder but use different model-specific input and output layers.
Multi-hop Question Answering via Reasoning Chains
Our analysis shows the properties of chains that are crucial for high performance: in particular, modeling extraction sequentially is important, as is dealing with each candidate sentence in a context-aware way.
An Annotated Dataset of Coreference in English Literature
We present in this work a new dataset of coreference annotations for works of literature in English, covering 29,103 mentions in 210,532 tokens from 100 works of fiction.
Learning to Ignore: Long Document Coreference with Bounded Memory Neural Networks
Long document coreference resolution remains a challenging task due to the large memory and runtime requirements of current models.
The MultiBERTs: BERT Reproductions for Robustness Analysis
Experiments with pre-trained models such as BERT are often based on a single checkpoint.
Ask Me Anything: A simple strategy for prompting language models
Prompting is a brittle process wherein small modifications to the prompt can cause large variations in the model predictions, and therefore significant effort is dedicated towards designing a painstakingly "perfect prompt" for a task.
Hungry Hungry Hippos: Towards Language Modeling with State Space Models
First, we use synthetic language modeling tasks to understand the gap between SSMs and attention.
Dynamic Entity Representations in Neural Language Models
Understanding a long document requires tracking how entities are introduced and evolve over time.
A Simple Method for Commonsense Reasoning
Commonsense reasoning is a long-standing challenge for deep learning.