Coreference Resolution
258 papers with code • 16 benchmarks • 43 datasets
Coreference resolution is the task of clustering mentions in text that refer to the same underlying real world entities.
Example:

"I voted for Obama because he was most aligned with my values," she said.

Here "I", "my", and "she" belong to one cluster, and "Obama" and "he" belong to another.
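The clusters in the example above can be represented concretely as groups of mention spans. A minimal sketch (all names here are illustrative, not from any particular coreference library):

```python
# Represent coreference clusters as groups of character-offset mention
# spans over the example sentence. Purely illustrative data structure,
# not tied to any specific library's API.

text = '"I voted for Obama because he was most aligned with my values," she said.'

def find_span(snippet: str) -> tuple[int, int]:
    """Locate the first occurrence of a mention and return (start, end) offsets."""
    i = text.index(snippet)
    return (i, i + len(snippet))

# Each cluster is a list of spans whose mentions refer to the same entity.
speaker_cluster = [find_span("I"), find_span("my"), find_span("she")]
obama_cluster = [find_span("Obama"), find_span("he")]

for name, spans in {"speaker": speaker_cluster, "Obama": obama_cluster}.items():
    mentions = [text[s:e] for s, e in spans]
    print(name, "->", mentions)
```

A real coreference system would predict both the mention spans and their grouping; here the spans are given by hand to show the target output format.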
Libraries
Use these libraries to find Coreference Resolution models and implementations.
Latest papers
CHAMP: Efficient Annotation and Consolidation of Cluster Hierarchies
Various NLP tasks require a complex hierarchical structure over nodes, where each node is a cluster of items.
Investigating Multilingual Coreference Resolution by Universal Annotations
Multilingual coreference resolution (MCR) has been a long-standing and challenging task.
CorefPrompt: Prompt-based Event Coreference Resolution by Measuring Event Type and Argument Compatibilities
Event coreference resolution (ECR) aims to group event mentions referring to the same real-world event into clusters.
Seq2seq is All You Need for Coreference Resolution
Existing works on coreference resolution suggest that task-specific models are necessary to achieve state-of-the-art performance.
Semi-supervised multimodal coreference resolution in image narrations
In this paper, we study multimodal coreference resolution, specifically where a longer descriptive text, i.e., a narration, is paired with an image.
CAW-coref: Conjunction-Aware Word-level Coreference Resolution
State-of-the-art coreference resolution systems depend on multiple LLM calls per document and are thus prohibitively expensive for many use cases (e.g., information extraction with large corpora).
Incorporating Singletons and Mention-based Features in Coreference Resolution via Multi-task Learning for Better Generalization
Previous attempts to incorporate a mention detection step into end-to-end neural coreference resolution for English have been hampered by the lack of singleton mention span data as well as other entity information.
Collecting Visually-Grounded Dialogue with A Game Of Sorts
We address these concerns by introducing a collaborative image ranking task, a grounded agreement game we call "A Game Of Sorts".
RGAT: A Deeper Look into Syntactic Dependency Information for Coreference Resolution
Our experiments on a public Gendered Ambiguous Pronouns (GAP) dataset show that with supervised learning of the syntactic dependency graph and without fine-tuning the entire BERT, we increased the F1-score of the previous best model (RGCN-with-BERT) from 80.3% to 82.5%, compared to raising the F1-score of single BERT embeddings from 78.5% to 82.5%.
Similarity-based Memory Enhanced Joint Entity and Relation Extraction
Document-level joint entity and relation extraction is a challenging information extraction problem that requires a unified approach where a single neural network performs four sub-tasks: mention detection, coreference resolution, entity classification, and relation extraction.