Coreference Resolution
262 papers with code • 16 benchmarks • 43 datasets
Coreference resolution is the task of clustering mentions in text that refer to the same underlying real-world entities.
Example:

    "I voted for Obama because he was most aligned with my values," she said.
"I", "my", and "she" belong to the same cluster and "Obama" and "he" belong to the same cluster.
Libraries
Use these libraries to find Coreference Resolution models and implementations.
Latest papers
RGAT: A Deeper Look into Syntactic Dependency Information for Coreference Resolution
Our experiments on the public Gendered Ambiguous Pronouns (GAP) dataset show that, with supervised learning of the syntactic dependency graph and without fine-tuning the entire BERT, we raise the F1-score of the previous best model (RGCN-with-BERT) from 80.3% to 82.5%, and the F1-score of single BERT embeddings from 78.5% to 82.5%.
Similarity-based Memory Enhanced Joint Entity and Relation Extraction
Document-level joint entity and relation extraction is a challenging information extraction problem that requires a unified approach where a single neural network performs four sub-tasks: mention detection, coreference resolution, entity classification, and relation extraction.
How Good is the Model in Model-in-the-loop Event Coreference Resolution Annotation?
Annotating cross-document event coreference links is a time-consuming and cognitively demanding task that can compromise annotation quality and efficiency.
GENTLE: A Genre-Diverse Multilayer Challenge Set for English NLP and Linguistic Evaluation
We evaluate state-of-the-art NLP systems on GENTLE and find severe degradation for at least some genres in their performance on all tasks, which indicates GENTLE's utility as an evaluation dataset for NLP systems.
Light Coreference Resolution for Russian with Hierarchical Discourse Features
Our best model employing rhetorical distance between mentions has ranked 1st on the development set (74.6% F1) and 2nd on the test set (73.3% F1) of the Shared Task.
Sentence-Incremental Neural Coreference Resolution
We propose a sentence-incremental neural coreference resolution system which incrementally builds clusters after marking mention boundaries in a shift-reduce method.
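The shift-reduce idea behind incremental clustering can be sketched in a few lines. This is a hypothetical illustration, not the paper's model: the string-match scorer and threshold stand in for the learned compatibility function; the names are made up.

```python
# Hypothetical sketch of shift-reduce cluster building: each new mention is
# either merged into an existing cluster (reduce) or opens a new one (shift).
from typing import List

def score(mention: str, cluster: List[str]) -> float:
    """Toy compatibility score: 1.0 on exact string match, else 0.0.
    A real system would use a learned mention-cluster scorer here."""
    return 1.0 if any(mention.lower() == m.lower() for m in cluster) else 0.0

def incremental_resolve(mentions: List[str], threshold: float = 0.5) -> List[List[str]]:
    clusters: List[List[str]] = []
    for mention in mentions:
        scores = [score(mention, c) for c in clusters]
        if scores and max(scores) >= threshold:
            clusters[scores.index(max(scores))].append(mention)  # reduce
        else:
            clusters.append([mention])  # shift: open a new cluster
    return clusters

result = incremental_resolve(["Obama", "the president", "Obama", "he"])
# With the toy scorer, only the repeated "Obama" mention merges.
```

The sentence-incremental setting constrains this loop further: mention boundaries are committed sentence by sentence before clustering decisions are made, which bounds the memory the resolver needs at any point.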
COMET-M: Reasoning about Multiple Events in Complex Sentences
We propose COMET-M (Multi-Event), an event-centric commonsense model capable of generating commonsense inferences for a target event within a complex sentence.
Comparing Humans and Models on a Similar Scale: Towards Cognitive Gender Bias Evaluation in Coreference Resolution
We approach this question through the lens of the dual-process theory for human decision-making.
The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning
Furthermore, we show that instruction tuning with CoT Collection allows LMs to possess stronger few-shot learning capabilities on 4 domain-specific tasks, resulting in an improvement of +2.24% (Flan-T5 3B) and +2.37% (Flan-T5 11B), even outperforming ChatGPT utilizing demonstrations until the max length by a +13.98% margin.
Are Large Language Models Robust Coreference Resolvers?
Recent work on extending coreference resolution across domains and languages relies on annotated data in both the target domain and language.