Coreference Resolution

254 papers with code • 15 benchmarks • 42 datasets

Coreference resolution is the task of clustering mentions in text that refer to the same underlying real-world entities.

Example:

             +-------------+
             |             |
"I voted for Obama because he was most aligned with my values", she said.
  |                                                  |           |
  +--------------------------------------------------+-----------+

"I", "my", and "she" belong to the same cluster and "Obama" and "he" belong to the same cluster.

Most implemented papers

Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction

luanyi/DyGIE EMNLP 2018

We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles.

Stanza: A Python Natural Language Processing Toolkit for Many Human Languages

stanfordnlp/stanza ACL 2020

We introduce Stanza, an open-source Python natural language processing toolkit supporting 66 human languages.
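
A hedged usage sketch of the Stanza pipeline follows; the processor list is illustrative, and coreference support (added in more recent releases) depends on the installed version and models, so only the basic pipeline API is shown.

# Basic Stanza pipeline usage; the processor list is illustrative, and
# coreference support depends on the installed Stanza version and models.
import stanza

stanza.download("en")  # fetch English models once
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma")

doc = nlp("I voted for Obama because he was most aligned with my values.")
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos, word.lemma)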

Finetuned Language Models Are Zero-Shot Learners

google-research/flan ICLR 2022

We show that instruction tuning -- finetuning language models on a collection of tasks described via instructions -- substantially improves zero-shot performance on unseen tasks.
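
Instruction tuning is what makes it possible to pose coreference to such a model as a plain-language question at inference time. Below is a minimal sketch of a zero-shot prompt for pronoun resolution; the wording is a placeholder, not FLAN's actual instruction template.

# Illustrative zero-shot prompt for pronoun resolution; the instruction text
# is a placeholder, not FLAN's actual template.
def coref_prompt(sentence, pronoun, candidates):
    options = " or ".join(candidates)
    return (
        f"Sentence: {sentence}\n"
        f'Question: In the sentence above, what does "{pronoun}" refer to, {options}?\n'
        f"Answer:"
    )

prompt = coref_prompt(
    "The trophy doesn't fit in the suitcase because it is too big.",
    "it",
    ["the trophy", "the suitcase"],
)
print(prompt)  # the prompt is then sent to an instruction-tuned model for completion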

PaLM: Scaling Language Modeling with Pathways

lucidrains/CoCa-pytorch Google Research 2022

To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model (PaLM).

Scaling Instruction-Finetuned Language Models

google-research/flan 20 Oct 2022

We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation).

End-to-end Neural Coreference Resolution

kentonl/e2e-coref EMNLP 2017

We introduce the first end-to-end coreference resolution model and show that it significantly outperforms all previous work without using a syntactic parser or hand-engineered mention detector.
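
The model scores every candidate span as a mention and every ordered pair of spans as a coreferent link, then normalizes each span's scores over its possible antecedents plus a dummy antecedent fixed at 0 (the span starts a new cluster or is not a mention). Below is a small numpy sketch of that scoring step, with random numbers standing in for the learned scoring networks.

# Sketch of the span-ranking scoring in end-to-end neural coreference:
# s(i, j) = s_m(i) + s_m(j) + s_a(i, j), with a dummy antecedent scored 0.
# Random numbers stand in for the learned mention/antecedent scoring networks.
import numpy as np

rng = np.random.default_rng(0)
num_spans = 5
mention_score = rng.normal(size=num_spans)                  # s_m(i)
antecedent_score = rng.normal(size=(num_spans, num_spans))  # s_a(i, j)

def antecedent_distribution(i):
    """Softmax over antecedents j < i plus the dummy antecedent (index 0)."""
    scores = [0.0]  # dummy antecedent: span i starts a new cluster
    for j in range(i):
        scores.append(mention_score[i] + mention_score[j] + antecedent_score[i, j])
    scores = np.array(scores)
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

for i in range(num_spans):
    probs = antecedent_distribution(i)
    print(f"span {i}: P(dummy)={probs[0]:.2f}, best antecedent={int(probs.argmax()) - 1}")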

Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns

google-research-datasets/gap-coreference TACL 2018

Coreference resolution is an important task for natural language understanding, and the resolution of ambiguous pronouns a longstanding challenge.
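
GAP is distributed as TSV files in the repository above, with each row pairing a pronoun against two candidate names. Below is a hedged reading sketch; the file name and column names are from memory and should be checked against the repository.

# Hedged sketch of reading a GAP-style TSV; the file name and the column
# names (Text, Pronoun, A, A-coref, B, B-coref) are assumptions to verify.
import csv

with open("gap-development.tsv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        candidates = {row["A"]: row["A-coref"], row["B"]: row["B-coref"]}
        print(row["Pronoun"], candidates)
        break  # just the first example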

Gender Bias in Coreference Resolution

rudinger/winogender-schemas NAACL 2018

We present an empirical study of gender bias in coreference resolution systems.

WinoGrande: An Adversarial Winograd Schema Challenge at Scale

vered1986/self_talk 24 Jul 2019

The key steps of the dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations.
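
A simplified sketch of the AfLite-style filtering idea follows: train cheap linear probes on random splits of precomputed embeddings, score each instance by how often it is classified correctly when held out, and drop the most predictable instances. The split sizes, round counts, and cutoff below are illustrative, not the paper's settings.

# Simplified AfLite-style filtering: instances that linear probes over fixed
# embeddings predict too easily are treated as artifacts and removed.
import numpy as np
from sklearn.linear_model import LogisticRegression

def aflite_filter(X, y, n_rounds=20, train_frac=0.5, drop_frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    correct = np.zeros(len(y))
    counts = np.zeros(len(y))
    for _ in range(n_rounds):
        perm = rng.permutation(len(y))
        split = int(len(perm) * train_frac)
        train_idx, held_idx = perm[:split], perm[split:]
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        correct[held_idx] += clf.predict(X[held_idx]) == y[held_idx]
        counts[held_idx] += 1
    predictability = correct / np.maximum(counts, 1)
    n_drop = int(len(y) * drop_frac)
    most_predictable = np.argsort(-predictability)[:n_drop]
    return np.setdiff1d(np.arange(len(y)), most_predictable)

# Usage: X is an (n, d) array of precomputed embeddings, y an (n,) label array;
# kept = aflite_filter(X, y) returns the indices of retained instances.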

A Hybrid Neural Network Model for Commonsense Reasoning

namisan/mt-dnn WS 2019

An HNN consists of two component models, a masked language model and a semantic similarity model, which share a BERT-based contextual encoder but use different model-specific input and output layers.
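
Below is a hedged PyTorch sketch of that shared-encoder, two-head layout; the class name is made up, the encoder is a placeholder module rather than actual BERT weights, and the head shapes are illustrative.

# Sketch of the HNN layout: one shared contextual encoder feeding two
# task-specific heads (masked-LM and semantic similarity). The encoder below
# is a placeholder; in the paper both heads share a BERT-based encoder.
import torch
import torch.nn as nn

class HybridCommonsenseModel(nn.Module):
    def __init__(self, hidden_size=768, vocab_size=30522):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        # Placeholder shared encoder (stands in for BERT).
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden_size, nhead=12, batch_first=True),
            num_layers=2,
        )
        # Model-specific output layers.
        self.mlm_head = nn.Linear(hidden_size, vocab_size)  # masked-LM scores
        self.sim_head = nn.Linear(hidden_size, 1)           # similarity score

    def forward(self, input_ids):
        hidden = self.encoder(self.embed(input_ids))        # shared representation
        mlm_logits = self.mlm_head(hidden)                  # per-token vocab logits
        sim_score = self.sim_head(hidden[:, 0])             # pooled first-token score
        return mlm_logits, sim_score

model = HybridCommonsenseModel()
tokens = torch.randint(0, 30522, (1, 16))
mlm_logits, sim_score = model(tokens)
print(mlm_logits.shape, sim_score.shape)  # torch.Size([1, 16, 30522]) torch.Size([1, 1])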