Relation Extraction
671 papers with code • 50 benchmarks • 74 datasets
Relation Extraction is the task of predicting attributes and relations for entities in a sentence. For example, given the sentence “Barack Obama was born in Honolulu, Hawaii.”, a relation classifier aims to predict the relation “bornInCity” between the entities “Barack Obama” and “Honolulu”. Relation Extraction is a key component for building relation knowledge graphs, and it is of crucial significance to natural language processing applications such as structured search, sentiment analysis, question answering, and summarization.
Source: Deep Residual Learning for Weakly-Supervised Relation Extraction
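The input/output structure of the task can be sketched with a toy rule-based extractor: given a sentence, it emits (head, relation, tail) triples. The pattern and the relation label "bornInCity" are illustrative assumptions, not a real model or benchmark system.

```python
import re

# Purely illustrative sketch of relation extraction as a mapping from
# a sentence to (head, relation, tail) triples. Real systems use trained
# classifiers; this single hand-written pattern is an assumption for demo.
PATTERNS = [
    # "<Entity> was born in <Entity>" -> bornInCity
    (re.compile(r"(?P<head>[A-Z][\w ]+?) was born in (?P<tail>[A-Z]\w+)"),
     "bornInCity"),
]

def extract_relations(sentence: str):
    """Return (head, relation, tail) triples matched by the patterns."""
    triples = []
    for pattern, relation in PATTERNS:
        for match in pattern.finditer(sentence):
            triples.append((match.group("head"), relation,
                            match.group("tail")))
    return triples

print(extract_relations("Barack Obama was born in Honolulu, Hawaii."))
# -> [('Barack Obama', 'bornInCity', 'Honolulu')]
```

Learned extractors replace the pattern list with an encoder over the sentence and entity spans, but the output contract — typed triples over entity mentions — is the same.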
Libraries
Use these libraries to find Relation Extraction models and implementations.
Subtasks
- Relation Classification
- Document-level Relation Extraction
- Joint Entity and Relation Extraction
- Temporal Relation Extraction
- Dialog Relation Extraction
- Relationship Extraction (Distant Supervised)
- Continual Relation Extraction
- Binary Relation Extraction
- Zero-shot Relation Triplet Extraction
- 4-ary Relation Extraction
- DrugProt
- Hyper-Relational Extraction
- Relation Explanation
- Multi-Labeled Relation Extraction
- Relation Mention Extraction
Latest papers
EGTR: Extracting Graph from Transformer for Scene Graph Generation
We propose a lightweight one-stage SGG model that extracts the relation graph from the various relationships learned in the multi-head self-attention layers of the DETR decoder.
READ: Improving Relation Extraction from an ADversarial Perspective
This strategy enables a larger attack budget for entities and coaxes the model to leverage relational patterns embedded in the context.
MetaIE: Distilling a Meta Model from LLM for All Kinds of Information Extraction Tasks
We construct the distillation dataset by sampling sentences from language model pre-training datasets (e.g., OpenWebText in our implementation) and prompting an LLM to identify the typed spans of "important information".
AutoRE: Document-Level Relation Extraction with Large Language Models
Large Language Models (LLMs) have demonstrated exceptional abilities in comprehending and generating text, motivating numerous researchers to utilize them for Information Extraction (IE) purposes, including Relation Extraction (RE).
Extracting Protein-Protein Interactions (PPIs) from Biomedical Literature using Attention-based Relational Context Information
On the other hand, machine learning methods to automate PPI knowledge extraction from the scientific literature have been limited by a shortage of appropriate annotated data.
FCDS: Fusing Constituency and Dependency Syntax into Document-Level Relation Extraction
State-of-the-art DocRE methods use a graph structure to connect entities across the document to capture dependency syntax information.
CODE-ACCORD: A Corpus of Building Regulatory Data for Rule Generation towards Automatic Compliance Checking
CODE-ACCORD comprises 862 self-contained sentences extracted from the building regulations of England and Finland.
Extracting Polymer Nanocomposite Samples from Full-Length Documents
This paper investigates the use of large language models (LLMs) for extracting sample lists of polymer nanocomposites (PNCs) from full-length materials science research papers.
DistALANER: Distantly Supervised Active Learning Augmented Named Entity Recognition in the Open Source Software Ecosystem
With the AI revolution underway, the trend of building automated systems to support professionals in domains such as open source software, healthcare, banking, and transportation has become increasingly prominent.
Making Pre-trained Language Models Better Continual Few-Shot Relation Extractors
Continual Few-shot Relation Extraction (CFRE) is a practical problem that requires the model to continuously learn novel relations while avoiding forgetting old ones with few labeled training data.