Entity Typing
88 papers with code • 8 benchmarks • 12 datasets
Entity Typing is an important task in text analysis. Assigning types (e.g., person, location, organization) to mentions of entities in documents enables effective structured analysis of unstructured text corpora. The extracted type information can be used in a wide range of ways (e.g., serving as primitives for information extraction and knowledge base (KB) completion, and assisting question answering). Traditional Entity Typing systems focus on a small set of coarse types (typically fewer than 10). Recent studies work on a much larger set of fine-grained types that form a tree-structured hierarchy (e.g., actor as a subtype of artist, and artist as a subtype of person).
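The tree-structured hierarchy above means that predicting a fine-grained type implicitly commits to all of its ancestor types. A minimal sketch, using a toy hierarchy (not from any specific dataset or system):

```python
# Toy tree-structured type hierarchy: each fine-grained type maps to its
# parent; None marks a root (coarse) type. Illustrative only.
HIERARCHY = {
    "actor": "artist",
    "artist": "person",
    "person": None,
    "city": "location",
    "location": None,
}

def type_with_ancestors(fine_type):
    """Expand a fine-grained type into its full path up the hierarchy,
    e.g. 'actor' -> ['actor', 'artist', 'person']."""
    path = []
    t = fine_type
    while t is not None:
        path.append(t)
        t = HIERARCHY[t]
    return path

# A typing system only needs to predict the finest applicable type for a
# mention; the coarser types follow from the hierarchy.
mention = {"text": "Tom Hanks", "predicted_type": "actor"}
print(type_with_ancestors(mention["predicted_type"]))
# ['actor', 'artist', 'person']
```

This is why fine-grained typing is often evaluated at multiple levels of granularity: a prediction of "actor" is also correct at the "artist" and "person" levels.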
Source: Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label Embedding
Libraries
Use these libraries to find Entity Typing models and implementations.

Latest papers
REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking
Extracting structured information from unstructured text is critical for many downstream NLP applications and is traditionally achieved by closed information extraction (cIE).
The Integration of Semantic and Structural Knowledge in Knowledge Graph Entity Typing
The Knowledge Graph Entity Typing (KGET) task aims to predict missing type annotations for entities in knowledge graphs.
Seed-Guided Fine-Grained Entity Typing in Science and Engineering Domains
In this paper, we study the task of seed-guided fine-grained entity typing in science and engineering domains, which takes the name and a few seed entities for each entity type as the only supervision and aims to classify new entity mentions into both seen and unseen types (i.e., those without seed entities).
Robust Few-Shot Named Entity Recognition with Boundary Discrimination and Correlation Purification
However, existing few-shot NER models assume that the labeled data are clean, without noise or outliers, and few works focus on the robustness of cross-domain transfer learning to textual adversarial attacks in few-shot NER.
From Ultra-Fine to Fine: Fine-tuning Ultra-Fine Entity Typing Models to Fine-grained
We can simply fine-tune the previously trained model with a small number of examples annotated under this schema.
Calibrated Seq2seq Models for Efficient and Generalizable Ultra-fine Entity Typing
In this paper, we present CASENT, a seq2seq model designed for ultra-fine entity typing that predicts ultra-fine types with calibrated confidence scores.
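Calibrated confidence scores let a typing model predict a variable number of ultra-fine types per mention by thresholding probabilities. A minimal sketch of the general idea using temperature scaling, a common calibration technique (an illustration, not CASENT's actual method; the logits and threshold are made up):

```python
import math

def calibrated_prob(logit, temperature=2.0):
    # Temperature scaling: divide the logit by a learned temperature
    # before the sigmoid, so probabilities better reflect accuracy.
    return 1.0 / (1.0 + math.exp(-logit / temperature))

# Predict every type whose calibrated probability clears a threshold;
# the set of predicted types can then grow or shrink per mention.
logits = {"person": 4.2, "artist": 1.1, "location": -3.0}
predicted = [t for t, z in logits.items() if calibrated_prob(z) >= 0.5]
print(predicted)
# ['person', 'artist']
```

In practice the temperature (and any further calibration parameters) are fit on held-out data so that a score of, say, 0.8 corresponds to roughly 80% empirical accuracy.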
Dense Retrieval as Indirect Supervision for Large-space Decision Making
Many discriminative natural language understanding (NLU) tasks have large label spaces.
GeoLM: Empowering Language Models for Geospatially Grounded Language Understanding
This paper introduces GeoLM, a geospatially grounded language model that enhances the understanding of geo-entities in natural language.
Learning to Correct Noisy Labels for Fine-Grained Entity Typing via Co-Prediction Prompt Tuning
Fine-grained entity typing (FET) is an essential task in natural language processing that aims to assign semantic types to entities in text.
Do Language Models Learn about Legal Entity Types during Pretraining?
Language Models (LMs) have proven their ability to acquire diverse linguistic knowledge during the pretraining phase, potentially serving as a valuable source of incidental supervision for downstream tasks.