Lexical Simplification
19 papers with code • 0 benchmarks • 1 dataset
The goal of Lexical Simplification is to replace complex words (typically words that occur less often in language and are therefore less familiar to readers) with simpler synonyms, without compromising the grammaticality or changing the meaning of the text.
Source: Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization
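The definition above can be illustrated with a minimal frequency-based sketch: words rarer than a threshold count as "complex" and are replaced by their most frequent known synonym. The thesaurus, frequency table, and threshold here are hypothetical toy values; real systems derive frequencies from large corpora and use context-aware models to pick substitutes.

```python
# Toy lexical simplification sketch (illustrative only): replace a complex
# word with the most frequent synonym from a hand-made thesaurus.

# Hypothetical word-frequency table (higher = more common / more familiar).
FREQ = {"use": 900, "help": 850, "big": 800, "utilize": 40, "facilitate": 30}

# Hypothetical synonym sets for a few complex words.
SYNONYMS = {"utilize": ["use"], "facilitate": ["help", "ease"]}

COMPLEXITY_THRESHOLD = 100  # words rarer than this count as "complex"

def simplify(sentence: str) -> str:
    """Replace each complex word with its most frequent known synonym."""
    out = []
    for word in sentence.split():
        if FREQ.get(word, 0) < COMPLEXITY_THRESHOLD and word in SYNONYMS:
            # Pick the candidate the reader is most likely to know,
            # i.e. the synonym with the highest frequency.
            out.append(max(SYNONYMS[word], key=lambda w: FREQ.get(w, 0)))
        else:
            out.append(word)
    return " ".join(out)

print(simplify("we utilize tools to facilitate work"))
# → "we use tools to help work"
```

Note that this sketch ignores the two constraints the definition imposes (preserving grammaticality and meaning); approaches such as LS with pretrained encoders address these by generating and ranking substitutes in context.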
Benchmarks
These leaderboards are used to track progress in Lexical Simplification.
Latest papers
Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity
In this work, we complement such distributional knowledge with external lexical knowledge, that is, we integrate the discrete knowledge on word-level semantic similarity into pretraining.
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging.
Lexical Simplification with Pretrained Encoders
Lexical simplification (LS) aims to replace complex words in a given sentence with their simpler alternatives of equivalent meaning.
A Word-Complexity Lexicon and A Neural Readability Ranking Model for Lexical Simplification
Current lexical simplification approaches rely heavily on heuristics and corpus level features that do not always align with human judgment.
Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization
Our adversarial post-specialization method propagates the external lexical knowledge to the full distributional space.
Exploring Neural Text Simplification Models
Unlike the previously proposed automated TS systems, our neural text simplification (NTS) systems are able to simultaneously perform lexical simplification and content reduction.