Linguistic Acceptability
47 papers with code • 5 benchmarks • 5 datasets
Linguistic Acceptability is the task of determining whether a sentence is grammatical or ungrammatical.
Image Source: Warstadt et al.
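The standard benchmark for this task, the Corpus of Linguistic Acceptability (CoLA) from Warstadt et al., frames it as binary sentence classification evaluated with the Matthews correlation coefficient (MCC). A minimal sketch of the data format and metric, using invented sentences and a hypothetical model's predictions rather than actual corpus entries:

```python
import math

# Illustrative CoLA-style pairs (sentence, label), 1 = acceptable.
# The sentences below are invented for demonstration only.
examples = [
    ("The cat sat on the mat.", 1),
    ("Cat the mat on sat the.", 0),
    ("She gave the book to him.", 1),
    ("She gave to the him book.", 0),
]

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient, the evaluation metric used for CoLA."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # MCC is defined as 0 when any marginal count is zero.
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Predictions from a hypothetical classifier for the four sentences above.
preds = [1, 0, 1, 1]
truth = [label for _, label in examples]
print(round(matthews_corrcoef(truth, preds), 3))  # → 0.577
```

MCC is preferred over plain accuracy here because acceptability corpora are label-imbalanced, and MCC stays at 0 for a classifier that always predicts the majority class.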
Libraries
Use these libraries to find Linguistic Acceptability models and implementations.

Latest papers with no code
MELA: Multilingual Evaluation of Linguistic Acceptability
Recent benchmarks for Large Language Models (LLMs) have mostly focused on application-driven tasks such as complex reasoning and code generation, leaving a scarcity of purely linguistic evaluation of LLMs.
Data-Free Distillation of Language Model by Text-to-Text Transfer
Data-Free Knowledge Distillation (DFKD) plays a vital role in compressing the model when original training data is unavailable.
Not all layers are equally as important: Every Layer Counts BERT
This paper introduces a novel modification of the transformer architecture, tailored for the data-efficient pretraining of language models.
How well can machine-generated texts be identified and can language models be trained to avoid identification?
Shallow learning classifiers differ from human-based detection, especially when using higher temperature values during text generation, resulting in a lower detection rate.
Defense of Adversarial Ranking Attack in Text Retrieval: Benchmark and Baseline via Detection
Neural ranking models (NRMs) have undergone significant development and have become integral components of information retrieval (IR) systems.
A Neural-Symbolic Approach Towards Identifying Grammatically Correct Sentences
By combining classic and modern AI, blending grammatical and syntactic rules with language models, we effectively tackle the Corpus of Linguistic Acceptability (CoLA), a task that determines whether a sequence of words is a grammatical English sentence.
Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE
This technical report briefly describes our JDExplore d-team's submission Vega v1 on the General Language Understanding Evaluation (GLUE) leaderboard, where GLUE is a collection of nine natural language understanding tasks, including question answering, linguistic acceptability, sentiment analysis, text similarity, paraphrase detection, and natural language inference.
Cross-Architecture Distillation Using Bidirectional CMOW Embeddings
We match or exceed the scores of ELMo, and only fall behind more expensive models on linguistic acceptability.
Revisiting the Uniform Information Density Hypothesis
The uniform information density (UID) hypothesis posits a preference among language users for utterances structured such that information is distributed uniformly across a signal.
An Automated Knowledge Mining and Document Classification System with Multi-model Transfer Learning
The performance of the proposed system has been evaluated by comparing with two robust baseline methods, BERT and BERT-CNN.