Linguistic Acceptability
47 papers with code • 5 benchmarks • 5 datasets
Linguistic Acceptability is the task of determining whether a sentence is grammatical or ungrammatical.
Image Source: Warstadt et al.
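Since the task reduces to binary sentence classification, acceptability benchmarks such as CoLA are typically evaluated with the Matthews correlation coefficient (MCC), which is robust to class imbalance. A minimal sketch of the metric in plain Python (the gold labels and predictions below are invented for illustration):

```python
from math import sqrt

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (1 = acceptable)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Degenerate confusion matrices (a zero row or column) get MCC = 0.
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Invented gold labels and model predictions for four sentences.
gold = [1, 1, 0, 0]
pred = [1, 0, 0, 0]
print(round(matthews_corrcoef(gold, pred), 3))
```

A perfect classifier scores 1.0, random guessing tends toward 0.0, which is why MCC is preferred over accuracy on the heavily acceptable-skewed CoLA data.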
Latest papers with no code
Using Integrated Gradients and Constituency Parse Trees to explain Linguistic Acceptability learnt by BERT
DaLAJ - a dataset for linguistic acceptability judgments for Swedish: Format, baseline, sharing
We present DaLAJ 1.0, a Dataset for Linguistic Acceptability Judgments for Swedish, comprising 9,596 sentences in its first version, and the initial experiment using it for the binary classification task.
CLEAR: Contrastive Learning for Sentence Representation
Pre-trained language models have proven their unique powers in capturing implicit language features.
What Would Elsa Do? Freezing Layers During Transformer Fine-Tuning
We show that only a fourth of the final layers need to be fine-tuned to achieve 90% of the original quality.
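The finding above suggests a simple recipe: leave the bottom layers frozen and fine-tune only the top fourth. A hedged pure-Python sketch of choosing which layer indices stay trainable (the helper name and the default fraction are illustrative, not the paper's code):

```python
from math import ceil

def trainable_layer_indices(n_layers, fraction=0.25):
    """Indices of the top `fraction` of layers to fine-tune; the rest stay frozen."""
    k = max(1, ceil(n_layers * fraction))
    return list(range(n_layers - k, n_layers))

# For a 12-layer encoder, only the top 3 layers would be updated.
print(trainable_layer_indices(12))
# In a real framework one would then disable gradients (e.g. set
# requires_grad = False in PyTorch) on every layer not in this list.
```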
Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT
In particular, we propose a new group-wise quantization scheme, and we use a Hessian-based mixed-precision method to compress the model further.
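Group-wise quantization splits each weight tensor into small groups and gives every group its own quantization scale, so an outlier in one group does not degrade the precision of the others. A minimal int8-style sketch in plain Python (the group size, symmetric scaling, and rounding here are illustrative choices, not Q-BERT's exact scheme):

```python
def quantize_groupwise(weights, group_size, n_bits=8):
    """Symmetric per-group quantization to signed `n_bits` integers."""
    qmax = 2 ** (n_bits - 1) - 1  # 127 for int8
    q, scales = [], []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        # Each group gets its own scale from its largest magnitude.
        scale = max(abs(w) for w in group) / qmax or 1.0
        scales.append(scale)
        q.extend(round(w / scale) for w in group)
    return q, scales

def dequantize_groupwise(q, scales, group_size):
    """Map quantized integers back to floats using each group's scale."""
    return [v * scales[i // group_size] for i, v in enumerate(q)]

w = [0.5, -1.0, 0.25, 2.0]          # toy weights, two groups of two
q, s = quantize_groupwise(w, group_size=2)
w_hat = dequantize_groupwise(q, s, group_size=2)
```

Smaller groups give finer-grained scales (lower quantization error) at the cost of storing more scale factors.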
StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding
Recently, the pre-trained language model BERT (and its robustly optimized version, RoBERTa) has attracted a lot of attention in natural language understanding (NLU) and achieved state-of-the-art accuracy in various NLU tasks, such as sentiment classification, natural language inference, semantic textual similarity, and question answering.
Linguistic Analysis of Pretrained Sentence Encoders with Acceptability Judgments
We use this analysis set to investigate the grammatical knowledge of three pretrained encoders: BERT (Devlin et al., 2018), GPT (Radford et al., 2018), and the BiLSTM baseline from Warstadt et al. We find that these models have a strong command of complex or non-canonical argument structures like ditransitives (Sue gave Dan a book) and passives (The book was read).
Grammatical Analysis of Pretrained Sentence Encoders with Acceptability Judgments
Recent pretrained sentence encoders achieve state-of-the-art results on language understanding tasks, but does this mean they have implicit knowledge of syntactic structures?
Rating Distributions and Bayesian Inference: Enhancing Cognitive Models of Spatial Language Use
For these models, we propose an extension that simulates a full rating distribution (instead of average ratings) and allows generating individual ratings.
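One simple way to turn a single graded score into a full rating distribution is to treat each point on the scale as the outcome of a parametric distribution whose mean matches the score. The sketch below uses a binomial over a 7-point scale purely as an illustration; it is not the extension the paper proposes:

```python
import random
from math import comb

def rating_distribution(mean_rating, scale_max=7):
    """Probability of each rating 1..scale_max under a binomial with mean `mean_rating`."""
    n = scale_max - 1
    p = (mean_rating - 1) / n  # success probability matching the target mean
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def sample_ratings(mean_rating, size, scale_max=7, seed=0):
    """Generate individual ratings instead of a single average."""
    probs = rating_distribution(mean_rating, scale_max)
    rng = random.Random(seed)
    return rng.choices(range(1, scale_max + 1), weights=probs, k=size)

dist = rating_distribution(4.5)          # distribution over ratings 1..7
ratings = sample_ratings(4.5, size=10)   # ten simulated individual ratings
```

By construction the expected rating equals the input mean, so averaging many simulated ratings recovers the original score while the distribution exposes rater variability.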