Linguistic Acceptability

47 papers with code • 5 benchmarks • 5 datasets

Linguistic Acceptability is the task of determining whether a sentence is grammatical or ungrammatical.

Image Source: Warstadt et al.
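
In practice the task is usually cast as binary sentence classification. The sketch below is a rough illustration (not tied to any particular paper listed here): it assumes the Hugging Face `transformers` library, and the checkpoint name is a hypothetical placeholder for any model fine-tuned on an acceptability corpus such as CoLA.

```python
# Minimal sketch: linguistic acceptability as binary sentence classification.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "path/to/acceptability-finetuned-model"  # hypothetical checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
model.eval()

sentences = [
    "The book was read by Sue.",   # grammatical
    "The was book read Sue by.",   # ungrammatical
]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# CoLA convention: label 1 = acceptable, label 0 = unacceptable
print(logits.argmax(dim=-1).tolist())
```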

Latest papers with no code

Using Integrated Gradients and Constituency Parse Trees to explain Linguistic Acceptability learnt by BERT

no code yet • ICON 2021

Linguistic Acceptability is the task of determining whether a sentence is grammatical or ungrammatical.

DaLAJ - a dataset for linguistic acceptability judgments for Swedish: Format, baseline, sharing

no code yet • 14 May 2021

We present DaLAJ 1.0, a Dataset for Linguistic Acceptability Judgments for Swedish, comprising 9,596 sentences in its first version, together with an initial experiment using it for the binary classification task.

CLEAR: Contrastive Learning for Sentence Representation

no code yet • 31 Dec 2020

Pre-trained language models have proven their unique powers in capturing implicit language features.

What Would Elsa Do? Freezing Layers During Transformer Fine-Tuning

no code yet • 8 Nov 2019

We show that only a fourth of the final layers need to be fine-tuned to achieve 90% of the original quality.
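As a minimal sketch of the layer-freezing idea (an illustration under assumptions, not the paper's exact recipe), one can freeze the embeddings and the lower encoder layers of a BERT classifier so that only the top layers are updated during fine-tuning. The example assumes the Hugging Face `transformers` library; the layer counts are illustrative.

```python
# Sketch: freeze the lower layers of a BERT classifier before fine-tuning
# on an acceptability task such as CoLA.
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # acceptable vs. unacceptable
)

# Freeze the embeddings and the first 9 of 12 encoder layers,
# leaving roughly the top quarter of the network trainable.
for param in model.bert.embeddings.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[:9]:
    for param in layer.parameters():
        param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```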

Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT

no code yet • 12 Sep 2019

In particular, we propose a new group-wise quantization scheme and use a Hessian-based mixed-precision method to compress the model further.
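As a rough illustration of the group-wise idea (a simplified sketch, not the Q-BERT implementation, and without the Hessian-based bit assignment), each group of rows in a weight matrix can be quantized with its own scale:

```python
# Sketch: group-wise uniform (fake) quantization of a weight matrix.
import torch

def groupwise_quantize(weight: torch.Tensor, num_bits: int = 4, num_groups: int = 12):
    out_features, in_features = weight.shape
    qmax = 2 ** (num_bits - 1) - 1
    groups = weight.view(num_groups, -1, in_features)            # split rows into groups
    scales = groups.abs().amax(dim=(1, 2), keepdim=True) / qmax  # per-group scale
    quantized = torch.clamp((groups / scales).round(), -qmax - 1, qmax)
    return (quantized * scales).view_as(weight)                  # dequantized weights

weight = torch.randn(768, 768)  # e.g., one attention projection matrix in BERT-base
error = (weight - groupwise_quantize(weight)).abs().mean()
print(f"Mean quantization error: {error:.4f}")
```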

StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding

no code yet • ICLR 2020

Recently, the pre-trained language model, BERT (and its robustly optimized version RoBERTa), has attracted a lot of attention in natural language understanding (NLU), and achieved state-of-the-art accuracy in various NLU tasks, such as sentiment classification, natural language inference, semantic textual similarity and question answering.

Linguistic Analysis of Pretrained Sentence Encoders with Acceptability Judgments

no code yet • 11 Jan 2019

We use this analysis set to investigate the grammatical knowledge of three pretrained encoders: BERT (Devlin et al., 2018), GPT (Radford et al., 2018), and the BiLSTM baseline from Warstadt et al. (2018). We find that these models have a strong command of complex or non-canonical argument structures like ditransitives (Sue gave Dan a book) and passives (The book was read).

Grammatical Analysis of Pretrained Sentence Encoders with Acceptability Judgments

no code yet • 11 Dec 2018

Recent pretrained sentence encoders achieve state-of-the-art results on language understanding tasks, but does this mean they have implicit knowledge of syntactic structures?

Rating Distributions and Bayesian Inference: Enhancing Cognitive Models of Spatial Language Use

no code yet • WS 2018

For these models, we propose an extension that simulates a full rating distribution (instead of average ratings) and allows generating individual ratings.