Linguistic Acceptability

47 papers with code • 5 benchmarks • 5 datasets

Linguistic Acceptability is the task of determining whether a sentence is grammatical or ungrammatical.

Image source: Warstadt et al.
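
As a concrete framing, acceptability is typically cast as binary classification over single sentences, as in the CoLA benchmark. The sketch below loads CoLA through the Hugging Face `datasets` library (the tooling choice is an assumption, not something this page prescribes) and prints one acceptable and one unacceptable training sentence.

```python
# A minimal sketch: linguistic acceptability as binary sentence
# classification. CoLA labels sentences 1 (acceptable) or 0 (unacceptable).
from datasets import load_dataset

cola = load_dataset("glue", "cola", split="train")

for label, tag in ((1, "acceptable"), (0, "unacceptable")):
    example = next(ex for ex in cola if ex["label"] == label)
    print(f"{tag}: {example['sentence']}")
```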

How to Train BERT with an Academic Budget

peteriz/academic-budget-bert EMNLP 2021

While large language models à la BERT are used ubiquitously in NLP, pretraining them is considered a luxury that only a few well-funded industry labs can afford.

300 stars • 15 Apr 2021

A Statistical Framework for Low-bitwidth Training of Deep Neural Networks

cjf00000/StatQuant NeurIPS 2020

We show that the FQT (fully quantized training) gradient is an unbiased estimator of the QAT (quantization-aware training) gradient, and we discuss the impact of gradient quantization on its variance.

25 stars • 27 Oct 2020
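
The unbiasedness result hinges on the quantizer itself being unbiased: with stochastic rounding, the quantized value equals the input in expectation, so gradients computed from quantized tensors stay unbiased. Below is a generic NumPy illustration of that property; it is not code from the StatQuant repository.

```python
# Stochastic rounding: round up with probability equal to the fractional
# part, down otherwise, so E[quantize(x)] == x. Unbiased quantization is
# what lets the fully quantized (FQT) gradient match the QAT gradient
# in expectation.
import numpy as np

def stochastic_round(x, step=1.0 / 127):   # step mimics an 8-bit grid
    scaled = x / step
    floor = np.floor(scaled)
    round_up = np.random.rand(*x.shape) < (scaled - floor)
    return (floor + round_up) * step

grads = np.random.randn(100_000) * 0.01    # stand-in gradient samples
print(grads.mean(), stochastic_round(grads).mean())  # means agree closely
```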

Domain Adversarial Fine-Tuning as an Effective Regularizer

GeorgeVern/AFTERV1.0 Findings (EMNLP) 2020

To address this issue, we introduce a new regularization technique, AFTER: domain Adversarial Fine-Tuning as an Effective Regularizer.

8 stars • 28 Sep 2020
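
AFTER builds on domain-adversarial training, whose usual core is a gradient reversal layer: identity in the forward pass, negated gradient in the backward pass, so the encoder is pushed toward features a domain classifier cannot separate. A minimal PyTorch sketch of that standard building block follows; the paper's exact objective may differ.

```python
# Gradient reversal: forward is identity; backward multiplies the gradient
# by -lambda, turning the domain classifier's loss into an adversarial
# regularizer for the encoder.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# The task head sees the features directly; the domain discriminator sees
# the reversed features.
features = torch.randn(8, 768, requires_grad=True)
domain_logits = torch.nn.Linear(768, 2)(grad_reverse(features))
```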

GeDi: Generative Discriminator Guided Sequence Generation

salesforce/GeDi Findings (EMNLP) 2021

While large-scale language models (LMs) are able to imitate the distribution of natural language well enough to generate realistic text, it is difficult to control which regions of the distribution they generate.

208 stars • 14 Sep 2020
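
At each decoding step, GeDi runs a class-conditional LM under a desired and an undesired control code, turns the two next-token likelihoods into a class posterior via Bayes' rule, and uses that posterior to reweight the base LM's distribution. The toy PyTorch step below captures that core idea only; the actual method also length-normalizes the posterior and applies extra filtering heuristics.

```python
# One guided-decoding step in the spirit of GeDi: reweight base-LM logits
# by the Bayes posterior that the next token belongs to the desired class.
import torch
import torch.nn.functional as F

def gedi_step(base_logits, desired_logits, undesired_logits, omega=30.0):
    # per-class next-token log-likelihoods from the class-conditional LM
    log_lik = F.log_softmax(
        torch.stack([desired_logits, undesired_logits]), dim=-1)
    # Bayes' rule with a uniform prior over the two control codes
    log_post = log_lik[0] - torch.logsumexp(log_lik, dim=0)
    # sharpen by omega, add to the base LM, renormalize
    return F.log_softmax(base_logits + omega * log_post, dim=-1)

vocab = 50_000
next_log_probs = gedi_step(torch.randn(vocab), torch.randn(vocab),
                           torch.randn(vocab))
```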

Big Bird: Transformers for Longer Sequences

huggingface/transformers NeurIPS 2020

Full attention in Transformers scales quadratically with sequence length; to remedy this, we propose BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear.

125,478 stars • 28 Jul 2020
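
The sparsity pattern combines three ingredients: a sliding local window, a few global tokens that attend everywhere (and are attended to by everyone), and a few random links per query, so each row touches only a small, roughly constant number of positions. A small NumPy sketch of such a mask, simplified from BigBird's blocked implementation:

```python
# Build a BigBird-style sparse attention mask: window + global + random.
import numpy as np

def bigbird_mask(seq_len, window=3, n_global=2, n_random=2, seed=0):
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True                            # sliding window
        mask[i, rng.choice(seq_len, n_random)] = True    # random links
    mask[:, :n_global] = True   # every token attends to the global tokens
    mask[:n_global, :] = True   # global tokens attend to every token
    return mask

print(bigbird_mask(16).sum(axis=1))  # attended positions per row stay small
```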

Towards Debiasing Sentence Representations

pliang279/sent_debias ACL 2020

As natural language processing methods are increasingly deployed in real-world scenarios such as healthcare, legal systems, and social science, it becomes necessary to recognize the role they potentially play in shaping social biases and stereotypes.

55 stars • 16 Jul 2020

DeBERTa: Decoding-enhanced BERT with Disentangled Attention

huggingface/transformers ICLR 2021

Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.

125,478 stars • 05 Jun 2020
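
The "disentangled attention" named in the title scores each query-key pair as a sum of content-to-content, content-to-position, and position-to-content terms, with position entering only through relative-distance embeddings. A simplified single-head sketch follows; it keeps the three-term sum and the 1/sqrt(3d) scaling but omits DeBERTa's distance bucketing and other details.

```python
# Simplified disentangled attention scores: c2c + c2p + p2c, scaled by
# sqrt(3d). Qr/Kr are projections of relative-position embeddings.
import math
import torch

def disentangled_scores(Qc, Kc, Qr, Kr, max_dist):
    n, d = Qc.shape
    pos = torch.arange(n)
    rel = (pos[None, :] - pos[:, None]).clamp(-max_dist + 1, max_dist - 1)
    idx = rel + max_dist - 1                    # (n, n) embedding indices
    c2c = Qc @ Kc.T                             # content-to-content
    c2p = torch.gather(Qc @ Kr.T, 1, idx)       # content-to-position
    p2c = torch.gather(Kc @ Qr.T, 1, idx).T     # position-to-content
    return (c2c + c2p + p2c) / math.sqrt(3 * d)

n, d, k = 8, 16, 4
scores = disentangled_scores(torch.randn(n, d), torch.randn(n, d),
                             torch.randn(2 * k - 1, d),
                             torch.randn(2 * k - 1, d), max_dist=k)
```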

On the Robustness of Language Encoders against Grammatical Errors

uclanlp/ProbeGrammarRobustness ACL 2020

We conduct a thorough study to diagnose the behaviors of pre-trained language encoders (ELMo, BERT, and RoBERTa) when confronted with natural grammatical errors.

10 stars • 12 May 2020
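
In the same spirit, a quick way to probe such robustness is to apply a synthetic grammatical perturbation and count how often an acceptability classifier's prediction flips. The sketch below uses a community CoLA fine-tune from the Hugging Face hub and a crude article-dropping perturbation; both the model name and the perturbation are illustrative stand-ins, not the paper's setup.

```python
# Toy robustness probe: does dropping articles flip the acceptability
# prediction? Model name and perturbation are illustrative assumptions.
import re
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="textattack/bert-base-uncased-CoLA")

def drop_articles(sentence):
    return re.sub(r"\b(a|an|the)\s+", "", sentence, flags=re.IGNORECASE)

sentences = ["The cat sat on the mat.", "A storm destroyed the old bridge."]
flips = sum(classifier(s)[0]["label"]
            != classifier(drop_articles(s))[0]["label"]
            for s in sentences)
print(f"{flips}/{len(sentences)} predictions flipped")
```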