Linguistic Acceptability
47 papers with code • 5 benchmarks • 5 datasets
Linguistic Acceptability is the task of determining whether a sentence is grammatical or ungrammatical.
Image source: Warstadt et al.
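As a quick illustration of the task, here is a minimal sketch that scores sentences for acceptability with a CoLA-finetuned classifier via the Hugging Face transformers pipeline. The checkpoint name is one publicly available example, not an endorsement, and the label names returned depend on the checkpoint.

```python
# Minimal sketch: classify sentences as acceptable/unacceptable using a
# model fine-tuned on CoLA. The checkpoint below is one public example;
# substitute any CoLA-finetuned model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="textattack/bert-base-uncased-CoLA",  # assumed example checkpoint
)

sentences = [
    "The cat sat on the mat.",   # acceptable
    "The cat sat mat on the.",   # unacceptable
]
for sentence in sentences:
    result = classifier(sentence)[0]
    # Label names (e.g. LABEL_0 / LABEL_1) vary by checkpoint.
    print(f"{sentence!r} -> {result['label']} ({result['score']:.2f})")
```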
Latest papers
How to Train BERT with an Academic Budget
While large language models à la BERT are used ubiquitously in NLP, pretraining them is considered a luxury that only a few well-funded industry labs can afford.
RealFormer: Transformer Likes Residual Attention
The Transformer is the backbone of modern NLP models.
A Statistical Framework for Low-bitwidth Training of Deep Neural Networks
We show that the fully quantized training (FQT) gradient is an unbiased estimator of the quantization-aware training (QAT) gradient, and we discuss the impact of gradient quantization on its variance.
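Unbiasedness of quantized gradients typically rests on stochastic rounding: rounding up or down with probability proportional to the fractional remainder makes the quantized value equal the input in expectation. Below is a toy NumPy sketch of such a quantizer; the function and parameter names are illustrative, not the paper's implementation.

```python
import numpy as np

def stochastic_round_quantize(x, num_bits=4, rng=None):
    """Uniform quantizer with stochastic rounding.

    Rounding down with probability (1 - frac) and up with probability frac
    makes the quantizer unbiased: E[Q(x)] == x for values inside the range.
    """
    if rng is None:
        rng = np.random.default_rng()
    levels = 2 ** num_bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / levels
    scaled = (x - lo) / scale                       # map onto [0, levels]
    floor = np.floor(scaled)
    frac = scaled - floor
    rounded = floor + (rng.random(x.shape) < frac)  # stochastic rounding
    return rounded * scale + lo

# Averaging many independent quantizations recovers x, showing E[Q(x)] == x.
x = np.random.default_rng(1).normal(size=100_000)
q = np.stack([stochastic_round_quantize(x, rng=np.random.default_rng(i))
              for i in range(64)])
print("mean |E[Q(x)] - x|:", np.abs(q.mean(axis=0) - x).mean())
```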
Domain Adversarial Fine-Tuning as an Effective Regularizer
We introduce a new regularization technique, AFTER: domain Adversarial Fine-Tuning as an Effective Regularizer.
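Domain-adversarial objectives of this kind are commonly implemented with a gradient reversal layer in the style of DANN. The PyTorch sketch below shows that general pattern under assumed shapes and names; it is not the paper's code.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips (and scales) gradients on the
    backward pass, so the encoder learns to confuse the domain classifier."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch: features flow normally into the task head, and through the
# reversal layer into an auxiliary domain classifier (names are illustrative).
features = torch.randn(8, 128, requires_grad=True)
domain_head = torch.nn.Linear(128, 2)
domain_logits = domain_head(grad_reverse(features, lambd=0.5))
domain_loss = torch.nn.functional.cross_entropy(
    domain_logits, torch.randint(0, 2, (8,)))
domain_loss.backward()  # gradients flowing into `features` are reversed
```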
GeDi: Generative Discriminator Guided Sequence Generation
While large-scale language models (LMs) are able to imitate the distribution of natural language well enough to generate realistic text, it is difficult to control which regions of the distribution they generate.
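GeDi's weighted decoding contrasts two class-conditional language models as a Bayes-rule discriminator over candidate next tokens, then uses that signal to reweight the base LM's distribution. The NumPy sketch below shows the reweighting idea on toy distributions; the helper name and the weight omega are illustrative simplifications, not the paper's implementation.

```python
import numpy as np

def gedi_reweight(base_logprobs, pos_logprobs, neg_logprobs, omega=2.0):
    """Steer a base LM's next-token distribution with a class signal.

    pos_logprobs / neg_logprobs are next-token log-probs from a
    class-conditional LM prompted with the desired / undesired attribute;
    their contrast acts as a Bayes-rule discriminator P(class | token),
    assuming equal class priors.
    """
    class_logodds = pos_logprobs - np.logaddexp(pos_logprobs, neg_logprobs)
    scores = base_logprobs + omega * class_logodds
    return scores - np.logaddexp.reduce(scores)  # renormalize in log space

# Toy next-token distributions over a 5-word vocabulary.
rng = np.random.default_rng(0)
base = np.log(rng.dirichlet(np.ones(5)))
pos = np.log(rng.dirichlet(np.ones(5)))
neg = np.log(rng.dirichlet(np.ones(5)))
print(np.exp(gedi_reweight(base, pos, neg)))  # steered distribution
```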
Big Bird: Transformers for Longer Sequences
Full attention's memory cost grows quadratically with sequence length; to remedy this, we propose BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear.
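The pattern behind this kind of sparse attention is that each query attends to a local window, a handful of global tokens, and a few random tokens, keeping the attended set per query constant. Below is a toy NumPy sketch of such an attention mask; the sizes are illustrative and not BigBird's actual configuration.

```python
import numpy as np

def sparse_attention_mask(seq_len, window=3, num_global=2, num_random=2,
                          seed=0):
    """Boolean mask in the spirit of BigBird-style sparse attention.

    Each query attends to O(1) positions, so total attention cost is
    linear in seq_len rather than quadratic.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True                             # sliding window
        mask[i, rng.choice(seq_len, num_random)] = True   # random links
    mask[:, :num_global] = True   # every token attends to the global tokens
    mask[:num_global, :] = True   # global tokens attend to every token
    return mask

m = sparse_attention_mask(16)
print(m.sum(), "of", m.size, "entries attended")  # sparse vs. full 16x16
```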
Towards Debiasing Sentence Representations
As natural language processing methods are increasingly deployed in real-world scenarios such as healthcare, legal systems, and social science, it becomes necessary to recognize the role they potentially play in shaping social biases and stereotypes.
SqueezeBERT: What can computer vision teach NLP about efficient neural networks?
Humans read and write hundreds of billions of messages every day.
DeBERTa: Decoding-enhanced BERT with Disentangled Attention
Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.
On the Robustness of Language Encoders against Grammatical Errors
We conduct a thorough study to diagnose the behaviors of pre-trained language encoders (ELMo, BERT, and RoBERTa) when confronted with natural grammatical errors.