Language Modelling

339 papers with code · Natural Language Processing

Language modeling is the task of predicting the next word or character in a document.
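As a toy illustration of next-word prediction, here is a minimal bigram count model in Python (the corpus and function names are purely illustrative, not from any listed paper):

```python
from collections import Counter, defaultdict

# Count how often each word follows each preceding word.
corpus = "the cat sat on the mat the cat ate".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Estimate P(next word | word) from bigram counts."""
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(predict_next("the"))  # {'cat': 0.666..., 'mat': 0.333...}
```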

* indicates models using dynamic evaluation, where, at test time, a model may adapt to tokens it has already seen in order to improve performance on subsequent tokens (Mikolov et al., 2010; Krause et al., 2017).
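A minimal sketch of this idea in PyTorch, assuming a `model` that maps a 1-D tensor of token ids to per-token logits (the function and the plain-SGD update here are illustrative, not the exact update rule of Krause et al.):

```python
import torch

def dynamic_eval(model, segments, lr=1e-4):
    """Score test segments, adapting the model to each one after scoring."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    total_loss, total_tokens = 0.0, 0
    for seg in segments:                       # seg: 1-D LongTensor of token ids
        inputs, targets = seg[:-1], seg[1:]
        logits = model(inputs)                 # (len, vocab)
        loss = loss_fn(logits, targets)        # score the segment first...
        total_loss += loss.item() * len(targets)
        total_tokens += len(targets)
        optimizer.zero_grad()
        loss.backward()                        # ...then adapt to what was seen
        optimizer.step()
    return total_loss / total_tokens           # adapted loss in nats per token
```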

State-of-the-art leaderboards

Greatest papers with code

Exploring the Limits of Language Modeling

7 Feb 2016 tensorflow/models

In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding.

LANGUAGE MODELLING

Semi-supervised Sequence Learning

NeurIPS 2015 tensorflow/models

In our experiments, we find that long short-term memory recurrent networks, after being pretrained with the two approaches, are more stable and generalize better.

LANGUAGE MODELLING TEXT CLASSIFICATION

One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling

11 Dec 2013 tensorflow/models

We propose a new benchmark corpus to be used for measuring progress in statistical language modeling.

LANGUAGE MODELLING

XLNet: Generalized Autoregressive Pretraining for Language Understanding

19 Jun 2019 huggingface/pytorch-transformers

With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling.

DOCUMENT RANKING LANGUAGE MODELLING NATURAL LANGUAGE INFERENCE QUESTION ANSWERING READING COMPREHENSION SEMANTIC TEXTUAL SIMILARITY SENTIMENT ANALYSIS TEXT CLASSIFICATION
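The contrast the abstract draws, autoregressive prediction from left context versus denoising a corrupted input using bidirectional context, can be sketched with toy training pairs (the token ids and `MASK` id below are made up):

```python
import random

tokens = [5, 12, 7, 9, 3]   # toy token-id sequence
MASK = 0                    # hypothetical [MASK] token id

# Autoregressive objective (GPT-style): predict each token from its left context.
ar_pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

# Denoising objective (BERT-style): corrupt a position, then predict the
# original token from the full bidirectional context.
corrupted = list(tokens)
pos = random.randrange(len(corrupted))
original = corrupted[pos]
corrupted[pos] = MASK

print(ar_pairs)                    # [([5], 12), ([5, 12], 7), ...]
print(corrupted, pos, original)    # masked input, position, target
```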

Language Models are Unsupervised Multitask Learners

Preprint 2019 huggingface/pytorch-transformers

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.

SOTA for Language Modelling on Text8 (using extra training data)

COMMON SENSE REASONING DOCUMENT SUMMARIZATION LANGUAGE MODELLING MACHINE TRANSLATION QUESTION ANSWERING READING COMPREHENSION

Cross-lingual Language Model Pretraining

22 Jan 2019 huggingface/pytorch-transformers

On unsupervised machine translation, we obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU.

LANGUAGE MODELLING UNSUPERVISED MACHINE TRANSLATION

Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context

ICLR 2019 huggingface/pytorch-transformers

Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the setting of language modeling.

LANGUAGE MODELLING
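A hedged sketch of the segment-level recurrence that addresses this limitation: hidden states from the previous segment are cached without gradients and reused as extra keys and values, so the effective context grows beyond a single segment (the single attention layer and dimensions here are illustrative stand-ins):

```python
import torch
import torch.nn as nn

d_model, n_heads = 64, 4
attn = nn.MultiheadAttention(d_model, n_heads)   # stands in for one LM layer

def forward_with_memory(segments):
    memory, outputs = None, []
    for seg in segments:                         # seg: (seg_len, 1, d_model)
        kv = seg if memory is None else torch.cat([memory, seg], dim=0)
        out, _ = attn(seg, kv, kv)               # queries: current segment;
                                                 # keys/values: memory + segment
        memory = seg.detach()                    # cache states, no backprop into past
        outputs.append(out)
    return outputs

segs = [torch.randn(8, 1, d_model) for _ in range(3)]  # three 8-token segments
outs = forward_with_memory(segs)
```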

Universal Transformers

ICLR 2019 tensorflow/tensor2tensor

Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times.

LANGUAGE MODELLING LEARNING TO EXECUTE MACHINE TRANSLATION

Discrete Autoencoders for Sequence Models

ICLR 2018 tensorflow/tensor2tensor

We propose to improve the representation in sequence models by augmenting current approaches with an autoencoder that is forced to compress the sequence through an intermediate discrete latent space.

LANGUAGE MODELLING MACHINE TRANSLATION
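A minimal sketch of such a discrete bottleneck, using an argmax quantizer with a straight-through gradient (the paper's actual discretization technique differs; sizes and names here are assumptions):

```python
import torch
import torch.nn as nn

d_model, n_codes = 32, 16
to_logits = nn.Linear(d_model, n_codes)      # scores over discrete codes
codebook = nn.Embedding(n_codes, d_model)    # embedding of each code

def discretize(h):
    """Map hidden states (seq_len, d_model) through a discrete code."""
    soft = torch.softmax(to_logits(h), dim=-1)
    hard = torch.zeros_like(soft).scatter_(
        -1, soft.argmax(dim=-1, keepdim=True), 1.0)
    one_hot = hard + soft - soft.detach()    # straight-through estimator:
                                             # forward uses hard codes,
                                             # backward uses soft gradients
    return one_hot @ codebook.weight         # (seq_len, d_model)

h = torch.randn(10, d_model, requires_grad=True)
z = discretize(h)                            # sequence compressed through 16 codes
```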

Contextual String Embeddings for Sequence Labeling

COLING 2018 zalandoresearch/flair

Recent advances in language modeling using recurrent neural networks have made it viable to model language as distributions over characters.

CHUNKING LANGUAGE MODELLING NAMED ENTITY RECOGNITION PART-OF-SPEECH TAGGING WORD EMBEDDINGS
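Concretely, "a distribution over characters" means the model emits, at each step, a softmax over the character vocabulary; a toy sketch in which random logits stand in for a trained character-LM's output:

```python
import torch

vocab = list("abcdefghijklmnopqrstuvwxyz ")        # character vocabulary
logits = torch.randn(len(vocab))                   # stand-in for char-LM output
probs = torch.softmax(logits, dim=0)               # P(next character | history)
next_char = vocab[torch.multinomial(probs, 1).item()]
print(next_char, float(probs.max()))               # sampled char, top probability
```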