Chunking, also known as shallow parsing, identifies contiguous spans of tokens that form syntactic units such as noun phrases or verb phrases.
Example:
Vinken | , | 61 | years | old |
---|---|---|---|---|
B-NP | I-NP | I-NP | I-NP | I-NP |
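In the BIO scheme above, `B-` marks the first token of a chunk and `I-` marks continuation tokens. A minimal Python sketch of decoding such tags into chunk spans (the function name and end-exclusive span convention are our own):

```python
def bio_to_chunks(tags):
    """Decode BIO tags into (chunk_type, start, end) spans, end exclusive."""
    chunks, start, ctype = [], None, None
    for i, tag in enumerate(tags):
        # Close the open chunk on O, on a new B-, or on an I- of another type.
        if tag == "O" or tag.startswith("B-") or (
            tag.startswith("I-") and ctype != tag[2:]
        ):
            if start is not None:
                chunks.append((ctype, start, i))
                start, ctype = None, None
        if tag.startswith("B-"):
            start, ctype = i, tag[2:]
        elif tag.startswith("I-") and start is None:
            # Tolerate an I- tag with no preceding B-: start a new chunk.
            start, ctype = i, tag[2:]
    if start is not None:
        chunks.append((ctype, start, len(tags)))
    return chunks

print(bio_to_chunks(["B-NP", "I-NP", "I-NP", "I-NP", "I-NP"]))  # [('NP', 0, 5)]
```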
Recent advances in language modeling using recurrent neural networks have made it viable to model language as distributions over characters.
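As a rough illustration of the idea, here is a minimal character-level language model in PyTorch; it is a generic sketch with assumed dimensions, not the architecture from the paper:

```python
import torch
import torch.nn as nn

class CharLM(nn.Module):
    """Minimal character-level LM: each position predicts the next character."""
    def __init__(self, vocab_size, embed_dim=32, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, char_ids):               # (batch, seq_len)
        h, _ = self.rnn(self.embed(char_ids))  # (batch, seq_len, hidden)
        return self.out(h)                     # logits over next characters

text = "chunking is shallow parsing"
vocab = sorted(set(text))
stoi = {c: i for i, c in enumerate(vocab)}
ids = torch.tensor([[stoi[c] for c in text]])
model = CharLM(len(vocab))
logits = model(ids[:, :-1])  # predict character t+1 from characters up to t
loss = nn.functional.cross_entropy(
    logits.reshape(-1, len(vocab)), ids[:, 1:].reshape(-1)
)
```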
Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks.
Ranked #27 on Named Entity Recognition on CoNLL 2003 (English).
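A common way to wire pre-trained embeddings into such a network is to initialize an embedding layer from the pre-trained vectors and fine-tune them with the rest of the model. A sketch under stated assumptions (the `pretrained` dictionary and 50-dimensional vectors are placeholders for a real GloVe-style file):

```python
import numpy as np
import torch
import torch.nn as nn

# Placeholder: in practice this maps words to vectors loaded from a file.
pretrained = {"the": np.random.randn(50), "cat": np.random.randn(50)}
vocab = ["<pad>", "<unk>", "the", "cat", "sat"]

# Random init for out-of-vocabulary words, pre-trained vectors where available.
matrix = np.random.randn(len(vocab), 50) * 0.1
for i, word in enumerate(vocab):
    if word in pretrained:
        matrix[i] = pretrained[word]

# freeze=False lets the vectors be fine-tuned with the rest of the tagger.
embedding = nn.Embedding.from_pretrained(
    torch.tensor(matrix, dtype=torch.float32), freeze=False
)
```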
This paper describes NCRF++, a toolkit for neural sequence labeling.
Ranked #7 on Chunking on Penn Treebank.
We investigate the design challenges of constructing effective and efficient neural sequence labeling systems by reproducing twelve neural sequence labeling models, which include most of the state-of-the-art structures, and conducting a systematic model comparison on three benchmarks (i.e., NER, chunking, and POS tagging).
Selecting optimal parameters for a neural network architecture can often make the difference between mediocre and state-of-the-art performance.
The model can also use sentence-level tag information thanks to a CRF layer.
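The inference step that lets a CRF layer exploit sentence-level tag information is Viterbi decoding over per-token emission scores and tag-to-tag transition scores. A minimal NumPy sketch of that decoder (not NCRF++'s actual implementation):

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Best tag sequence given emissions (seq_len, n_tags) and
    transitions (n_tags, n_tags), where transitions[i, j] scores
    moving from tag i to tag j."""
    seq_len, n_tags = emissions.shape
    score = emissions[0].copy()
    backptr = np.zeros((seq_len, n_tags), dtype=int)
    for t in range(1, seq_len):
        # total[i, j]: best score ending in tag j at step t via tag i at t-1.
        total = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    # Follow back-pointers from the best final tag.
    tags = [int(score.argmax())]
    for t in range(seq_len - 1, 0, -1):
        tags.append(int(backptr[t, tags[-1]]))
    return tags[::-1]
```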
We propose a sequence labeling framework with a secondary training objective, learning to predict surrounding words for every word in the dataset.
Ranked #3 on Grammatical Error Detection on FCE.
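One way to realize such a secondary objective is to attach a word-prediction head to the tagger's hidden states and mix its loss with the tagging loss. A simplified sketch (the framework above predicts surrounding words in both directions; this version predicts only the next word, and the weight `gamma` is an assumed hyperparameter):

```python
import torch
import torch.nn as nn

class MultitaskTagger(nn.Module):
    """Sequence tagger with an auxiliary head that predicts the next word."""
    def __init__(self, vocab_size, n_tags, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.tag_head = nn.Linear(dim, n_tags)       # primary objective
        self.word_head = nn.Linear(dim, vocab_size)  # secondary objective

    def forward(self, words):
        h, _ = self.lstm(self.embed(words))
        return self.tag_head(h), self.word_head(h)

def joint_loss(tag_logits, word_logits, tags, words, gamma=0.1):
    ce = nn.functional.cross_entropy
    tag_loss = ce(tag_logits.flatten(0, 1), tags.flatten())
    # Auxiliary loss: hidden state at position t predicts word t+1.
    lm_loss = ce(word_logits[:, :-1].flatten(0, 1), words[:, 1:].flatten())
    return tag_loss + gamma * lm_loss
```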
JointKPE employs a chunking network to identify high-quality phrases and a ranking network to learn their salience in the document.
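An illustrative sketch of that two-component idea, assuming span representations already come from a shared encoder; this is not the paper's implementation, and all names are hypothetical:

```python
import torch
import torch.nn as nn

class ChunkAndRank(nn.Module):
    """Two heads over shared span representations: one scores whether a
    candidate span is a well-formed phrase, the other scores its salience."""
    def __init__(self, hidden_dim=128):
        super().__init__()
        self.chunk_head = nn.Linear(hidden_dim, 1)  # phrase quality
        self.rank_head = nn.Linear(hidden_dim, 1)   # document-level salience

    def forward(self, span_reprs):                  # (n_spans, hidden_dim)
        return (self.chunk_head(span_reprs).squeeze(-1),
                self.rank_head(span_reprs).squeeze(-1))

spans = torch.randn(10, 128)  # hypothetical encoded candidate spans
chunk_scores, salience = ChunkAndRank()(spans)
# Ranking objective: gold keyphrases (first 5 here, by assumption) should
# outscore non-keyphrases by a margin; the chunk head would be trained
# separately with phrase/non-phrase labels (omitted).
loss = nn.functional.margin_ranking_loss(
    salience[:5], salience[5:], torch.ones(5), margin=1.0
)
```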