We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).
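The key distinction here is between a static embedding, which assigns one vector per word, and a contextualized one, which varies with the surrounding sentence. The toy sketch below illustrates that distinction only; the windowed-averaging scheme and all vectors in it are illustrative assumptions, not the actual method.

```python
# Toy illustration of contextualized word representations (NOT the actual
# method): a static table gives one vector per word, while blending in the
# neighbors' vectors makes the same word ("bank") come out differently in
# different sentences. All vectors are made up for the example.

STATIC = {
    "bank":  [1.0, 0.0],
    "river": [0.0, 1.0],
    "money": [0.0, -1.0],
    "the":   [0.1, 0.1],
}

def contextual(tokens, i, alpha=0.5):
    """Blend token i's static vector with the mean of its neighbors."""
    base = STATIC[tokens[i]]
    neighbors = [STATIC[t] for j, t in enumerate(tokens) if j != i]
    mean = [sum(v[d] for v in neighbors) / len(neighbors) for d in range(2)]
    return [(1 - alpha) * base[d] + alpha * mean[d] for d in range(2)]

v_river = contextual(["the", "river", "bank"], 2)
v_money = contextual(["the", "money", "bank"], 2)
print(v_river, v_money)  # the two "bank" vectors differ: polysemy is visible
```

A static lookup would return the identical vector for "bank" in both sentences; the context-dependent version does not, which is the property the representation above is designed to capture at scale.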
Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive.
Language models pretrained on text from a wide variety of sources form the foundation of today's NLP.
Identifying the intent of a citation in scientific papers (e.g., background information, use of methods, comparing results) is critical for machine reading of individual publications and automated analysis of the scientific literature.
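To make the task concrete, a minimal rule-based baseline over cue phrases might look like the sketch below. The cue lists and label set are illustrative assumptions, not the method of any paper listed here; real systems learn these distinctions from annotated data.

```python
# Minimal rule-based baseline for citation intent classification.
# The cue phrases and the three-way label set (background / method /
# comparison) are illustrative assumptions for this sketch only.

CUES = {
    "comparison": ["outperforms", "compared to", "in contrast to"],
    "method":     ["we use", "following the approach of", "we adopt"],
}

def classify_intent(sentence):
    """Return a coarse intent label for a sentence containing a citation."""
    lowered = sentence.lower()
    for intent, phrases in CUES.items():
        if any(phrase in lowered for phrase in phrases):
            return intent
    return "background"  # default when no cue phrase matches

print(classify_intent("Our model outperforms BERT [3] on all tasks."))
# -> comparison
print(classify_intent("We use the tokenizer of Smith et al. [7]."))
# -> method
print(classify_intent("Citation analysis has a long history [1]."))
# -> background
```

A keyword baseline like this fails on paraphrases ("improves upon", "builds on"), which is why the task is framed as a supervised sentence-classification problem rather than pattern matching.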