BioFLAIR: Pretrained Pooled Contextualized Embeddings for Biomedical Sequence Labeling Tasks

13 Aug 2019 · Shreyas Sharma, Ron Daniel Jr

Biomedical Named Entity Recognition (NER) is a challenging problem in biomedical information processing due to the widespread ambiguity of out-of-context terms and extensive lexical variation. Performance on bioNER benchmarks continues to improve thanks to advances like BERT, GPT, and XLNet. FLAIR (1) is an alternative embedding model that is less computationally intensive than the others mentioned. We test FLAIR and its pretrained PubMed embeddings (which we term BioFLAIR) on a variety of bioNER tasks and compare the results with those from BERT-type networks. We also investigate the effects of a small amount of additional pretraining on PubMed content, and of combining FLAIR and ELMo models. We find that with the provided embeddings, FLAIR performs on par with the BERT networks, even establishing a new state of the art on one benchmark. Additional pretraining did not provide a clear benefit, although this might change with more extensive pretraining. Stacking the FLAIR embeddings with other embeddings typically does provide a boost in the benchmark results.
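The setup described above amounts to feeding pooled, contextualized FLAIR embeddings pretrained on PubMed (optionally stacked with ELMo) into a sequence tagger. The sketch below is a rough illustration of such a configuration using the flair library, not the authors' exact code: the corpus path, the ELMo model identifier, and the training hyperparameters are placeholders/assumptions.

```python
# Rough sketch (not the authors' code): a bio NER tagger built on pooled
# FLAIR embeddings pretrained on PubMed, stacked with ELMo embeddings.
from flair.data import Corpus
from flair.datasets import ColumnCorpus
from flair.embeddings import PooledFlairEmbeddings, ELMoEmbeddings, StackedEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Placeholder: a CoNLL-style NER corpus directory with train/dev/test files.
columns = {0: "text", 1: "ner"}
corpus: Corpus = ColumnCorpus("resources/ncbi-disease", columns)
tag_dictionary = corpus.make_tag_dictionary(tag_type="ner")

# Pooled contextualized FLAIR embeddings pretrained on PubMed abstracts,
# stacked with ELMo (the ELMo model name here is an assumption).
embeddings = StackedEmbeddings([
    PooledFlairEmbeddings("pubmed-forward"),
    PooledFlairEmbeddings("pubmed-backward"),
    ELMoEmbeddings("pubmed"),
])

# Standard flair BiLSTM-CRF sequence tagger over the stacked embeddings.
tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=tag_dictionary,
    tag_type="ner",
    use_crf=True,
)

# Training hyperparameters are illustrative, not those reported in the paper.
trainer = ModelTrainer(tagger, corpus)
trainer.train("resources/taggers/bioflair-ner", max_epochs=100)
```

Dropping the ELMoEmbeddings line from the stack reproduces the FLAIR-only configuration; the paper's reported numbers come from comparing such variants across the benchmarks below.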


Results from the Paper


Ranked #3 on Named Entity Recognition (NER) on Species-800 (using extra training data)

Task                             Dataset        Model     Metric  Value  Global Rank
Named Entity Recognition (NER)   BC5CDR         BioFLAIR  F1      89.42  #10
Named Entity Recognition (NER)   JNLPBA         BioFLAIR  F1      77.03  #15
Named Entity Recognition (NER)   LINNAEUS       BioFLAIR  F1      87.02  #3
Named Entity Recognition (NER)   NCBI-disease   BioFLAIR  F1      88.85  #11
Named Entity Recognition (NER)   Species-800    BioFLAIR  F1      82.44  #3
