ELECTRAMed: a new pre-trained language representation model for biomedical NLP

19 Apr 2021  ·  Giacomo Miolo, Giulio Mantoan, Carlotta Orsenigo ·

The overwhelming amount of biomedical scientific texts calls for the development of effective language models able to tackle a wide range of biomedical natural language processing (NLP) tasks. The most recent dominant approaches are domain-specific models, initialized with general-domain textual data and then trained on a variety of scientific corpora. However, it has been observed that for specialized domains in which large corpora exist, training a model from scratch with just in-domain knowledge may yield better results. Moreover, the increasing focus on the compute costs of pre-training has recently led to the design of more efficient architectures, such as ELECTRA. In this paper, we propose a pre-trained domain-specific language model, called ELECTRAMed, suited for the biomedical field. The novel approach inherits the learning framework of the general-domain ELECTRA architecture, as well as its computational advantages. Experiments performed on benchmark datasets for several biomedical NLP tasks support the usefulness of ELECTRAMed, which sets a new state-of-the-art result on the BC5CDR corpus for named entity recognition, and provides the best outcome in 2 out of the 5 runs of the 7th BioASQ-factoid Challenge for the question answering task.
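A minimal sketch of how an ELECTRA-style biomedical checkpoint such as ELECTRAMed could be applied to a token-level task like named entity recognition with the Hugging Face transformers API. The model identifier and label set below are placeholders, not an official ELECTRAMed release, and the classification head is freshly initialized, so it would need fine-tuning before producing meaningful predictions.

```python
# Sketch: ELECTRA-style encoder with a token-classification head for biomedical NER.
# MODEL_NAME is a placeholder; substitute the actual ELECTRAMed checkpoint identifier.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "path/to/electramed-base"  # placeholder, not a verified model ID

labels = ["O", "B-Chemical", "I-Chemical"]  # illustrative BIO tag set
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_NAME,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)  # the classification head is randomly initialized and must be fine-tuned

sentence = "Naloxone reverses the antihypertensive effect of clonidine."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, num_labels)

predicted_ids = logits.argmax(dim=-1).squeeze(0).tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"].squeeze(0))
for token, label_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[label_id])
```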


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Named Entity Recognition (NER) | BC5CDR | ELECTRAMed | F1 | 90.03 | #7 | |
| Relation Extraction | ChemProt | ELECTRAMed | F1 | 72.94 | #11 | |
| Drug–drug Interaction Extraction | DDI extraction 2013 corpus | ELECTRAMed | Micro F1 | 79.13 | #6 | Yes |
| Named Entity Recognition (NER) | NCBI-disease | ELECTRAMed | F1 | 87.54 | #20 | |
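The NER rows above report entity-level F1. As a hedged illustration of how that metric is typically computed for BIO-tagged predictions, the sketch below uses the seqeval library with made-up tag sequences; the tags and scores are examples only, not values from the paper.

```python
# Sketch: entity-level (micro-averaged) F1 over BIO tag sequences with seqeval.
from seqeval.metrics import classification_report, f1_score

# Gold and predicted tag sequences, one inner list per sentence (illustrative data).
y_true = [["B-Chemical", "I-Chemical", "O", "O", "B-Disease"]]
y_pred = [["B-Chemical", "I-Chemical", "O", "O", "O"]]

print(f1_score(y_true, y_pred))               # entity-level micro F1
print(classification_report(y_true, y_pred))  # per-entity-type breakdown
```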
