PMC-CLIP: Contrastive Language-Image Pre-training using Biomedical Documents

13 Mar 2023  ·  Weixiong Lin, Ziheng Zhao, Xiaoman Zhang, Chaoyi Wu, Ya Zhang, Yanfeng Wang, Weidi Xie

Foundation models trained on large-scale datasets have recently surged in CV and NLP. In contrast, development in the biomedical domain lags far behind due to data scarcity. To address this issue, we build and release PMC-OA, a biomedical dataset with 1.6M image-caption pairs collected from PubMedCentral's OpenAccess subset, 8 times larger than previous datasets. PMC-OA covers diverse modalities and diseases, with the majority of the image-caption samples aligned at a finer-grained level, i.e., subfigure and subcaption. Pretraining a CLIP-style model on PMC-OA, our model, named PMC-CLIP, achieves state-of-the-art results on various downstream tasks, including image-text retrieval on ROCO, MedMNIST image classification, and Medical VQA, e.g., +8.1% R@10 on image-text retrieval and +3.9% accuracy on image classification.
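As a quick illustration of the CLIP-style contrastive objective the paper builds on, below is a minimal PyTorch sketch of a symmetric InfoNCE loss over a batch of aligned image-caption embeddings. The encoders, embedding dimension, and temperature are hypothetical placeholders for illustration only, not the released PMC-CLIP implementation.

```python
# Minimal sketch of a CLIP-style contrastive objective (symmetric InfoNCE).
# Assumes paired image/text embeddings; dimensions and temperature are
# illustrative placeholders, not the actual PMC-CLIP configuration.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor,
                          text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of matched image-caption pairs."""
    # L2-normalize so dot products become cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix: logits[i, j] = sim(image_i, caption_j).
    logits = image_emb @ text_emb.t() / temperature

    # Matched pairs lie on the diagonal of the similarity matrix.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the image-to-text and text-to-image cross-entropy losses.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Toy usage with random tensors standing in for encoder outputs.
if __name__ == "__main__":
    batch, dim = 8, 512
    img = torch.randn(batch, dim)   # e.g., vision-encoder output
    txt = torch.randn(batch, dim)   # e.g., text-encoder output
    print(clip_contrastive_loss(img, txt).item())
```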


Datasets


Introduced in the Paper:

PMC-OA

Used in the Paper:

VQA-RAD, SLAKE, PMC-VQA, MedICaT
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Medical Visual Question Answering | PMC-VQA | PMC-CLIP | Accuracy | 24.7 | #3 |
| Visual Question Answering (VQA) | PMC-VQA | PMC-CLIP | Accuracy | 24.7 | #3 |
| Medical Visual Question Answering | VQA-RAD | PMC-CLIP | Close-ended Accuracy | 84.0 | #5 |
| Medical Visual Question Answering | VQA-RAD | PMC-CLIP | Open-ended Accuracy | 67.0 | #5 |
| Medical Visual Question Answering | VQA-RAD | PMC-CLIP | Overall Accuracy | 77.6 | #4 |
