Improving antibody language models with native pairing

28 Aug 2023 · Sarah M. Burbach, Bryan Briney

Current antibody language models are limited by their use of unpaired antibody sequence data and by biases in publicly available antibody sequence datasets, which are skewed toward antibodies against a relatively small number of pathogens. A recently published dataset (Jaffe et al.) of approximately 1.6 × 10^6 natively paired human antibody sequences from healthy donors is by far the largest dataset of its kind and offers a unique opportunity to evaluate how antibody language models can be improved by training with natively paired antibody sequence data. We trained two Baseline Antibody Language Models (BALM), using either natively paired (BALM-paired) or unpaired (BALM-unpaired) sequences from the Jaffe dataset. We provide evidence that training with natively paired sequences substantially improves model performance, and that this improvement results from the model learning immunologically relevant features that span the heavy and light chains. We also show that ESM-2, a state-of-the-art general protein language model, learns similar cross-chain features when fine-tuned with natively paired antibody sequence data.
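The abstract does not specify the training setup, but a minimal sketch of the kind of fine-tuning it describes is shown below: masked-language-model fine-tuning of a public ESM-2 checkpoint on paired sequences using the HuggingFace transformers library. The checkpoint name (`facebook/esm2_t12_35M_UR50D`), the use of the tokenizer's EOS token as a cross-chain separator, and the placeholder sequences are all assumptions for illustration; the paper's actual preprocessing and hyperparameters may differ.

```python
# Minimal sketch (not the authors' code): masked-LM fine-tuning of ESM-2
# on natively paired antibody sequences with HuggingFace transformers.
# Assumptions: heavy and light chains are joined with the tokenizer's EOS
# token as a cross-chain separator, and the (heavy, light) pairs below
# are hypothetical placeholders, not real data.
import torch
from torch.utils.data import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

CHECKPOINT = "facebook/esm2_t12_35M_UR50D"  # a small public ESM-2 model
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForMaskedLM.from_pretrained(CHECKPOINT)

# Hypothetical natively paired (heavy-chain, light-chain) sequences.
pairs = [
    ("EVQLVESGGGLVQPGGSLRLSCAAS", "DIQMTQSPSSLSASVGDRVTITC"),
    ("QVQLQESGPGLVKPSETLSLTCTVS", "EIVLTQSPGTLSLSPGERATLSC"),
]

class PairedAntibodyDataset(Dataset):
    """Encodes each pair as <cls> heavy <eos> light <eos>, so masked-token
    prediction can attend across the chain boundary."""

    def __init__(self, pairs, tokenizer):
        self.examples = []
        for heavy, light in pairs:
            h = tokenizer(heavy, add_special_tokens=False)["input_ids"]
            l = tokenizer(light, add_special_tokens=False)["input_ids"]
            ids = (
                [tokenizer.cls_token_id]
                + h
                + [tokenizer.eos_token_id]  # separator between chains
                + l
                + [tokenizer.eos_token_id]
            )
            self.examples.append({"input_ids": ids})

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]

train_dataset = PairedAntibodyDataset(pairs, tokenizer)

# Standard 15% masked-language-modeling objective; the collator also pads.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="esm2-paired-ft",
        num_train_epochs=1,
        per_device_train_batch_size=2,
    ),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()
```

Feeding both chains in a single sequence is what lets self-attention form the cross-chain features the abstract refers to; training on each chain in isolation (the unpaired setting) gives the model no opportunity to learn them.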
