Language models are good pathologists: using attention-based sequence reduction and text-pretrained transformers for efficient WSI classification

14 Nov 2022  ·  Juan I. Pisula, Katarzyna Bozek

In digital pathology, Whole Slide Image (WSI) analysis is usually formulated as a Multiple Instance Learning (MIL) problem. Although transformer-based architectures have been used for WSI classification, these methods require modifications to adapt them to the specific challenges of this type of image data. Among these challenges is the amount of memory and compute required by deep transformer models to process long inputs, such as the thousands of image patches that can compose a WSI at $\times 10$ or $\times 20$ magnification. We introduce \textit{SeqShort}, a multi-head attention-based sequence shortening layer that summarizes each WSI into a fixed-length, short sequence of instances, which allows us to reduce the computational cost of self-attention on long sequences and to include positional information that is unavailable in other MIL approaches. Furthermore, we show that WSI classification performance improves when the downstream transformer architecture has been pre-trained on a large corpus of text data, with less than 0.1\% of its parameters fine-tuned. We demonstrate the effectiveness of our method on lymph node metastases classification and cancer subtype classification tasks, without the need to design a WSI-specific transformer or perform in-domain pre-training, while keeping a reduced compute budget and a low number of trainable parameters.
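A minimal sketch of the sequence-shortening idea is shown below, assuming a PyTorch implementation with learned query vectors that cross-attend over the patch embeddings. The layer name, embedding dimension, number of heads, and output length are illustrative assumptions, not the authors' exact SeqShort configuration.

```python
# Sketch of an attention-based sequence-shortening layer (assumed design, not the
# paper's exact implementation): a fixed set of learned queries cross-attends over
# a variable-length bag of WSI patch embeddings and pools it into a short sequence.
import torch
import torch.nn as nn


class SequenceShortening(nn.Module):
    def __init__(self, embed_dim: int = 768, num_heads: int = 8, out_len: int = 128):
        super().__init__()
        # out_len learned query vectors fix the length of the shortened sequence
        self.queries = nn.Parameter(torch.randn(1, out_len, embed_dim) * 0.02)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        # patch_embeddings: (batch, n_patches, embed_dim); n_patches may be thousands
        b = patch_embeddings.size(0)
        q = self.queries.expand(b, -1, -1)
        # Each learned query attends over all patch embeddings and aggregates them
        shortened, _ = self.attn(q, patch_embeddings, patch_embeddings)
        return self.norm(shortened)  # (batch, out_len, embed_dim)


if __name__ == "__main__":
    wsi_patches = torch.randn(1, 5000, 768)   # e.g. 5000 patch features from one WSI
    shortener = SequenceShortening(out_len=128)
    summary = shortener(wsi_patches)
    print(summary.shape)                      # torch.Size([1, 128, 768])
    # The fixed-length summary could then be fed to a largely frozen,
    # text-pretrained transformer, fine-tuning only a small fraction of its weights.
```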
