no code implementations • 2 Jun 2023 • Virginia Fernandez, Pedro Sanchez, Walter Hugo Lopez Pinaya, Grzegorz Jacenków, Sotirios A. Tsaftaris, Jorge Cardoso
Knowledge distillation in neural networks refers to compressing a large model (or dataset) into a smaller counterpart that retains comparable behaviour.
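The classic model-distillation objective trains the small student to match the large teacher's temperature-softened output distribution. A minimal NumPy sketch of that loss (illustrative only; function names and the temperature value are assumptions, not taken from the paper above):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs.

    Minimising this trains the smaller student model to mimic the
    larger teacher's output distribution (soft targets).
    """
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

When the student's logits match the teacher's exactly, the loss is zero; any mismatch yields a positive penalty.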
1 code implementation • 12 Feb 2022 • Grzegorz Jacenków, Alison Q. O'Neil, Sotirios A. Tsaftaris
We use the indication field to drive better image classification, by taking a transformer network which is unimodally pre-trained on text (BERT) and fine-tuning it for multimodal classification of a dual image-text input.
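One common way to feed an image into a text-pretrained transformer such as BERT is to project the image feature into the token-embedding space and prepend it as an extra token, so self-attention runs over both modalities. A toy NumPy sketch under that assumption (the projection matrix and all shapes here are hypothetical, not the paper's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_image_with_text(image_feat, text_token_embs, proj):
    """Project a CNN image feature into the text-embedding space and
    prepend it as an extra token, so a text-pretrained transformer
    can attend jointly over image and text.

    `proj` stands in for a learned (d_img, d_txt) linear map.
    """
    img_token = image_feat @ proj                    # (d_txt,)
    return np.vstack([img_token, text_token_embs])   # (1 + seq_len, d_txt)

# Toy shapes: a 512-d image feature and 4 text tokens of width 768.
image_feat = rng.standard_normal(512)
text_embs = rng.standard_normal((4, 768))
proj = rng.standard_normal((512, 768)) * 0.01
fused = fuse_image_with_text(image_feat, text_embs, proj)
```

The fused sequence then passes through the transformer layers as usual, with the image token attending to (and attended by) every text token.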
1 code implementation • 21 Aug 2020 • Grzegorz Jacenków, Alison Q. O'Neil, Brian Mohr, Sotirios A. Tsaftaris
We evaluate the method on two datasets: a new CLEVR-Seg dataset where we segment objects based on location, and the ACDC dataset conditioned on cardiac phase and slice location within the volume.
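Conditioning a segmentation network on non-imaging information (such as cardiac phase or slice position) is often done with feature-wise modulation: the conditioning vector predicts a per-channel scale and shift applied to intermediate feature maps. A FiLM-style sketch of that idea (parameter names and shapes are illustrative assumptions, not the exact mechanism of the paper above):

```python
import numpy as np

def film_condition(features, cond, w_gamma, b_gamma, w_beta, b_beta):
    """FiLM-style conditioning: a non-imaging vector `cond` (e.g.
    cardiac phase and slice position) predicts a per-channel scale
    gamma and shift beta for a feature map of shape (C, H, W).

    The w_*/b_* arrays stand in for a learned linear layer.
    """
    gamma = cond @ w_gamma + b_gamma   # (C,) per-channel scale
    beta = cond @ w_beta + b_beta      # (C,) per-channel shift
    return gamma[:, None, None] * features + beta[:, None, None]

# Toy example: 3 channels, 2x2 spatial map, 2-d conditioning vector.
features = np.arange(12, dtype=float).reshape(3, 2, 2)
cond = np.array([0.5, -1.0])
# Identity parameters (gamma = 1, beta = 0) leave the features unchanged.
out = film_condition(features, cond,
                     np.zeros((2, 3)), np.ones(3),
                     np.zeros((2, 3)), np.zeros(3))
```

With learned parameters, different conditioning vectors steer the same convolutional features toward different segmentation behaviours.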