no code implementations • 22 Oct 2022 • Arijit Sehanobish, Kawshik Kannan, Nabila Abraham, Anasuya Das, Benjamin Odry
Large pretrained Transformer-based language models like BERT and GPT have changed the landscape of Natural Language Processing (NLP).
no code implementations • NAACL (ACL) 2022 • Arijit Sehanobish, McCullen Sandora, Nabila Abraham, Jayashri Pawar, Danielle Torres, Anasuya Das, Murray Becker, Richard Herzog, Benjamin Odry, Ron Vianu
Pretrained Transformer-based models fine-tuned on domain-specific corpora have changed the landscape of NLP.
2 code implementations • 4 Jun 2019 • Naimul Mefraz Khan, Marcia Hon, Nabila Abraham
In this paper, we attempt to solve these issues with transfer learning, where the state-of-the-art VGG architecture is initialized with pre-trained weights from large benchmark datasets consisting of natural images.
no code implementations • 13 Feb 2019 • Naimul Mefraz Khan, Nabila Abraham, Ling Guan
In this paper, we highlight three issues that limit the performance of machine learning on biomedical images, and tackle them through three case studies: 1) Interactive Machine Learning (IML): we show how IML can drastically improve exploration time and the quality of direct volume rendering.
6 code implementations • 18 Oct 2018 • Nabila Abraham, Naimul Mefraz Khan
We propose a generalized focal loss function based on the Tversky index to address the issue of data imbalance in medical image segmentation.
Ranked #1 on Lesion Segmentation on BUS 2017 Dataset B
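The loss described above combines the Tversky index, TI = TP / (TP + α·FN + β·FP), with a focal exponent, giving (1 − TI)^γ. A minimal NumPy sketch follows; the hyperparameter values (α = 0.7, β = 0.3, γ = 0.75) are illustrative assumptions, not necessarily those used in the paper.

```python
import numpy as np

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal loss built on the Tversky index for binary segmentation.

    y_true: binary ground-truth mask; y_pred: predicted probabilities.
    alpha weights false negatives, beta weights false positives, and
    gamma focuses training on hard examples (values are assumptions).
    """
    y_true = y_true.ravel().astype(float)
    y_pred = y_pred.ravel().astype(float)
    tp = np.sum(y_true * y_pred)          # soft true positives
    fn = np.sum(y_true * (1.0 - y_pred))  # soft false negatives
    fp = np.sum((1.0 - y_true) * y_pred)  # soft false positives
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma
```

With α > β the loss penalizes false negatives more heavily than false positives, which is the usual motivation for Tversky-style losses on imbalanced medical segmentation data, where lesions occupy a small fraction of the image.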