no code implementations • 16 Mar 2024 • Abu Zahid Bin Aziz, Mokshagna Sai Teja Karanam, Tushar Kataria, Shireen Y. Elhabian
Second, the feature similarities recently observed across attention heads in multi-head attention architectures indicate significant computational redundancy, suggesting that the network's capacity could be better utilized to enhance performance.
Ranked #1 on Medical Image Registration on OASIS (val DSC metric)
no code implementations • 6 Jul 2023 • Mokshagna Sai Teja Karanam, Tushar Kataria, Krithika Iyer, Shireen Elhabian
However, these augmentation methods focus on shape, whereas deep learning models exhibit an image-based texture bias, resulting in sub-optimal models.