2 code implementations • 9 Sep 2020 • Swetha Mandava, Szymon Migacz, Alex Fit Florea
Transformer-based models consist of interleaved feed-forward blocks, which capture content meaning, and relatively more expensive self-attention blocks, which capture context meaning.
Ranked #1 on Language Modelling on enwik8
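The interleaving described above can be sketched as a layer-pattern builder: a vanilla Transformer alternates the two block types 1:1, while a cheaper variant places the expensive self-attention blocks less frequently. This is a minimal illustrative sketch, not the paper's implementation; the function name and block labels are assumptions.

```python
def build_layer_pattern(n_layers, attention_every=2):
    """Hypothetical sketch: place a self-attention block every
    `attention_every` layers and fill the rest with feed-forward blocks.
    `attention_every=2` reproduces the standard 1:1 interleaving."""
    pattern = []
    for i in range(n_layers):
        if i % attention_every == 0:
            pattern.append("self_attention")
        else:
            pattern.append("feed_forward")
    return pattern

# Standard Transformer: strict alternation of the two block types.
print(build_layer_pattern(4, attention_every=2))
# Cheaper variant: self-attention appears only once every 4 layers.
print(build_layer_pattern(8, attention_every=4))
```

Raising `attention_every` trades context-mixing capacity for compute, since each skipped self-attention block is replaced by a cheaper feed-forward block.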
no code implementations • 10 Aug 2020 • Hoo-chang Shin, Alvin Ihsani, Swetha Mandava, Sharath Turuvekere Sreenivas, Christopher Forster, Jiook Cha, Alzheimer's Disease Neuroimaging Initiative
Synthesizing medical images, such as PET, is a challenging task because their intensity range is much wider and denser than that of photographs and digital renderings, and is often heavily biased toward zero.
no code implementations • 10 Aug 2020 • Hoo-chang Shin, Alvin Ihsani, Ziyue Xu, Swetha Mandava, Sharath Turuvekere Sreenivas, Christopher Forster, Jiook Cha, Alzheimer's Disease Neuroimaging Initiative
This paper proposes an alternative approach in which AD diagnosis is incorporated directly into the GAN training objective, so as to achieve the best AD classification performance.
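One common way to fold a diagnosis task into a GAN objective is to add a weighted classification loss to the discriminator's loss. The sketch below shows that general pattern only; the function name, the non-saturating GAN loss choice, and the weighting parameter `lam` are assumptions, not the paper's actual formulation.

```python
import numpy as np

def combined_gan_diagnosis_loss(d_real, d_fake, cls_logit, label, lam=1.0):
    """Illustrative combined objective: a standard discriminator loss
    (real scored high, fake scored low) plus a weighted binary
    cross-entropy term for AD-vs-control classification.
    `lam` balances image synthesis against diagnosis; all names here
    are hypothetical."""
    eps = 1e-7  # numerical guard for the logs
    # Discriminator part of a non-saturating GAN loss.
    gan_loss = -np.log(d_real + eps) - np.log(1.0 - d_fake + eps)
    # Auxiliary classification head: sigmoid + binary cross-entropy.
    p = 1.0 / (1.0 + np.exp(-cls_logit))
    cls_loss = -(label * np.log(p + eps) + (1 - label) * np.log(1.0 - p + eps))
    return gan_loss + lam * cls_loss

# With lam=0 the objective reduces to the plain GAN loss;
# lam>0 additionally rewards correct diagnosis.
print(combined_gan_diagnosis_loss(0.9, 0.1, 0.0, 1, lam=0.0))
print(combined_gan_diagnosis_loss(0.9, 0.1, 0.0, 1, lam=1.0))
```

Tuning `lam` upward pushes the discriminator's features toward diagnostically relevant structure, which is the intuition behind training the GAN for classification performance rather than image fidelity alone.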