no code implementations • COLING 2022 • Subba Reddy Oota, Jashn Arora, Manish Gupta, Raju S. Bapi
(2) Our extensive analysis across 9 broad regions, 11 language sub-regions and 16 visual sub-regions of the brain helps us localize, for the first time, the parts of the brain involved in cross-view tasks like image captioning, image tagging, sentence formation and keyword extraction.
no code implementations • NAACL 2022 • Subba Reddy Oota, Jashn Arora, Veeral Agarwal, Mounika Marreddy, Manish Gupta, Bapi Raju Surampudi
Several popular Transformer based language models have been found to be successful for text-driven brain encoding.
no code implementations • 18 Apr 2022 • Subba Reddy Oota, Jashn Arora, Manish Gupta, Raju S. Bapi
Also, the decoded representations are sufficiently detailed to enable high accuracy for cross-view-translation tasks, with the following pairwise accuracies: IC (78.0), IT (83.0), KE (83.7) and SF (74.5).
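Pairwise (2v2) accuracy of the kind reported above is a standard brain-decoding metric: for every pair of test items, it checks whether the matched prediction-target similarities beat the mismatched ones. A minimal sketch, assuming cosine similarity as the distance measure (the function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def pairwise_accuracy(pred, true):
    """2v2 accuracy: fraction of test-item pairs (i, j) for which
    sim(pred_i, true_i) + sim(pred_j, true_j)
      > sim(pred_i, true_j) + sim(pred_j, true_i)."""
    # cosine-similarity matrix between predictions and targets
    p = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    t = true / np.linalg.norm(true, axis=1, keepdims=True)
    sim = p @ t.T
    n = len(pred)
    correct, total = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            # matched pairing should score higher than the swapped pairing
            if sim[i, i] + sim[j, j] > sim[i, j] + sim[j, i]:
                correct += 1
            total += 1
    return correct / total
```

A chance-level decoder scores about 0.5 on this metric, so values in the 74-84 range above indicate decoded representations well above chance.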
no code implementations • COLING 2022 • Subba Reddy Oota, Jashn Arora, Vijay Rowtula, Manish Gupta, Raju S. Bapi
In this paper, we systematically explore the efficacy of image Transformers (ViT, DEiT, and BEiT) and multi-modal Transformers (VisualBERT, LXMERT, and CLIP) for brain encoding.
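Brain encoding of this kind is commonly implemented as a regularized linear map from stimulus features (e.g. Transformer embeddings) to voxel responses, evaluated by per-voxel correlation on held-out stimuli. A minimal sketch under that assumption, using simulated data in place of real fMRI recordings (all shapes and the ridge penalty are illustrative):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical setup: 200 stimuli, 50-dim stimulus features
# (standing in for Transformer embeddings), 500 voxels.
X = rng.standard_normal((200, 50))
W = rng.standard_normal((50, 500))
Y = X @ W + 0.1 * rng.standard_normal((200, 500))  # simulated voxel responses

# Fit a ridge-regression encoder on a train split.
X_tr, X_te, Y_tr, Y_te = X[:160], X[160:], Y[:160], Y[160:]
encoder = Ridge(alpha=1.0).fit(X_tr, Y_tr)
Y_hat = encoder.predict(X_te)

# Score: Pearson correlation between predicted and observed
# responses, computed separately for each voxel.
r = np.array([np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1]
              for v in range(Y.shape[1])])
mean_r = float(r.mean())
```

In practice the same pipeline is run once per candidate feature space (ViT, DEiT, BEiT, VisualBERT, LXMERT, CLIP, ...), and the per-voxel correlations are compared across models and brain regions.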