Search Results for author: Jashn Arora

Found 4 papers, 0 papers with code

Multi-view and Cross-view Brain Decoding

no code implementations COLING 2022 Subba Reddy Oota, Jashn Arora, Manish Gupta, Raju S. Bapi

Our extensive analysis across 9 broad regions, 11 language sub-regions and 16 visual sub-regions of the brain helps us localize, for the first time, the parts of the brain involved in cross-view tasks like image captioning, image tagging, sentence formation and keyword extraction.

Tasks: Brain Decoding, Image Captioning, +2

Cross-view Brain Decoding

no code implementations 18 Apr 2022 Subba Reddy Oota, Jashn Arora, Manish Gupta, Raju S. Bapi

Also, the decoded representations are sufficiently detailed to enable high accuracy for cross-view translation tasks, with the following pairwise accuracies: IC (78.0), IT (83.0), KE (83.7) and SF (74.5).

Tasks: Brain Decoding, Image Captioning, +4

Visio-Linguistic Brain Encoding

no code implementations COLING 2022 Subba Reddy Oota, Jashn Arora, Vijay Rowtula, Manish Gupta, Raju S. Bapi

In this paper, we systematically explore the efficacy of image Transformers (ViT, DEiT, and BEiT) and multi-modal Transformers (VisualBERT, LXMERT, and CLIP) for brain encoding.
