1 code implementation • 24 Oct 2023 • Subhojeet Pramanik, Esraa Elelimy, Marlos C. Machado, Adam White
In this paper we introduce recurrent alternatives to the transformer self-attention mechanism that offer a context-independent inference cost, leverage long-range dependencies effectively, and perform well in practice.
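To illustrate why a recurrent formulation can yield a context-independent inference cost, here is a minimal sketch of a generic linear-attention recurrence (in the style of linear transformers). It is an assumption-laden illustration, not the authors' exact mechanism: the feature map `phi` and the normalization are placeholders. A fixed-size state replaces the growing key/value cache of softmax self-attention, so each new token costs the same regardless of context length.

```python
def recurrent_linear_attention(queries, keys, values):
    """Linear-attention recurrence over (query, key, value) vectors,
    each given as a list of floats.

    Instead of attending over a growing cache of past keys/values, we
    keep a fixed-size state: S (d_k x d_v matrix of feature-key/value
    outer products) and z (d_k normalizer).  Each step therefore costs
    O(d_k * d_v), independent of how many tokens have been seen.
    """
    def phi(x):  # positive feature map (assumption: ReLU-like, for illustration)
        return [max(xi, 0.0) + 1e-6 for xi in x]

    d_k, d_v = len(keys[0]), len(values[0])
    S = [[0.0] * d_v for _ in range(d_k)]   # running sum of outer(phi(k), v)
    z = [0.0] * d_k                         # running sum of phi(k)
    outputs = []
    for q, k, v in zip(queries, keys, values):
        fk = phi(k)
        for i in range(d_k):                # S += outer(phi(k), v); z += phi(k)
            z[i] += fk[i]
            for j in range(d_v):
                S[i][j] += fk[i] * v[j]
        fq = phi(q)
        denom = sum(fq[i] * z[i] for i in range(d_k)) + 1e-6
        outputs.append([sum(fq[i] * S[i][j] for i in range(d_k)) / denom
                        for j in range(d_v)])
    return outputs
```

Because the state `(S, z)` has fixed size, the per-token cost and memory stay constant as the context grows, which is the property the paper targets.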
no code implementations • 30 Sep 2020 • Subhojeet Pramanik, Shashank Mujumdar, Hima Patel
Recent approaches in the literature have exploited the multi-modal information in documents (text, layout, image) to serve specific downstream document tasks.
2 code implementations • 17 Jul 2019 • Subhojeet Pramanik, Priyanka Agrawal, Aman Hussain
We also show that this neural network, pre-trained on some modalities, assists in learning unseen tasks such as video captioning and video question answering.
1 code implementation • 31 May 2018 • Subhojeet Pramanik, Aman Hussain
We perform text normalization, i.e., the transformation of words from their written to their spoken form, using a memory augmented neural network.
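To make the written-to-spoken task concrete, here is a tiny rule-based sketch of number normalization. It illustrates the task only, not the paper's memory augmented neural network; the helper names and the 0-999 coverage are my own choices for illustration.

```python
# Illustrative only: a toy rule-based normalizer for the written-to-spoken
# task that the paper addresses with a memory augmented neural network.
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def spell_number(n):
    """Spell out an integer in 0..999 as English words."""
    if n < 20:
        return ONES[n]
    if n < 100:
        word = TENS[n // 10]
        return word + (" " + ONES[n % 10] if n % 10 else "")
    word = ONES[n // 100] + " hundred"
    return word + (" " + spell_number(n % 100) if n % 100 else "")

def normalize(token):
    """Map a written token to its spoken form; pass plain words through."""
    return spell_number(int(token)) if token.isdigit() else token

print(" ".join(normalize(t) for t in "I paid 123 dollars".split()))
# -> I paid one hundred twenty three dollars
```

Real text normalization must also handle dates, currencies, abbreviations, and context-dependent readings, which is why the paper learns the mapping with a neural model rather than hand-written rules.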