1 code implementation • 20 Oct 2023 • Arijit Sehanobish, Krzysztof Choromanski, Yunfan Zhao, Avinava Dubey, Valerii Likhosherstov
We introduce the concept of scalable neural network kernels (SNNKs), replacements for regular feedforward layers (FFLs) that are capable of approximating the latter but with favorable computational properties.
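The core mechanism is a kernel linearization of the FFL: the layer's action on an input is rewritten as a dot product between a feature map of the input and a feature map of the weights, so the two are disentangled and can be computed independently. Below is a minimal sketch of this idea using standard random Fourier features for a Gaussian-kernel layer; the paper's actual SNNK feature maps differ, and all dimensions and names here are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    d, m, width = 16, 256, 8         # input dim, number of random features, layer width

    # Random Fourier features for the Gaussian kernel k(x, w) = exp(-||x - w||^2 / 2).
    Omega = rng.normal(size=(m, d))
    beta = rng.uniform(0.0, 2.0 * np.pi, size=m)
    def phi(z):                      # (..., d) -> (..., m)
        return np.sqrt(2.0 / m) * np.cos(z @ Omega.T + beta)

    W = 0.25 * rng.normal(size=(width, d))   # layer parameters (kernel centres here)
    x = 0.25 * rng.normal(size=d)

    exact = np.exp(-0.5 * np.sum((x - W) ** 2, axis=1))  # exact kernel-layer output
    approx = phi(x) @ phi(W).T       # linearised form: input features and weight
                                     # features are computed independently
    print(np.max(np.abs(exact - approx)))    # small approximation error

Because phi(W) can be precomputed and cached, replacing the FFL with this bilinear form changes the inference cost profile, which is where the favorable computational properties come from.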
1 code implementation • 2 Feb 2023 • Krzysztof Choromanski, Arijit Sehanobish, Han Lin, Yunfan Zhao, Eli Berger, Tetiana Parshakova, Alvin Pan, David Watkins, Tianyi Zhang, Valerii Likhosherstov, Somnath Basu Roy Chowdhury, Avinava Dubey, Deepali Jain, Tamas Sarlos, Snigdha Chaturvedi, Adrian Weller
We present two new classes of algorithms for efficient field integration on graphs encoding point clouds.
no code implementations • 22 Oct 2022 • Arijit Sehanobish, Kawshik Kannan, Nabila Abraham, Anasuya Das, Benjamin Odry
Large pretrained Transformer-based language models like BERT and GPT have changed the landscape of Natural Language Processing (NLP).
no code implementations • NAACL (ACL) 2022 • Arijit Sehanobish, McCullen Sandora, Nabila Abraham, Jayashri Pawar, Danielle Torres, Anasuya Das, Murray Becker, Richard Herzog, Benjamin Odry, Ron Vianu
Pretrained Transformer-based models finetuned on domain-specific corpora have changed the landscape of NLP.
no code implementations • 9 Apr 2022 • Arijit Sehanobish, Nathaniel Brown, Ishita Daga, Jayashri Pawar, Danielle Torres, Anasuya Das, Murray Becker, Richard Herzog, Benjamin Odry, Ron Vianu
Our work opens up the possibility of applying our method to radiologists' reports on various body parts.
1 code implementation • ICLR 2022 • Krzysztof Choromanski, Haoxian Chen, Han Lin, Yuanzhe Ma, Arijit Sehanobish, Deepali Jain, Michael S Ryoo, Jake Varley, Andy Zeng, Valerii Likhosherstov, Dmitry Kalashnikov, Vikas Sindhwani, Adrian Weller
We propose a new class of random feature methods for linearizing softmax and Gaussian kernels called hybrid random features (HRFs) that automatically adapt the quality of kernel estimation to provide the most accurate approximation in defined regions of interest.
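For intuition, the softmax kernel SM(x, y) = exp(x·y) admits (at least) two classical unbiased random-feature estimators with complementary strengths: trigonometric features are accurate where the kernel value is large, while positive features (as in FAVOR+) behave well where it is small. A rough sketch of the two base estimators follows; the actual HRF combination is data-dependent, not the fixed blend shown here.

    import numpy as np

    rng = np.random.default_rng(0)
    d, m = 8, 4096
    Omega = rng.normal(size=(m, d))
    x, y = 0.3 * rng.normal(size=d), 0.3 * rng.normal(size=d)
    exact = np.exp(x @ y)

    def trig(z):   # trigonometric features: low variance for large kernel values
        c = np.exp(np.sum(z * z) / 2.0) / np.sqrt(m)
        return c * np.concatenate([np.cos(Omega @ z), np.sin(Omega @ z)])

    def pos(z):    # positive features: variance vanishes as the kernel value -> 0
        return np.exp(Omega @ z - np.sum(z * z) / 2.0) / np.sqrt(m)

    est_trig = trig(x) @ trig(y)
    est_pos = pos(x) @ pos(y)
    est_blend = 0.5 * est_trig + 0.5 * est_pos   # crude stand-in for the HRF mixing
    print(exact, est_trig, est_pos, est_blend)   # both estimators hover around exact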
no code implementations • 28 Sep 2021 • Onur Kara, Arijit Sehanobish, Hector H Corzo
Transformers are state-of-the-art deep learning models composed of stacked attention and point-wise, fully connected layers, designed for handling sequential data.
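In miniature, the two building blocks that description refers to look as follows (toy dimensions, a single head, no residuals or normalization):

    import numpy as np

    rng = np.random.default_rng(0)
    L, d = 4, 8                      # sequence length, model dim (illustrative)
    X = rng.normal(size=(L, d))
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

    # Scaled dot-product attention.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)
    A = np.exp(scores); A /= A.sum(axis=1, keepdims=True)   # softmax over keys
    attn_out = A @ V

    # Point-wise (position-wise) fully connected layer: same weights at every position.
    W1, W2 = rng.normal(size=(d, 4 * d)), rng.normal(size=(4 * d, d))
    ffn_out = np.maximum(attn_out @ W1, 0.0) @ W2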
1 code implementation • 16 Jul 2021 • Krzysztof Choromanski, Han Lin, Haoxian Chen, Tianyi Zhang, Arijit Sehanobish, Valerii Likhosherstov, Jack Parker-Holder, Tamas Sarlos, Adrian Weller, Thomas Weingarten
In this paper we provide, to the best of our knowledge, the first comprehensive approach for incorporating various masking mechanisms into Transformer architectures in a scalable way.
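One concrete instance of the problem: in kernelized (linear) attention the L x L attention matrix is never formed, so a mask cannot simply be added to the logits. The causal mask, for example, can still be applied scalably with prefix sums; here is a small sketch of that special case (the paper covers a far more general family of masking mechanisms):

    import numpy as np

    rng = np.random.default_rng(0)
    L, m, dv = 6, 16, 4
    Q, K = rng.random((L, m)), rng.random((L, m))  # non-negative feature maps assumed
    V = rng.normal(size=(L, dv))

    out = np.zeros((L, dv))
    S = np.zeros((m, dv)); z = np.zeros(m)         # running sums over the prefix
    for i in range(L):
        S += np.outer(K[i], V[i]); z += K[i]
        out[i] = (Q[i] @ S) / (Q[i] @ z)           # attends only to positions <= i

    # Matches the masked quadratic computation:
    A = np.tril(Q @ K.T)
    ref = (A @ V) / A.sum(axis=1, keepdims=True)
    print(np.allclose(out, ref))                   # True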
1 code implementation • 8 Jun 2021 • Hector H. Corzo, Arijit Sehanobish, Onur Kara
In this report, we present a deep learning framework termed the Electron Correlation Potential Neural Network (eCPNN) that can learn compact potential functions.
1 code implementation • NeurIPS Workshop TDA_and_Beyond 2020 • Arijit Sehanobish, Neal Ravindra, David van Dijk
In this work, we use a permutation invariant network to map samples from probability measures into a low-dimensional space such that the Euclidean distance between the encoded samples reflects the Wasserstein distance between probability measures.
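A minimal PyTorch sketch of that recipe, under simplifying assumptions: a DeepSets-style encoder (per-sample MLP, mean pooling), 1-D Gaussian measures so that the Wasserstein-1 distance between empirical samples has a closed form via sorting, and a plain regression loss matching embedding distances to W1. Architecture, dimensions, and training details are illustrative, not the paper's.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Permutation-invariant encoder: per-sample network phi, mean-pool, then rho.
    phi = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 64))
    rho = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8))
    def encode(samples):                       # samples: (batch, n, 1)
        return rho(phi(samples).mean(dim=1))

    def w1_1d(a, b):                           # exact 1-D Wasserstein-1 via sorting
        return (a.sort(dim=1).values - b.sort(dim=1).values).abs().mean(dim=(1, 2))

    opt = torch.optim.Adam(list(phi.parameters()) + list(rho.parameters()), lr=1e-3)
    for step in range(500):
        mu = torch.randn(32, 2, 1, 1)          # random means for two Gaussian measures
        a = mu[:, 0] + 0.5 * torch.randn(32, 128, 1)
        b = mu[:, 1] + 0.5 * torch.randn(32, 128, 1)
        target = w1_1d(a, b)
        pred = (encode(a) - encode(b)).norm(dim=1)   # Euclidean distance in embedding
        loss = ((pred - target) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

After training, distances between fixed-size embeddings approximate Wasserstein distances, which is what makes downstream comparisons of probability measures cheap.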
1 code implementation • 23 Jun 2020 • Arijit Sehanobish, Neal G. Ravindra, David van Dijk
In recent years, there has been a great deal of work on incorporating edge features along with node features for prediction tasks.
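A generic example of the pattern this refers to: a message-passing layer whose messages are conditioned on edge features as well as the endpoint node features. This is a sketch of the general idea, not the paper's specific architecture.

    import torch
    import torch.nn as nn

    class EdgeConditionedLayer(nn.Module):
        def __init__(self, node_dim, edge_dim):
            super().__init__()
            self.msg = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, node_dim), nn.ReLU())
            self.upd = nn.Linear(2 * node_dim, node_dim)
        def forward(self, h, edge_index, edge_attr):
            src, dst = edge_index                          # source / target node ids, (E,)
            m = self.msg(torch.cat([h[src], h[dst], edge_attr], dim=-1))
            agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum messages per node
            return torch.relu(self.upd(torch.cat([h, agg], dim=-1)))

    h = torch.randn(5, 16)                                 # 5 nodes, 16-dim node features
    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
    edge_attr = torch.randn(4, 4)                          # 4 edges, 4-dim edge features
    layer = EdgeConditionedLayer(16, 4)
    print(layer(h, edge_index, edge_attr).shape)           # torch.Size([5, 16])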
1 code implementation • 23 Jun 2020 • Arijit Sehanobish, Neal G. Ravindra, David van Dijk
A molecular and cellular understanding of how SARS-CoV-2 variably infects and causes severe COVID-19 remains a bottleneck in developing interventions to end the pandemic.
1 code implementation • 23 Jun 2020 • Arijit Sehanobish, Hector H. Corzo, Onur Kara, David van Dijk
Attempts to apply neural networks (NNs) to a wide range of research problems have been ubiquitous in recent literature.
1 code implementation • 14 Feb 2020 • Neal G. Ravindra, Arijit Sehanobish, Jenna L. Pappalardo, David A. Hafler, David van Dijk
To the best of our knowledge, this is the first effort to use graph attention, and deep learning in general, to predict disease state from single-cell data.
1 code implementation • 22 Sep 2019 • Arijit Sehanobish, Chan Hee Song
In this paper, we build Chinese NER systems that do not use these traditional features; instead, we use lexicographic features of Chinese characters.