Search Results for author: Arijit Sehanobish

Found 15 papers, 11 papers with code

Scalable Neural Network Kernels

1 code implementation • 20 Oct 2023 • Arijit Sehanobish, Krzysztof Choromanski, Yunfan Zhao, Avinava Dubey, Valerii Likhosherstov

We introduce the concept of scalable neural network kernels (SNNKs): replacements for regular feedforward layers (FFLs) that approximate the latter but have favorable computational properties.
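
To make the SNNK idea concrete: a feedforward unit of the form σ(wᵀx) is replaced by an inner product φ(x)ᵀψ(w) of two randomized feature maps, so input features can be computed once and reused across all output neurons. Below is a minimal NumPy sketch using the classic random-Fourier-feature estimator of the Gaussian kernel as a stand-in; the paper's actual construction is more general.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(v, omegas, biases):
    """Random Fourier features: phi(v)_i = sqrt(2/m) * cos(omega_i . v + b_i)."""
    m = len(biases)
    return np.sqrt(2.0 / m) * np.cos(v @ omegas.T + biases)

d, m = 16, 4096                       # input dim, number of random features
omegas = rng.standard_normal((m, d))  # omega ~ N(0, I) targets the Gaussian kernel
biases = rng.uniform(0.0, 2.0 * np.pi, m)

x = 0.2 * rng.standard_normal(d)      # layer input (small norm keeps the kernel O(1))
w = 0.2 * rng.standard_normal(d)      # one row of the layer's weight matrix

# The "activation" being approximated: a Gaussian kernel between input and weights
exact = np.exp(-np.linalg.norm(x - w) ** 2 / 2.0)

# SNNK-style factorization: input features and weight features never interact
# until the final dot product, so phi(x) can be computed once per input and
# reused across all output neurons.
approx = rff(x, omegas, biases) @ rff(w, omegas, biases)
print(exact, approx)                  # approx converges to exact as m grows
```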

Hybrid Random Features

1 code implementation • ICLR 2022 • Krzysztof Choromanski, Haoxian Chen, Han Lin, Yuanzhe Ma, Arijit Sehanobish, Deepali Jain, Michael S Ryoo, Jake Varley, Andy Zeng, Valerii Likhosherstov, Dmitry Kalashnikov, Vikas Sindhwani, Adrian Weller

We propose a new class of random feature methods for linearizing softmax and Gaussian kernels, called hybrid random features (HRFs), that automatically adapt the quality of kernel estimation to provide the most accurate approximation in defined regions of interest.

Tasks: Benchmarking
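
HRFs blend several unbiased random-feature estimators of the softmax kernel SM(x, y) = exp(xᵀy), weighting them to be accurate in chosen regions. A minimal NumPy sketch of one such base estimator that this line of work builds on, the positive ("FAVOR+"-style) random features; dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def positive_features(v, omegas):
    """Positive random features for the softmax kernel exp(x . y):
    phi(v)_i = exp(omega_i . v - ||v||^2 / 2) / sqrt(m)."""
    m = omegas.shape[0]
    return np.exp(v @ omegas.T - v @ v / 2.0) / np.sqrt(m)

d, m = 16, 8192
omegas = rng.standard_normal((m, d))   # omega ~ N(0, I_d)

x = 0.3 * rng.standard_normal(d)
y = 0.3 * rng.standard_normal(d)

exact = np.exp(x @ y)                  # softmax kernel SM(x, y)
approx = positive_features(x, omegas) @ positive_features(y, omegas)
print(exact, approx)                   # unbiased: E[phi(x) . phi(y)] = exp(x . y)
```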

Fine-tuning Vision Transformers for the Prediction of State Variables in Ising Models

no code implementations • 28 Sep 2021 • Onur Kara, Arijit Sehanobish, Hector H Corzo

Transformers are state-of-the-art deep learning models that are composed of stacked attention and point-wise, fully connected layers designed for handling sequential data.
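
For reference, a minimal PyTorch sketch of the encoder block the abstract describes, i.e. self-attention followed by a point-wise fully connected layer, each wrapped in a residual connection and layer norm; all dimensions are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One Transformer encoder block: self-attention + point-wise FFN,
    each with a residual connection and layer normalization."""
    def __init__(self, dim=256, heads=8, ffn_dim=1024):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, ffn_dim), nn.GELU(),
                                 nn.Linear(ffn_dim, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):                   # x: (batch, seq_len, dim)
        a, _ = self.attn(x, x, x)           # attention across the sequence
        x = self.norm1(x + a)               # residual + norm
        return self.norm2(x + self.ffn(x))  # FFN applied independently per token

x = torch.randn(2, 64, 256)                 # e.g. 64 image patches per sample
print(EncoderBlock()(x).shape)              # torch.Size([2, 64, 256])
```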

From block-Toeplitz matrices to differential equations on graphs: towards a general theory for scalable masked Transformers

1 code implementation • 16 Jul 2021 • Krzysztof Choromanski, Han Lin, Haoxian Chen, Tianyi Zhang, Arijit Sehanobish, Valerii Likhosherstov, Jack Parker-Holder, Tamas Sarlos, Adrian Weller, Thomas Weingarten

In this paper we provide, to the best of our knowledge, the first comprehensive approach for incorporating various masking mechanisms into Transformer architectures in a scalable way.

Tasks: Graph Attention
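
One primitive that makes Toeplitz-structured masks scalable is the O(L log L) Toeplitz matrix-vector product, obtained by embedding the Toeplitz matrix in a circulant and applying the FFT. A NumPy sketch of that primitive alone (not the paper's full masked-attention algorithm):

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(c, r, v):
    """Multiply the Toeplitz matrix with first column c and first row r by v
    in O(L log L), via circulant embedding and the FFT."""
    L = len(v)
    # First column of the 2L-sized circulant whose top-left block is Toeplitz(c, r)
    circ = np.concatenate([c, [0.0], r[:0:-1]])
    out = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(v, 2 * L))
    return out[:L].real

rng = np.random.default_rng(0)
L = 512
c, r = rng.standard_normal(L), rng.standard_normal(L)
r[0] = c[0]                        # Toeplitz matrices share the corner entry
v = rng.standard_normal(L)

dense = toeplitz(c, r) @ v         # O(L^2) reference
fast = toeplitz_matvec(c, r, v)    # O(L log L)
print(np.allclose(dense, fast))    # True
```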

Learning Full Configuration Interaction Electron Correlations with Deep Learning

1 code implementation • 8 Jun 2021 • Hector H. Corzo, Arijit Sehanobish, Onur Kara

In this report, we present a deep learning framework, termed the Electron Correlation Potential Neural Network (eCPNN), that can learn compact potential functions.

Permutation invariant networks to learn Wasserstein metrics

1 code implementation • NeurIPS Workshop TDA_and_Beyond 2020 • Arijit Sehanobish, Neal Ravindra, David van Dijk

In this work, we use a permutation invariant network to map samples from probability measures into a low-dimensional space such that the Euclidean distance between the encoded samples reflects the Wasserstein distance between probability measures.
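
A minimal PyTorch sketch of the training recipe this abstract describes, assuming a Deep Sets-style permutation-invariant encoder and precomputed pairwise Wasserstein distances as regression targets; the data and targets below are placeholders, not the paper's setup:

```python
import torch
import torch.nn as nn

class DeepSetEncoder(nn.Module):
    """Permutation-invariant encoder: embed each point, mean-pool, project."""
    def __init__(self, point_dim=2, embed_dim=16):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(point_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 64))
        self.rho = nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, embed_dim))

    def forward(self, x):                         # x: (batch, n_points, point_dim)
        return self.rho(self.phi(x).mean(dim=1))  # pooling gives permutation invariance

enc = DeepSetEncoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)

# Toy data: pairs of point clouds with placeholder Wasserstein targets.
# In practice w_target comes from an exact OT solver, e.g. the POT library.
a = torch.randn(32, 100, 2)
b = torch.randn(32, 100, 2) + 1.0
w_target = torch.full((32,), 1.41)                # placeholder distances

for _ in range(100):
    d = torch.norm(enc(a) - enc(b), dim=1)        # Euclidean distance in embedding space
    loss = ((d - w_target) ** 2).mean()           # match the Wasserstein targets
    opt.zero_grad(); loss.backward(); opt.step()
```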

Self-supervised edge features for improved Graph Neural Network training

1 code implementation • 23 Jun 2020 • Arijit Sehanobish, Neal G. Ravindra, David van Dijk

In recent years, there has been considerable work on incorporating edge features alongside node features for prediction tasks.

Tasks: General Classification, Graph Attention (+2 more)

Gaining Insight into SARS-CoV-2 Infection and COVID-19 Severity Using Self-supervised Edge Features and Graph Neural Networks

1 code implementation • 23 Jun 2020 • Arijit Sehanobish, Neal G. Ravindra, David van Dijk

A molecular and cellular understanding of how SARS-CoV-2 variably infects and causes severe COVID-19 remains a bottleneck in developing interventions to end the pandemic.

Tasks: Explainable Artificial Intelligence (XAI), General Classification (+3 more)

Learning Potentials of Quantum Systems using Deep Neural Networks

1 code implementation • 23 Jun 2020 • Arijit Sehanobish, Hector H. Corzo, Onur Kara, David van Dijk

Attempts to apply Neural Networks (NNs) to a wide range of research problems have been plentiful in recent literature.

Disease State Prediction From Single-Cell Data Using Graph Attention Networks

1 code implementation • 14 Feb 2020 • Neal G. Ravindra, Arijit Sehanobish, Jenna L. Pappalardo, David A. Hafler, David van Dijk

To the best of our knowledge, this is the first effort to use graph attention, and deep learning in general, to predict disease state from single-cell data.

Tasks: Disease Prediction, Graph Attention (+1 more)
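
A minimal sketch of node-level disease-state classification with graph attention, using PyTorch Geometric's GATConv; the random "cell graph" below is a placeholder for the kNN graph one would build from single-cell expression data:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv

class CellGAT(torch.nn.Module):
    """Two graph-attention layers mapping per-cell gene expression to a
    disease-state logit for every cell (node classification)."""
    def __init__(self, n_genes, n_states, heads=8):
        super().__init__()
        self.gat1 = GATConv(n_genes, 32, heads=heads)
        self.gat2 = GATConv(32 * heads, n_states, heads=1)

    def forward(self, x, edge_index):
        x = F.elu(self.gat1(x, edge_index))
        return self.gat2(x, edge_index)

# Toy stand-in for a cell-cell kNN graph: 500 "cells", 2000 "genes"
x = torch.randn(500, 2000)
edge_index = torch.randint(0, 500, (2, 5000))   # random edges as a placeholder
logits = CellGAT(n_genes=2000, n_states=2)(x, edge_index)
print(logits.shape)                             # torch.Size([500, 2])
```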

Using Chinese Glyphs for Named Entity Recognition

1 code implementation • 22 Sep 2019 • Arijit Sehanobish, Chan Hee Song

In this paper, we build Chinese NER systems that do not rely on these traditional features; instead, we use lexicographic features (glyphs) of Chinese characters.

Tasks: Named Entity Recognition (+2 more)
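
A minimal sketch of the glyph-feature idea: render each character to a small image and encode it with a CNN, producing features that can be concatenated with standard character embeddings in an NER tagger. The font path is a placeholder assumption; any font covering CJK characters works:

```python
import numpy as np
import torch
import torch.nn as nn
from PIL import Image, ImageDraw, ImageFont

def glyph_image(char, size=24, font_path="NotoSansCJK-Regular.ttc"):
    """Render a single character to a size x size grayscale array.
    font_path is a placeholder; point it at any installed CJK font."""
    img = Image.new("L", (size, size), color=0)
    font = ImageFont.truetype(font_path, size)
    ImageDraw.Draw(img).text((0, 0), char, fill=255, font=font)
    return np.asarray(img, dtype=np.float32) / 255.0

class GlyphEncoder(nn.Module):
    """Small CNN turning a glyph image into a feature vector that can be
    concatenated with character embeddings in the NER tagger."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 6 * 6, out_dim))

    def forward(self, imgs):            # imgs: (batch, 1, 24, 24)
        return self.net(imgs)

img = torch.from_numpy(glyph_image("中"))[None, None]   # (1, 1, 24, 24)
print(GlyphEncoder()(img).shape)                         # torch.Size([1, 64])
```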
