no code implementations • 22 Feb 2024 • Mitchell Black, Zhengchao Wan, Gal Mishne, Amir Nayyeri, Yusu Wang
The distinguishing power of graph transformers is closely tied to the choice of positional encoding: features used to augment the base transformer with information about the graph.
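One widely used positional encoding in this literature is built from eigenvectors of the normalized graph Laplacian. The sketch below is a generic illustration of that idea, not the construction proposed in this paper; the function name and the 4-cycle example are mine.

```python
import numpy as np

def laplacian_positional_encoding(adj, k):
    """Return the k eigenvectors of the symmetric normalized Laplacian
    with smallest nonzero eigenvalues -- a common node positional encoding."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(lap)        # ascending eigenvalues
    return vecs[:, 1:k + 1]                 # skip the trivial constant eigenvector

# toy graph: a 4-cycle
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], float)
pe = laplacian_positional_encoding(adj, 2)
print(pe.shape)  # each node gets a 2-dimensional positional feature
```

In practice these features are concatenated to (or added into) the node features before the transformer layers; note the well-known sign/basis ambiguity of eigenvectors, which such encodings must handle.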
no code implementations • 13 Feb 2024 • Gal Mishne, Adam Charles
Optical imaging of the brain has expanded dramatically in the past two decades.
no code implementations • 12 Feb 2024 • Changhao Shi, Gal Mishne
We establish statistical consistency for the penalized maximum likelihood estimation (MLE) of a Cartesian product Laplacian, and propose an efficient algorithm to solve the problem.
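For context, the Laplacian of a Cartesian product graph is the Kronecker sum of the factor Laplacians, $L_{G \square H} = L_G \otimes I + I \otimes L_H$ — the structure whose penalized MLE the abstract refers to. A minimal sketch of that identity (the helper names are mine):

```python
import numpy as np

def laplacian(adj):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

def cartesian_product_laplacian(l1, l2):
    """Kronecker sum: Laplacian of the Cartesian product graph."""
    n, m = l1.shape[0], l2.shape[0]
    return np.kron(l1, np.eye(m)) + np.kron(np.eye(n), l2)

# path on 2 nodes x path on 3 nodes -> Laplacian of a 2x3 grid graph
p2 = laplacian(np.array([[0, 1], [1, 0]], float))
p3 = laplacian(np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float))
grid = cartesian_product_laplacian(p2, p3)
print(grid.shape)  # (6, 6); rows of any Laplacian sum to zero
```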
no code implementations • 21 Dec 2023 • Ram Dyuthi Sristi, Ofir Lindenbaum, Maria Lavzin, Jackie Schiller, Gal Mishne, Hadas Benisty
We study the problem of contextual feature selection, where the goal is to learn a predictive function while identifying subsets of informative features conditioned on specific contexts.
no code implementations • 31 Jul 2023 • Chester Holtz, PengWen Chen, Alexander Cloninger, Chung-Kuan Cheng, Gal Mishne
Motivated by the need to address the degeneracy of canonical Laplace learning algorithms in low label rates, we propose to reformulate graph-based semi-supervised learning as a nonconvex generalization of a \emph{Trust-Region Subproblem} (TRS).
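For reference, the canonical Laplace learning baseline that the abstract says degenerates at low label rates computes the harmonic extension of the labels: fix values on labeled nodes and solve $L_{uu} f_u = -L_{ul} f_l$ for the rest. The sketch below shows only that baseline, not the paper's TRS reformulation:

```python
import numpy as np

def laplace_learning(adj, labels, labeled_idx):
    """Classical Laplace (harmonic) learning on a graph: clamp labeled
    nodes, solve the Laplace equation for the unlabeled ones."""
    n = adj.shape[0]
    lap = np.diag(adj.sum(axis=1)) - adj
    unlabeled = np.setdiff1d(np.arange(n), labeled_idx)
    f = np.zeros(n)
    f[labeled_idx] = labels
    f[unlabeled] = np.linalg.solve(
        lap[np.ix_(unlabeled, unlabeled)],
        -lap[np.ix_(unlabeled, labeled_idx)] @ f[labeled_idx])
    return f

# path graph 0 - 1 - 2 with endpoints labeled 0 and 1
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
f = laplace_learning(adj, np.array([0.0, 1.0]), np.array([0, 2]))
print(f)  # middle node interpolates to 0.5
```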
no code implementations • 14 Jun 2023 • Changhao Shi, Gal Mishne
A common challenge in applying graph machine learning methods is that the underlying graph of a system is often unknown.
1 code implementation • 7 Jun 2023 • Noga Mudrik, Gal Mishne, Adam S. Charles
Time series data across scientific domains are often collected under distinct states (e.g., tasks), wherein latent processes (e.g., biological factors) create complex inter- and intra-state variability.
1 code implementation • 30 May 2023 • Ya-Wei Eileen Lin, Ronald R. Coifman, Gal Mishne, Ronen Talmon
Finding meaningful representations and distances of hierarchical data is important in many fields.
1 code implementation • 10 Nov 2022 • Ram Dyuthi Sristi, Gal Mishne, Ariel Jaffe
Selecting subsets of features that differentiate between two conditions is a key task in a broad range of scientific domains.
1 code implementation • 7 Nov 2022 • Xinyue Xia, Gal Mishne, Yusu Wang
We also show that our model is suitable for graph representation learning and graph generation.
1 code implementation • 31 Oct 2022 • Gal Mishne, Zhengchao Wan, Yusu Wang, Sheng Yang
Given the exponential growth of the volume of the ball w.r.t.
no code implementations • 20 Oct 2022 • Chester Holtz, Tsui-Wei Weng, Gal Mishne
There has been great interest in enhancing the robustness of neural network classifiers to defend against adversarial perturbations through adversarial training, while balancing the trade-off between robust accuracy and standard accuracy.
no code implementations • 4 Oct 2022 • Chester Holtz, Gal Mishne, Alexander Cloninger
Probabilistic generative models provide a flexible and systematic framework for learning the underlying geometry of data.
no code implementations • 10 Jan 2022 • Hadas Benisty, Alexander Song, Gal Mishne, Adam S. Charles
Functional optical imaging in neuroscience is rapidly growing with the development of new optical systems and fluorescence indicators.
1 code implementation • NeurIPS 2021 • Changhao Shi, Sivan Schwartz, Shahar Levy, Shay Achvat, Maisan Abboud, Amir Ghanayim, Jackie Schiller, Gal Mishne
To understand the relationship between behavior and neural activity, experiments in neuroscience often include an animal performing a repeated behavior such as a motor task.
no code implementations • 29 Sep 2021 • Chester Holtz, Tsui-Wei Weng, Gal Mishne
There has been great interest in enhancing the robustness of neural network classifiers to defend against adversarial perturbations through adversarial training, while balancing the trade-off between robust accuracy and standard accuracy.
1 code implementation • ICLR Workshop GTRL 2021 • Dhruv Kohli, Alexander Cloninger, Gal Mishne
We present Low Distortion Local Eigenmaps (LDLE), a manifold learning technique which constructs a set of low distortion local views of a dataset in lower dimension and registers them to obtain a global embedding.
no code implementations • 23 Jan 2021 • Changhao Shi, Chester Holtz, Gal Mishne
To the best of our knowledge, our paper is the first to generalize the idea of using self-supervised signals to perform online test-time purification.
no code implementations • ICLR 2021 • Changhao Shi, Chester Holtz, Gal Mishne
Deep neural networks are known to be vulnerable to adversarial examples, where a perturbation in the input space leads to an amplified shift in the latent network representation.
no code implementations • 1 Jan 2021 • Chester Holtz, Changhao Shi, Gal Mishne
Recent work has demonstrated that neural networks are vulnerable to small, adversarial perturbations of their input.
no code implementations • 9 Sep 2020 • Ofir Lindenbaum, Amir Sagiv, Gal Mishne, Ronen Talmon
A low-dimensional dynamical system is observed in an experiment as a high-dimensional signal; for example, a video of a chaotic pendulum system.
no code implementations • 30 Jun 2020 • Jay S. Stanley III, Eric C. Chi, Gal Mishne
Graph signal processing (GSP) is an important methodology for studying data residing on irregular structures.
1 code implementation • NeurIPS 2019 • Scott Gigante, Adam S. Charles, Smita Krishnaswamy, Gal Mishne
We demonstrate M-PHATE with two vignettes: continual learning and generalization.
1 code implementation • 25 Oct 2018 • Xiuyuan Cheng, Gal Mishne
The extraction of clusters from a dataset which includes multiple clusters and a significant background component is a non-trivial task of practical importance.
no code implementations • 16 Oct 2018 • Gal Mishne, Eric C. Chi, Ronald R. Coifman
We propose utilizing this coupled structure to perform co-manifold learning: uncovering the underlying geometry of both the rows and the columns of a given matrix, where we focus on a missing data setting.
3 code implementations • 13 Nov 2017 • George C. Linderman, Gal Mishne, Yuval Kluger, Stefan Steinerberger
If we pick $n$ random points uniformly in $[0, 1]^d$ and connect each point to its $k$-nearest neighbors, then it is well known that there exists a giant connected component with high probability.
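The giant-component behavior is easy to observe numerically. The sketch below builds a symmetrized k-NN graph on uniform random points and measures component sizes with a graph traversal; it is an illustrative experiment in that setting, not code from the paper.

```python
import numpy as np

def knn_graph_components(points, k):
    """Symmetrized k-NN graph on the points; return connected-component
    sizes, largest first, via an iterative depth-first search."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)               # no self-neighbors
    nbrs = np.argsort(d2, axis=1)[:, :k]       # indices of k nearest points
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in nbrs[i]:
            adj[i].add(int(j))
            adj[int(j)].add(i)                 # symmetrize the edge
    seen, sizes = set(), []
    for s in range(n):
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        sizes.append(size)
    return sorted(sizes, reverse=True)

rng = np.random.default_rng(0)
pts = rng.uniform(size=(500, 2))
sizes = knn_graph_components(pts, k=10)
print(sizes[0])  # the largest component contains (nearly) all points
```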
1 code implementation • 18 Aug 2017 • Gal Mishne, Ronen Talmon, Israel Cohen, Ronald R. Coifman, Yuval Kluger
Often the data is such that the observations do not reside on a regular grid, and the given order of the features is arbitrary and does not convey a notion of locality.
no code implementations • 5 Jun 2017 • Xiuyuan Cheng, Gal Mishne, Stefan Steinerberger
Let $(M, g)$ be a compact manifold and let $-\Delta \phi_k = \lambda_k \phi_k$ be the sequence of Laplacian eigenfunctions.
no code implementations • 6 Nov 2015 • Gal Mishne, Ronen Talmon, Ron Meir, Jackie Schiller, Uri Dubin, Ronald R. Coifman
In the wake of recent advances in experimental methods in neuroscience, recording in-vivo neuronal activity from awake animals has become feasible.
no code implementations • 25 Jun 2015 • Gal Mishne, Uri Shaham, Alexander Cloninger, Israel Cohen
In this paper, we propose a manifold learning algorithm based on deep learning to create an encoder, which maps a high-dimensional dataset to its low-dimensional embedding, and a decoder, which takes the embedded data back to the high-dimensional space.