no code implementations • 26 Feb 2024 • Ran Eisenberg, Jonathan Svirsky, Ofir Lindenbaum
Fusing information from different modalities can enhance data analysis tasks, including clustering.
no code implementations • 21 Dec 2023 • Ram Dyuthi Sristi, Ofir Lindenbaum, Maria Lavzin, Jackie Schiller, Gal Mishne, Hadas Benisty
We study the problem of contextual feature selection, where the goal is to learn a predictive function while identifying subsets of informative features conditioned on specific contexts.
no code implementations • 20 Dec 2023 • Amit Rozner, Barak Battash, Ofir Lindenbaum, Lior Wolf
We study the problem of performing face verification with an efficient neural model $f$.
no code implementations • 20 Dec 2023 • Erez Peterfreund, Iryna Burak, Ofir Lindenbaum, Jim Gimlett, Felix Dietrich, Ronald R. Coifman, Ioannis G. Kevrekidis
Fusing measurements from multiple heterogeneous, partial sources observing a common object or process poses challenges that grow with the number and variety of available sensors.
no code implementations • 23 Jul 2023 • Guy Zamberg, Moshe Salhov, Ofir Lindenbaum, Amir Averbuch
Tables are an abundant form of data with use cases across all scientific fields.
no code implementations • 7 Jun 2023 • Jonathan Svirsky, Ofir Lindenbaum
Furthermore, we verify that our model leads to interpretable results at a sample and cluster level.
no code implementations • 1 Jun 2023 • Ofek Ophir, Orit Shefi, Ofir Lindenbaum
First, we classify neuronal cell types of mice data to identify excitatory and inhibitory neurons.
no code implementations • 1 Jun 2023 • Amit Rozner, Barak Battash, Henry Li, Lior Wolf, Ofir Lindenbaum
Then, we design a variance stabilized density estimation problem for maximizing the likelihood of the observed samples while minimizing the variance of the density around normal samples.
1 code implementation • 16 Mar 2023 • Junchen Yang, Ofir Lindenbaum, Yuval Kluger, Ariel Jaffe
Multi-modal, high-throughput biological data presents a great scientific opportunity and a significant computational challenge.
no code implementations • 5 Mar 2023 • Barak Battash, Ofir Lindenbaum
Following the central limit theorem, stochastic gradient noise (SGN) was initially modeled as Gaussian; more recently, it has been suggested that SGN is better characterized by an $S\alpha S$ Lévy distribution.
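The $S\alpha S$ family can be sampled with the Chambers–Mallows–Stuck method; below is a minimal sketch (illustrative only, not the paper's analysis). For $\alpha = 2$ the samples reduce to a Gaussian with variance 2, while $\alpha < 2$ gives the heavy-tailed regime.

```python
import numpy as np

def sample_sas(alpha, size, rng):
    """Symmetric alpha-stable samples via Chambers-Mallows-Stuck (alpha != 1)."""
    v = rng.uniform(-np.pi / 2, np.pi / 2, size)  # uniform angle
    w = rng.exponential(1.0, size)                # unit-mean exponential
    return (np.sin(alpha * v) / np.cos(v) ** (1.0 / alpha)
            * (np.cos(v - alpha * v) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(0)
gaussian_like = sample_sas(2.0, 20000, rng)  # alpha=2 recovers N(0, 2)
heavy_tailed = sample_sas(1.2, 20000, rng)   # alpha<2: heavy-tailed noise model
```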
1 code implementation • 31 Jan 2023 • Amit Rozner, Barak Battash, Lior Wolf, Ofir Lindenbaum
This work generalizes the problem of unsupervised domain generalization to the case in which no labeled samples are available (completely unsupervised).
no code implementations • 1 Jan 2023 • Idan Cohen, Ofir Lindenbaum, Sharon Gannot
Classical methods for acoustic scene mapping require the estimation of time difference of arrival (TDOA) between microphones.
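Classical TDOA estimation can be illustrated with a plain cross-correlation peak search. This is a minimal sketch (real systems typically use GCC-PHAT and sub-sample interpolation); the sample rate and delay are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000                      # sample rate in Hz (arbitrary for this toy)
sig = rng.standard_normal(2000)

true_delay = 37                # delay of mic2 relative to mic1, in samples
mic1 = sig
mic2 = np.concatenate([np.zeros(true_delay), sig])[: len(sig)]

# The TDOA estimate is the lag that maximizes the cross-correlation.
corr = np.correlate(mic2, mic1, mode="full")
lag = int(np.argmax(corr)) - (len(mic1) - 1)
tdoa_seconds = lag / fs
```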
1 code implementation • 28 Oct 2022 • Jonathan Svirsky, Ofir Lindenbaum
Our key idea is to model voice activity detection (VAD) as a denoising task, and to construct a network designed to identify nuisance features for a speech classification task.
Ranked #3 on Activity Detection on AVA-Speech (ROC-AUC metric)
no code implementations • 19 Apr 2022 • Jonathan Gradstein, Moshe Salhov, Yoav Tulpan, Ofir Lindenbaum, Amir Averbuch
When presented with a binary classification problem where the data exhibits severe class imbalance, standard predictive methods often fail to model the minority class accurately.
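A simple baseline for severe class imbalance (a generic remedy, not this paper's method) is random oversampling of the minority class until the two classes are balanced:

```python
import random
from collections import Counter

# Toy imbalanced dataset: 95 majority samples, 5 minority samples.
labels = [0] * 95 + [1] * 5
data = list(range(100))

rng = random.Random(0)
minority = [x for x, y in zip(data, labels) if y == 1]
# Resample minority points with replacement until the classes are balanced.
extra = [rng.choice(minority) for _ in range(labels.count(0) - labels.count(1))]
balanced_data = data + extra
balanced_labels = labels + [1] * len(extra)

counts = Counter(balanced_labels)
```

Oversampling duplicates information rather than adding it, which is one reason learned approaches to imbalance remain an active topic.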
no code implementations • 21 Feb 2022 • Bronislav Yasinnik, Moshe Salhov, Ofir Lindenbaum, Amir Averbuch
Learning from imbalanced data is one of the most significant challenges in real-world classification tasks.
1 code implementation • 29 Oct 2021 • Soham Jana, Henry Li, Yutaro Yamada, Ofir Lindenbaum
Consider the problem of simultaneous estimation and support recovery of the coefficient vector in a linear data model with additive Gaussian noise.
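A standard baseline for this setting is the Lasso solved by proximal gradient descent (ISTA). The sketch below uses illustrative dimensions and is not the paper's estimator; it recovers both the coefficients and their support on a toy instance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 20
A = rng.standard_normal((n, p)) / np.sqrt(n)   # design matrix, ~unit-norm columns
beta = np.zeros(p)
beta[[0, 5, 12]] = [3.0, -2.0, 1.5]            # sparse ground-truth coefficients
y = A @ beta + 0.05 * rng.standard_normal(n)   # additive Gaussian noise

# ISTA: gradient step on the least-squares term, then soft thresholding.
lam = 0.02
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
x = np.zeros(p)
for _ in range(500):
    g = x + A.T @ (y - A @ x) / L              # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

support = np.flatnonzero(np.abs(x) > 0.5)      # estimated support
```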
1 code implementation • 11 Oct 2021 • Uri Shaham, Ofir Lindenbaum, Jonathan Svirsky, Yuval Kluger
Experimenting on several real-world datasets, we demonstrate that our proposed approach outperforms similar approaches designed to avoid only correlated or nuisance features, but not both.
no code implementations • 1 Oct 2021 • Ofir Lindenbaum, Yariv Aizenbud, Yuval Kluger
We first present the Robust AutoEncoder (RAE) objective as a minimization problem for splitting the data into inliers and outliers.
no code implementations • ICLR 2022 • Ofir Lindenbaum, Moshe Salhov, Amir Averbuch, Yuval Kluger
We further propose $\ell_0$-Deep CCA for solving the problem of non-linear sparse CCA by modeling the correlated representations using deep nets.
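For background, classical linear CCA (not the paper's sparse or deep variant) can be computed by whitening each view and taking the singular values of the whitened cross-covariance; the data below is a toy two-view construction with one shared latent signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
z = rng.standard_normal(n)                     # shared latent signal
X = np.column_stack([z, rng.standard_normal(n)]) + 0.1 * rng.standard_normal((n, 2))
Y = np.column_stack([-z, rng.standard_normal(n)]) + 0.1 * rng.standard_normal((n, 2))

def canonical_correlations(X, Y, eps=1e-6):
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Sxx = X.T @ X / len(X)
    Syy = Y.T @ Y / len(Y)
    Sxy = X.T @ Y / len(X)

    def inv_sqrt(S):                           # symmetric inverse square root
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w + eps)) @ V.T

    # Singular values of the whitened cross-covariance = canonical correlations.
    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)

rho = canonical_correlations(X, Y)             # first value close to 1
```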
1 code implementation • 11 Jun 2021 • Junchen Yang, Ofir Lindenbaum, Yuval Kluger
By forcing the model to select a subset of the most informative features for each sample, we reduce model overfitting in low-sample-size data and obtain an interpretable model.
no code implementations • 28 Sep 2020 • Ofir Lindenbaum, Moshe Salhov, Amir Averbuch, Yuval Kluger
The proposed procedure learns two non-linear transformations and simultaneously gates the input variables to identify a subset of most correlated variables.
no code implementations • 9 Sep 2020 • Ofir Lindenbaum, Amir Sagiv, Gal Mishne, Ronen Talmon
A low-dimensional dynamical system is observed in an experiment as a high-dimensional signal; for example, a video of a chaotic pendulum system.
1 code implementation • NeurIPS 2021 • Ofir Lindenbaum, Uri Shaham, Jonathan Svirsky, Erez Peterfreund, Yuval Kluger
In this paper, we present a method for unsupervised feature selection, and we demonstrate its use for the task of clustering.
no code implementations • 15 Apr 2020 • Erez Peterfreund, Ofir Lindenbaum, Felix Dietrich, Tom Bertalan, Matan Gavish, Ioannis G. Kevrekidis, Ronald R. Coifman
We propose a deep-learning-based method for obtaining standardized data coordinates from scientific measurements. Data observations are modeled as samples from an unknown, non-linear deformation of an underlying Riemannian manifold, parametrized by a few normalized latent variables.
no code implementations • 27 Feb 2020 • Ariel Jaffe, Yuval Kluger, Ofir Lindenbaum, Jonathan Patsenker, Erez Peterfreund, Stefan Steinerberger
word2vec, introduced by Mikolov et al. (2013), is a word embedding method that is widely used in natural language processing.
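At its core, word2vec is trained on (center, context) pairs extracted with a sliding window over the corpus. A toy sketch of the pair extraction (window size and corpus are arbitrary):

```python
# Extract skip-gram (center, context) training pairs with window size 2.
corpus = "the quick brown fox jumps over the lazy dog".split()
window = 2

pairs = []
for i, center in enumerate(corpus):
    # Every word within `window` positions of the center is a context word.
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            pairs.append((center, corpus[j]))
```

The embedding itself is then learned by predicting context from center (or vice versa) over these pairs.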
1 code implementation • ECCV 2020 • Henry Li, Ofir Lindenbaum, Xiuyuan Cheng, Alexander Cloninger
Variational autoencoders (VAEs) and generative adversarial networks (GANs) enjoy an intuitive connection to manifold learning: in training the decoder/generator is optimized to approximate a homeomorphism between the data distribution and the sampling space.
no code implementations • NeurIPS 2018 • Ofir Lindenbaum, Jay Stanley, Guy Wolf, Smita Krishnaswamy
We propose a new type of generative model for high-dimensional data that learns a manifold geometry of the data, rather than density, and can generate points evenly along this manifold.
1 code implementation • ICML 2020 • Yutaro Yamada, Ofir Lindenbaum, Sahand Negahban, Yuval Kluger
Feature selection problems have been extensively studied for linear estimation, for instance, Lasso, but less emphasis has been placed on feature selection for non-linear functions.
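One popular route to non-linear feature selection is a stochastic-gate relaxation of the $\ell_0$ penalty: each feature gets a clipped-Gaussian gate, and the expected number of open gates, computable through the Gaussian CDF, serves as a differentiable sparsity surrogate. The sketch below uses illustrative parameter names, not the paper's exact notation.

```python
import math
import random

sigma = 0.5
mu = [1.0, 0.0, -1.0]  # learnable gate parameters, one per feature

def gate(mu_d, rng):
    """Sample one gate: z = clip(mu + eps, 0, 1), eps ~ N(0, sigma^2)."""
    return min(1.0, max(0.0, mu_d + rng.gauss(0.0, sigma)))

def expected_open_gates(mu):
    """Differentiable l0 surrogate: sum of P(z_d > 0) = Phi(mu_d / sigma)."""
    phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    return sum(phi(m / sigma) for m in mu)

rng = random.Random(0)
z = [gate(m, rng) for m in mu]   # sampled gates in [0, 1]
reg = expected_open_gates(mu)    # penalized during training
```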
1 code implementation • 14 Feb 2018 • Ofir Lindenbaum, Jay S. Stanley III, Guy Wolf, Smita Krishnaswamy
Then, it generates new points evenly along the manifold by pulling randomly generated points into its intrinsic structure using a diffusion kernel.
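The "pulling" step can be illustrated with a kernel-weighted average that moves a randomly generated point toward the data. This is a toy sketch of the idea, not the paper's diffusion construction; the bandwidth is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 300)
data = np.column_stack([np.cos(theta), np.sin(theta)])  # unit circle "manifold"

def pull(point, data, bandwidth=0.5):
    """One kernel-weighted averaging step that pulls a point toward the data."""
    d2 = ((data - point) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return w @ data / w.sum()

start = np.array([1.5, 0.0])   # a generated point off the manifold
pulled = pull(start, data)     # lands much closer to the unit circle
```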
no code implementations • 4 Jul 2017 • Ofir Lindenbaum, Moshe Salhov, Arie Yeredor, Amir Averbuch
We propose to set a scale parameter that is tailored to one of two types of tasks: classification and manifold learning.
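For comparison, a common generic baseline (not the task-tailored scheme proposed here) is the median heuristic: set the Gaussian-kernel scale to the median pairwise distance of the data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))          # toy data cloud

# Pairwise squared distances via the Gram-matrix identity.
sq = (X ** 2).sum(axis=1)
D2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0)

# Median heuristic: scale = median off-diagonal pairwise distance.
off_diag = D2[~np.eye(len(X), dtype=bool)]
epsilon = np.sqrt(np.median(off_diag))
K = np.exp(-D2 / (2 * epsilon ** 2))       # Gaussian affinity kernel
```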
no code implementations • 6 Jun 2017 • Ofir Lindenbaum, Yuri Bregman, Neta Rabin, Amir Averbuch
The problem of learning from seismic recordings has been studied for years.
no code implementations • 28 Jun 2016 • Moshe Salhov, Ofir Lindenbaum, Yariv Aizenbud, Avi Silberschatz, Yoel Shkolnisky, Amir Averbuch
Data analysis methods aim to uncover the low-dimensional structure imposed by hidden parameters by using distance metrics that treat the set of attributes as a single monolithic set.
no code implementations • 23 Aug 2015 • Ofir Lindenbaum, Arie Yeredor, Moshe Salhov, Amir Averbuch
The multi-view dimensionality reduction is achieved by defining a cross-view model in which an implied random walk process is restrained to hop between objects in the different views.