no code implementations • 20 Dec 2023 • Murad Tukan, Fares Fares, Yotam Grufinkle, Ido Talmor, Loay Mualem, Vladimir Braverman, Dan Feldman
In response to this formidable challenge, we introduce a real-time autonomous indoor exploration system tailored for drones equipped with a monocular \emph{RGB} camera.
1 code implementation • 9 Mar 2023 • Murad Tukan, Samson Zhou, Alaa Maalouf, Daniela Rus, Vladimir Braverman, Dan Feldman
In this paper, we introduce the first algorithm to construct coresets for \emph{RBFNNs}, i.e., small weighted subsets that approximate the loss of the input data on any radial basis function network and thus approximate any function defined by an \emph{RBFNN} on the larger input data.
1 code implementation • 21 Sep 2022 • Alaa Maalouf, Yotam Gurfinkel, Barak Diker, Oren Gal, Daniela Rus, Dan Feldman
We suggest the first system that runs real-time semantic segmentation via deep learning on a weak micro-computer such as the Raspberry Pi Zero v2 (whose price was \$15) attached to a toy-drone.
no code implementations • 8 Mar 2022 • Murad Tukan, Alaa Maalouf, Dan Feldman, Roi Poranne
While this approach is very simple, it can become costly when the obstacles are unknown, since samples hitting these obstacles are wasted.
1 code implementation • 8 Mar 2022 • Murad Tukan, Xuan Wu, Samson Zhou, Vladimir Braverman, Dan Feldman
$(j, k)$-projective clustering is the natural generalization of the family of $k$-clustering and $j$-subspace clustering problems.
no code implementations • 6 Mar 2022 • Alaa Maalouf, Murad Tukan, Eric Price, Daniel Kane, Dan Feldman
The goal (e.g., for anomaly detection) is to approximate the $n$ points received so far in $P$ by a sine of a single frequency, e.g., to compute $\min_{c\in C}\mathrm{cost}(P, c)+\lambda(c)$, where $\mathrm{cost}(P, c)=\sum_{i=1}^n \sin^2\left(\frac{2\pi}{N} p_i c\right)$, $C\subseteq [N]$ is a feasible set of solutions, and $\lambda$ is a given regularization function.
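As a concrete illustration of the objective above, the following minimal Python sketch evaluates $\mathrm{cost}(P,c)$ and minimizes it by brute force over a candidate set $C$ (with $\lambda\equiv 0$). The helper names and toy input are hypothetical, and the brute-force search is only meant to make the formula concrete; the paper's streaming algorithm is far more efficient.

```python
import numpy as np

def sine_cost(P, c, N):
    """cost(P, c) = sum_i sin^2(2*pi*p_i*c / N) for the points p_i in P."""
    return float(np.sum(np.sin(2 * np.pi * P * c / N) ** 2))

def best_frequency(P, N, C=None, lam=lambda c: 0.0):
    """Brute-force argmin_{c in C} cost(P, c) + lambda(c); C defaults to {1, ..., N-1}."""
    C = range(1, N) if C is None else C
    return min(C, key=lambda c: sine_cost(P, c, N) + lam(c))

# hypothetical toy input: these points are exactly "explained" by the frequency c = 5
N = 100
P = np.array([0, 20, 40, 60, 80])
print(best_frequency(P, N))   # -> 5, since sin(2*pi*p*5/100) = 0 for every p in P
```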
no code implementations • 5 Mar 2022 • Ibrahim Jubran, Fares Fares, Yuval Alfassi, Firas Ayoub, Dan Feldman
The Perspective-n-Point problem aims to estimate the relative pose between a calibrated monocular camera and a known 3D model, by aligning pairs of 2D captured image points to their corresponding 3D points in the model.
no code implementations • 4 Nov 2021 • Alaa Maalouf, Gilad Eini, Ben Mussay, Dan Feldman, Margarita Osadchy
Our approach offers a new definition of coreset, which is a natural relaxation of the standard definition and aims at approximating the \emph{average} loss of the original data over the queries.
no code implementations • 4 Nov 2021 • Alaa Maalouf, Ibrahim Jubran, Dan Feldman
The survey may help guide new researchers unfamiliar with the field, and introduce them to the very basic foundations of coresets, through a simple, yet fundamental, problem.
1 code implementation • NeurIPS 2021 • Ibrahim Jubran, Ernesto Evgeniy Sanches Shayda, Ilan Newman, Dan Feldman
Its regression or classification loss with respect to a given matrix $D$ of $N$ entries (labels) is the sum of squared differences between every label in $D$ and the label assigned to it by $t$.
2 code implementations • NeurIPS 2021 • Lucas Liebenwein, Alaa Maalouf, Oren Gal, Dan Feldman, Daniela Rus
We present a novel global compression framework for deep neural networks that automatically analyzes each layer to identify the optimal per-layer compression ratio, while simultaneously achieving the desired overall compression.
no code implementations • 6 Apr 2021 • Cenk Baykal, Lucas Liebenwein, Dan Feldman, Daniela Rus
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training (i.e., active learning).
no code implementations • 10 Jan 2021 • Ibrahim Jubran, Alaa Maalouf, Ron Kimmel, Dan Feldman
A harder version is the \emph{registration problem}, where the correspondence is unknown, and the minimum is also over all possible correspondence functions from $P$ to $Q$.
no code implementations • ICCV 2021 • Ibrahim Jubran, Alaa Maalouf, Ron Kimmel, Dan Feldman
A harder version is the registration problem, where the correspondence is unknown, and the minimum is also over all possible correspondence functions from P to Q. Algorithms such as the Iterative Closest Point (ICP) and its variants were suggested for these problems, but none yield a provable non-trivial approximation for the global optimum.
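For context, a minimal ICP baseline of the kind referenced above looks as follows: it alternates brute-force nearest-neighbor matching with the Kabsch closed-form alignment and, as the abstract notes, converges only to a local optimum. This numpy sketch uses illustrative function names and is not the paper's provable algorithm.

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rotation R and translation t minimizing sum ||R p_i + t - q_i||^2."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # reflection fix
    R = Vt.T @ np.diag([1] * (P.shape[1] - 1) + [d]) @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=20):
    """Plain ICP: alternate nearest-neighbor correspondence and Kabsch alignment."""
    R, t = np.eye(P.shape[1]), np.zeros(P.shape[1])
    for _ in range(iters):
        moved = P @ R.T + t
        # brute-force nearest neighbor in Q for every moved point (the unknown correspondence)
        idx = np.argmin(((moved[:, None, :] - Q[None, :, :]) ** 2).sum(-1), axis=1)
        R, t = kabsch(P, Q[idx])
    return R, t
```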
no code implementations • 18 Nov 2020 • Dan Feldman
In optimization or machine learning problems we are given a set of items, usually points in some metric space, and the goal is to minimize or maximize an objective function over some space of candidate solutions.
no code implementations • ICLR 2021 • Alaa Maalouf, Harry Lang, Daniela Rus, Dan Feldman
Based on this approach, we provide a novel architecture that replaces the original embedding layer by a set of $k$ small layers that operate in parallel and are then recombined with a single fully-connected layer.
no code implementations • 11 Sep 2020 • Murad Tukan, Alaa Maalouf, Matan Weksler, Dan Feldman
Here, $d$ is the number of neurons in the layer, $n$ is the number of neurons in the next one, and $A_{k, 2}$ can be stored in $O((n+d)k)$ memory instead of $O(nd)$.
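The memory claim can be illustrated with a generic truncated-SVD factorization of a dense layer's $n\times d$ weight matrix; the paper's construction of $A_{k,2}$ differs (it is coreset-based), so the sketch below only demonstrates the $O((n+d)k)$ storage and inference cost, with made-up sizes.

```python
import numpy as np

def factorize_layer(A, k):
    """Rank-k factorization A ~ U @ V of an n x d weight matrix.
    U is n x k and V is k x d, so storage drops from n*d to (n + d)*k floats."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k]               # absorb singular values into U

n, d, k = 512, 256, 16
A = np.random.randn(n, d)
U, V = factorize_layer(A, k)
x = np.random.randn(d)
y_full, y_lowrank = A @ x, U @ (V @ x)            # the layer now costs (n + d)*k mults
print(np.linalg.norm(y_full - y_lowrank) / np.linalg.norm(y_full))
```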
no code implementations • 9 Jun 2020 • Alaa Maalouf, Ibrahim Jubran, Murad Tukan, Dan Feldman
PAC-learning usually aims to compute a small subset ($\varepsilon$-sample/net) from $n$ items, that provably approximates a given loss function for every query (model, classifier, hypothesis) from a given set of queries, up to an additive error $\varepsilon\in(0, 1)$.
no code implementations • NeurIPS 2020 • Murad Tukan, Alaa Maalouf, Dan Feldman
A coreset is usually a small weighted subset of the $n$ input points in $\mathbb{R}^d$ that provably approximates their loss function for a given set of queries (models, classifiers, etc.).
no code implementations • ICML 2020 • Ibrahim Jubran, Murad Tukan, Alaa Maalouf, Dan Feldman
The input to the \emph{sets-$k$-means} problem is an integer $k\geq 1$ and a set $\mathcal{P}=\{P_1,\cdots, P_n\}$ of sets in $\mathbb{R}^d$.
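To make the input concrete, here is a toy sketch with made-up data, under one plausible reading of the objective (each set pays the squared distance from its closest point to its closest center); the precise cost function is defined in the paper, so this is an assumption, not a quotation.

```python
import numpy as np

def sets_kmeans_cost(sets, centers):
    """Assumed objective: cost = sum_i min_{p in P_i, c in centers} ||p - c||^2."""
    return sum(min(float(np.sum((p - c) ** 2)) for p in P for c in centers) for P in sets)

# toy input: k = 2 centers and n = 3 sets of points in R^2
sets = [np.array([[0.0, 0.0], [5.0, 5.0]]),
        np.array([[0.1, 0.2]]),
        np.array([[4.9, 5.1], [10.0, 10.0]])]
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
print(sets_kmeans_cost(sets, centers))
```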
no code implementations • 15 Feb 2020 • Murad Tukan, Cenk Baykal, Dan Feldman, Daniela Rus
A coreset is a small, representative subset of the original data points such that models trained on the coreset are provably competitive with those trained on the original data set.
2 code implementations • ICLR 2020 • Lucas Liebenwein, Cenk Baykal, Harry Lang, Dan Feldman, Daniela Rus
We present a provable, sampling-based approach for generating compact Convolutional Neural Networks (CNNs) by identifying and removing redundant filters from an over-parameterized network.
no code implementations • 19 Oct 2019 • Ibrahim Jubran, Alaa Maalouf, Dan Feldman
A coreset (or core-set) of an input set is a small summary of it, such that solving a problem on the coreset provably yields the same result as solving the same problem on the original (full) set, for a given family of problems (models, classifiers, loss functions).
2 code implementations • 11 Oct 2019 • Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus
We introduce a pruning algorithm that provably sparsifies the parameters of a trained model in a way that approximately preserves the model's predictive accuracy.
no code implementations • ICLR 2020 • Ben Mussay, Margarita Osadchy, Vladimir Braverman, Samson Zhou, Dan Feldman
We propose the first efficient, data-independent neural pruning algorithm with a provable trade-off between its compression rate and the approximation error for any future test sample.
no code implementations • 2 Jul 2019 • Alaa Maalouf, Adiel Statman, Dan Feldman
With high probability, non-uniform sampling based on upper bounds on what is known as importance or sensitivity of each row in $A$ yields a coreset.
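As a hedged illustration of this recipe, the numpy sketch below performs non-uniform row sampling for least squares using leverage scores as the sensitivity upper bounds, then reweights the sampled rows so the subsampled problem approximates the full one; the specific bounds and guarantees in the paper differ from this generic construction.

```python
import numpy as np

def leverage_score_coreset(A, b, m, rng=np.random.default_rng(0)):
    """Sample m rows of (A, b) with probability proportional to their leverage
    scores (a standard sensitivity upper bound for least squares) and reweight."""
    Q, _ = np.linalg.qr(A)                      # orthonormal basis for A's columns
    s = (Q ** 2).sum(axis=1)                    # leverage score of each row
    p = s / s.sum()
    idx = rng.choice(len(A), size=m, p=p)
    w = 1.0 / (m * p[idx])                      # importance weights keep the estimate unbiased
    sw = np.sqrt(w)[:, None]
    return A[idx] * sw, b[idx] * sw.ravel()

A = np.random.randn(10_000, 5)
b = A @ np.ones(5) + 0.1 * np.random.randn(10_000)
A_c, b_c = leverage_score_coreset(A, b, m=200)
x_full = np.linalg.lstsq(A, b, rcond=None)[0]
x_core = np.linalg.lstsq(A_c, b_c, rcond=None)[0]
print(np.linalg.norm(x_full - x_core))          # typically small relative to ||x_full||
```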
no code implementations • 12 Jun 2019 • Dan Feldman, Zahi Kfir, Xuan Wu
For example, for any input set $D$ whose coordinates are integers in $[-n^{100}, n^{100}]$ and any fixed $k, d\geq 1$, the coreset size is $(\log n)^{O(1)}/\varepsilon^2$, and can be computed in time near-linear in $n$, with high probability.
1 code implementation • NeurIPS 2019 • Alaa Maalouf, Ibrahim Jubran, Dan Feldman
Least-mean squares (LMS) solvers such as Linear / Ridge / Lasso-Regression, SVD and Elastic-Net not only solve fundamental machine learning problems, but are also the building blocks in a variety of other methods, such as decision trees and matrix factorizations.
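One quick way to see why a small weighted summary can suffice for such solvers: ridge regression touches the data only through the $d\times d$ statistics $A^\top A$ and $A^\top b$, so any weighted subset that preserves them exactly yields the identical solution. The sketch below (plain numpy, illustrative names) shows only this reduction, not the paper's construction of such a subset.

```python
import numpy as np

def ridge_from_sufficient_stats(AtA, Atb, lam):
    """Ridge regression depends on (A, b) only through A^T A and A^T b."""
    return np.linalg.solve(AtA + lam * np.eye(AtA.shape[0]), Atb)

# any weighted subset that preserves A^T A and A^T b gives exactly the same x
A, b = np.random.randn(100_000, 8), np.random.randn(100_000)
x = ridge_from_sufficient_stats(A.T @ A, A.T @ b, lam=1.0)
```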
no code implementations • 27 Feb 2019 • Ibrahim Jubran, David Cohn, Dan Feldman
The $\ell_p$ linear regression problem is to minimize $f(x)=||Ax-b||_p$ over $x\in\mathbb{R}^d$, where $A\in\mathbb{R}^{n\times d}$, $b\in \mathbb{R}^n$, and $p>0$.
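A generic numerical baseline for this objective (not the paper's provable algorithm) is to warm-start from the $p=2$ solution and run a derivative-free optimizer; the sketch below assumes a small $d$ and uses SciPy's Nelder-Mead.

```python
import numpy as np
from scipy.optimize import minimize

def lp_regression(A, b, p):
    """Minimize f(x) = ||Ax - b||_p over x in R^d (numerical baseline, no guarantees)."""
    f = lambda x: np.sum(np.abs(A @ x - b) ** p) ** (1.0 / p)
    x0 = np.linalg.lstsq(A, b, rcond=None)[0]     # warm start from the p = 2 solution
    return minimize(f, x0, method="Nelder-Mead").x

A = np.random.randn(200, 3)
b = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * np.random.standard_cauchy(200)
print(lp_regression(A, b, p=1))                   # the l_1 fit is robust to heavy-tailed noise
```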
no code implementations • 2 Jan 2019 • Eitan Netzer, Alex Frid, Dan Feldman
We suggest an algorithm that maintains such a coreset representation tailored to the EEG signal, which enables: (i) real-time and continuous computation of the Common Spatial Pattern (CSP) feature-extraction method on a coreset representation of the signal (instead of on the signal itself), (ii) improvement of the CSP algorithm's efficiency, with provable guarantees, by applying the CSP algorithm to the coreset, and (iii) real-time addition of new data trials (EEG data windows) to the coreset.
no code implementations • 23 Jul 2018 • Ibrahim Jubran, Dan Feldman
This problem is non-trivial even if $z=1$ and the matching $\pi$ is given.
no code implementations • ICLR 2019 • Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus
We present an efficient coresets-based neural network compression algorithm that sparsifies the parameters of a trained fully-connected neural network in a manner that provably approximates the network's output.
no code implementations • 21 Feb 2018 • Elad Tolochinsky, Ibrahim Jubran, Dan Feldman
A coreset (or core-set) is a small weighted \emph{subset} $Q$ of an input set $P$ with respect to a given \emph{monotonic} function $f:\mathbb{R}\to\mathbb{R}$ that \emph{provably} approximates its fitting loss $\sum_{p\in P}f(p\cdot x)$ for \emph{any} given $x\in\mathbb{R}^d$.
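The definition can be checked empirically: the sketch below compares the weighted fitting loss of a candidate subset against the full loss over random queries, using the softplus as the monotone $f$ and a naive uniform sample as a stand-in for the paper's construction (which, unlike this check, guarantees the error for \emph{every} $x$).

```python
import numpy as np

def fitting_loss(P, x, w=None, f=lambda t: np.log1p(np.exp(t))):
    """sum_p w_p * f(p . x) for a monotone f (here the softplus)."""
    w = np.ones(len(P)) if w is None else w
    return float(w @ f(P @ x))

def empirical_epsilon(P, Q, w, queries):
    """Worst relative error of the weighted subset (Q, w) over the sampled queries."""
    return max(abs(fitting_loss(Q, x, w) - fitting_loss(P, x)) / fitting_loss(P, x)
               for x in queries)

P = np.random.randn(5000, 4)
idx = np.random.choice(5000, 500, replace=False)   # uniform sample as a naive stand-in
Q, w = P[idx], np.full(500, 5000 / 500)
print(empirical_epsilon(P, Q, w, np.random.randn(50, 4)))
```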
no code implementations • ICLR 2018 • Cenk Baykal, Murad Tukan, Dan Feldman, Daniela Rus
Support Vector Machines (SVMs) are one of the most popular algorithms for classification and regression analysis.
no code implementations • ICML 2017 • Dan Feldman, Sedat Ozer, Daniela Rus
We provide a deterministic data summarization algorithm that approximates the mean $\bar{p}=\frac{1}{n}\sum_{p\in P} p$ of a set $P$ of $n$ vectors in $\mathbb{R}^d$ by a weighted mean $\tilde{p}$ of a \emph{subset} of $O(1/\varepsilon)$ vectors, i.e., a subset whose size is independent of both $n$ and $d$.
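A well-known sketch of the same flavor is Frank-Wolfe applied to the mean: after $t$ iterations the iterate is a convex combination of at most $t+1$ input vectors and its squared distance to $\bar{p}$ is $O(1/t)$ up to the diameter of $P$. The code below is this generic greedy, not necessarily the paper's deterministic algorithm or its exact guarantee.

```python
import numpy as np

def sparse_mean(P, iters):
    """Frank-Wolfe on min_{q in conv(P)} ||q - mean||^2: returns a weighted
    combination of at most iters+1 input vectors approximating the mean."""
    mean = P.mean(axis=0)
    q, weights = P[0].copy(), {0: 1.0}
    for t in range(1, iters + 1):
        i = int(np.argmin(P @ (q - mean)))       # best direction toward the mean
        gamma = 2.0 / (t + 2)
        q = (1 - gamma) * q + gamma * P[i]
        weights = {j: (1 - gamma) * w for j, w in weights.items()}
        weights[i] = weights.get(i, 0.0) + gamma
    return q, weights

P = np.random.randn(100_000, 50)
q, w = sparse_mean(P, iters=64)                   # a handful of points instead of all n
print(np.linalg.norm(q - P.mean(0)), len(w))
```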
no code implementations • 23 Mar 2017 • Mario Lucic, Matthew Faulkner, Andreas Krause, Dan Feldman
In this work we show how to construct coresets for mixtures of Gaussians.
no code implementations • NeurIPS 2016 • Dan Feldman, Mikhail Volkov, Daniela Rus
An open practical problem has been to compute a non-trivial approximation to the PCA of very large but sparse databases such as the Wikipedia document-term matrix in a reasonable time.
no code implementations • 30 Nov 2015 • Soliman Nasser, Ibrahim Jubran, Dan Feldman
By maintaining such a coreset for a kinematic (moving) set of $n$ points, we can run pose-estimation algorithms, such as Kabsch or PnP, on the small coreset instead of on the $n$ points, in real time using weak devices, while obtaining the same results.
no code implementations • NeurIPS 2014 • Guy Rosman, Mikhail Volkov, Dan Feldman, John W. Fisher III, Daniela Rus
We consider the problem of computing an optimal segmentation of such signals by a $k$-piecewise linear function, using only one pass over the data, by maintaining a coreset for the signal.
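To make "optimal segmentation by a $k$-piecewise linear function" concrete, the sketch below is an exact but slow offline dynamic-programming baseline (no streaming, no coreset); the function names and toy signal are illustrative, and the paper's contribution is achieving this in a single pass over the data.

```python
import numpy as np

def segment_cost(y):
    """cost[i, j] = squared error of the best single-line fit to y[i..j]."""
    n, t = len(y), np.arange(len(y), dtype=float)
    cost = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            X = np.vstack([t[i:j + 1], np.ones(j - i + 1)]).T
            _, res, *_ = np.linalg.lstsq(X, y[i:j + 1], rcond=None)
            cost[i, j] = res[0] if res.size else 0.0
    return cost

def k_piecewise_linear_error(y, k):
    """Bellman-style DP: optimal k-piecewise-linear fitting error of y (offline)."""
    n, cost = len(y), segment_cost(y)
    dp = np.full((k + 1, n), np.inf)
    dp[1] = cost[0]                               # one segment covering y[0..j]
    for seg in range(2, k + 1):
        for j in range(1, n):
            dp[seg, j] = min(dp[seg - 1, i] + cost[i + 1, j] for i in range(j))
    return dp[k, n - 1]

y = np.concatenate([np.linspace(0, 1, 30), np.linspace(1, -1, 30)]) + 0.01 * np.random.randn(60)
print(k_piecewise_linear_error(y, k=2))           # small, since y really has 2 linear pieces
```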
no code implementations • NeurIPS 2011 • Dan Feldman, Matthew Faulkner, Andreas Krause
In this paper, we show how to construct coresets for mixtures of Gaussians and natural generalizations.
no code implementations • 7 Jun 2011 • Dan Feldman, Michael Langberg
In the $k$-clustering variant, each $x\in X$ is a tuple of $k$ shapes, and $f(x)$ is the distance from $p$ to its closest shape in $x$.